Re: [openstack-dev] [horizon] static files handling, bower/

2015-01-12 Thread Matthias Runge
On 12/01/15 21:53, Drew Fisher wrote:

> I know I'm very very late to this thread but can I ask why Bower?  Bower
> has a hard requirement on Node.js which was removed as a dependency in
> Havana.  Why are we reintroducing this requirement?
> 
> For Solaris, a requirement on Node.js is especially problematic as there
> is no official SPARC port and I'm not aware of anybody else working on one.
> 
> I agree that XStatic isn't really the best solution here but are there
> any other solutions that don't involve Node.js?
> 
The same is true for ARM-based machines, as node.js is, AFAIK, x86-only.

But, as far as I understand, node.js will become a development
requirement (and most probably a requirement for testing), but not for
deployment.

Bower is just another package manager, comparable to npm, pip etc., if
you use it alongside your system's package manager.

The idea is to use something like dpkg or rpm to provide dependencies
for installation. During development and testing, it's proposed to rely
on bower to install dependencies.
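
As a concrete illustration of that split, here is a rough sketch of what a
project's bower.json might look like (package names and versions here are
illustrative, not Horizon's actual dependency list):

```json
{
  "name": "example-dashboard",
  "dependencies": {
    "jquery": "1.11.1",
    "angular": "1.3.7"
  }
}
```

During development, `bower install` would fetch these; a deployment package
would instead declare the equivalent dpkg/rpm dependencies.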

Matthias


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Dropping Python-2.6 support

2015-01-12 Thread Bartłomiej Piotrowski

On 01/12/2015 03:55 PM, Roman Prykhodchenko wrote:

Folks,

as planned and then announced at the OpenStack summit, OpenStack services have
deprecated Python-2.6 support. At the moment several services and libraries are
already compatible only with Python>=2.7, and there is little sense in trying
to restore compatibility with Py2.6 because OpenStack infra does not run tests
for that version of Python.

The point of this email is that some components of Fuel, say, Nailgun and Fuel
Client, are still only tested with Python-2.6. Fuel Client, in its turn, is
about to use OpenStack CI's python-jobs for running unit tests. That means that
in order to keep it compatible with Py2.6 there is a need to run a separate
python job in Fuel CI.

However, I believe that forcing things to stay compatible with 2.6 when the
rest of the ecosystem has decided to move on, and when Py2.7 is already
available in the main CentOS repo, sounds like a battle against common sense.
So my proposal is to drop 2.6 support in Fuel-6.1.


While I come from lands where being bleeding edge is preferred, I
ask myself (as a non-programmer) one thing: what does 2.7 provide that you
cannot easily achieve in 2.6?
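
For concreteness, a few things 2.7 added that 2.6 lacks (an illustrative
sketch, not an exhaustive list):

```python
from collections import Counter

# Dict and set comprehension syntax is new in 2.7; on 2.6 you had to
# wrap a generator expression in dict()/set().
squares = {n: n * n for n in range(4)}
evens = {n for n in range(10) if n % 2 == 0}

# Auto-numbered format fields ("{}") raise ValueError on 2.6.
msg = "{} of {}".format(1, 2)

# collections.Counter (and OrderedDict) were added in 2.7.
counts = Counter("abracadabra")
```

Whether any of this is worth a separate CI job is of course the question
being debated above.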


Regards,
Bartłomiej



[openstack-dev] [gantt] Scheduler sub-group meeting agenda 1/13

2015-01-12 Thread Dugger, Donald D
Meeting on #openstack-meeting at 1500 UTC (8:00AM MST)


1)  Remove direct nova DB/API access by Scheduler Filters - 
https://review.openstack.org/138444/

2)  Status on cleanup work - https://wiki.openstack.org/wiki/Gantt/kilo

3)  Topics for mid-cycle meetup

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786



[openstack-dev] [Nova] Requesting exception for add separated policy rule for each v2.1 api

2015-01-12 Thread Alex Xu
https://review.openstack.org/#/c/127863/

This spec is part of the Nova REST API policy improvement. Those
improvements have already received general agreement, as shown in this
full-view devref: https://review.openstack.org/#/c/138270/

This spec is just for the Nova REST API v2.1, so I really hope it can be done
before v2.1 is released; then we needn't think about the upgrade impact for
deployers. Let's finish this simple task while it's still simple.

Thanks
Alex


Re: [openstack-dev] [nova] request spec freeze exception for Attach/Detach SR-IOV interface

2015-01-12 Thread Alex Xu
2015-01-13 13:57 GMT+08:00 少合冯 :

> Hello,
>
> I'd like to request an exception for the Attach/Detach SR-IOV interface
> feature. [1]
> This is an important feature that aims to provide better performance than a
> normal network interface in guests, and it is not too hard to implement.
>
> Thanks,
> Shao He, Feng
>
> [1] https://review.openstack.org/#/c/139910/
>


Oops, after I clicked the link it forwarded to a wrong page, but I can open
it by copying the text https://review.openstack.org/#/c/139910/ into a
web browser directly. :)





Re: [openstack-dev] [nova] request spec freeze exception for Attach/Detach SR-IOV interface

2015-01-12 Thread 少合冯
2015-01-13 13:57 GMT+08:00 少合冯 :

> Hello,
>
> I'd like to request an exception for the Attach/Detach SR-IOV interface
> feature. [1]
> This is an important feature that aims to provide better performance than a
> normal network interface in guests, and it is not too hard to implement.
>
> Thanks,
> Shao He, Feng
>
> [1] https://review.openstack.org/#/c/139910/
> 
>


Sorry, the above link is wrong

This is the right one.
[1] https://review.openstack.org/#/c/139910/
Thanks.


[openstack-dev] [nova] request spec freeze exception for Attach/Detach SR-IOV interface

2015-01-12 Thread 少合冯
Hello,

I'd like to request an exception for the Attach/Detach SR-IOV interface
feature. [1]
This is an important feature that aims to provide better performance than a
normal network interface in guests, and it is not too hard to implement.

Thanks,
Shao He, Feng

[1] https://review.openstack.org/#/c/139910/




[openstack-dev] Requesting Exception/Review for Compute-Capabilities spec

2015-01-12 Thread Nisha Agarwal
Hi,

Ironic needs this feature from Nova to implement firmware settings.
The code for it has also been proposed.

Spec link: https://review.openstack.org/133534
Code link: https://review.openstack.org/141010

Regards
Nisha
-- 
The Secret Of Success is learning how to use pain and pleasure, instead
of having pain and pleasure use you. If You do that you are in control
of your life. If you don't life controls you.


[openstack-dev] [nova] Requesting exception for JSON-Home spec

2015-01-12 Thread Ken'ichi Ohmichi
Hi,

This spec[1] is for adding a JSON-Home feature to the Nova v2.1 API.
This feature will provide API resource information in a
standard way that has already been implemented in Keystone.
I hope this feature will encourage people to use the v2.1 API in production
environments.

I created a prototype[2] for this feature, and I have found it is
not difficult to implement.
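
For readers unfamiliar with JSON-Home, here is a rough sketch of the kind of
document it defines, per the json-home draft format; the relation names and
paths are illustrative, not taken from the actual Nova proposal:

```json
{
  "resources": {
    "rel/server_list": {
      "href": "/v2.1/servers"
    },
    "rel/server": {
      "href-template": "/v2.1/servers/{server_id}",
      "href-vars": {
        "server_id": "param/server_id"
      }
    }
  }
}
```

A client fetches this once and discovers the available resources and their
URI templates, instead of hard-coding each path.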

Thanks
Ken'ichi Ohmichi

---
[1]: https://review.openstack.org/#/c/130715/
[2]: https://review.openstack.org/#/c/145100/



[openstack-dev] sqlalchemy-migrate 0.9.4 released

2015-01-12 Thread Matt Riedemann
0.9.2 was blocked because of a change that broke unit tests in some 
projects; that is fixed in 0.9.4.


What happened to 0.9.3? Problems, don't ask - fixed in 0.9.4 (thanks 
mordred).


Changes:

mriedem@ubuntu:~/git/sqlalchemy-migrate$ git log --no-merges --oneline 
0.9.2..0.9.4

b011e6c Remove svn version tag setting
938757e Ignore transaction management statements in SQL scripts
74553f4 Use native sqlalchemy 0.9 quote attribute with ibmdb2
244c6c5 Don't add warnings filter on import
30f6aea pep8: mark all pep8 checks that currently fail as ignored
7bb74f7 Fix ibmdb2 unique constraint handling for sqlalchemy 0.9

Of special note is 244c6c5 which should remove a ton of the 
DeprecationWarnings that show up in unit test runs for other projects, 
like Nova.
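
The underlying issue is generic: a library that installs a warnings filter at
import time overrides the application's own configuration. A minimal sketch of
the behavior that change avoids (illustrative, not sqlalchemy-migrate's actual
code):

```python
import warnings

def legacy_call():
    # Stand-in for a deprecated library entry point.
    warnings.warn("old API", DeprecationWarning)

with warnings.catch_warnings(record=True) as caught:
    # The application opts in to seeing every warning...
    warnings.simplefilter("always")
    legacy_call()
    # ...which only works because no library has clobbered the filter
    # list at import time with its own simplefilter()/filterwarnings().
```

If a library resets the filters on import, test suites downstream either drown
in warnings or silently stop seeing them, which is why 244c6c5 matters.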


Also thanks to clarkb for helping me do my first release, you were so 
gentle. :)


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [api] API Definition Formats

2015-01-12 Thread Ian Cordasco
On 1/12/15, 17:21, "Chris Dent"  wrote:

>On Mon, 12 Jan 2015, Ian Cordasco wrote:
>
>> This worked extremely well in my experience and helped improve
>>development
>> time for new endpoints and new endpoint versions. The documentation was
>> also heavily used for the multiple internal clients for that API.
>
>This idea of definition formats seems like a reasonable idea (see my
>response to Anne over on the gabbi thread[1]) but I worry about a few
>things:
>
>* Unless you're auto generating the code from the formal definition you
>   run into a lot of opportunities for truth to get out of sync between
>   the definition and the implementation.

The /documentation/ was used by /developers/ to build the internal
clients. It was also used by the front-end developers who built the
user-facing interface that consumed these APIs.

>* Ugh, auto generated code. Magic. Ew. This is Python by golly!

I’m not suggesting auto-generated code (although that’s always a
*possibility*).

>* Specifying every single endpoint or many endpoints is just about as
>   anti-REST as you can get if you're a HATEOAS believer. I suspect
>   this line of concern is well-trod ground and not worth bringing back
>   up, but all this stuff about versioning is meh and death to client
>   diversity.

Except that we don’t even try to achieve HATEOAS (or at least the
OpenStack APIs I’ve seen don’t). If we’re being practical about it, then
the idea that we have a contract between the API consumer (also read:
user) and the server makes for a drastic simplification. The fact that the
documentation is auto-generated means that writing tests with gabbi would
be so much simpler for you (than waiting for people familiar with it to
help you).

>* Yes to this:
>> The problem with building something like this /might/ be tying it in to
>> the different frameworks used by each of the services but on the whole
>> could be delegated to each service as it looks to integrate.
>
>All that said, what you describe in the following would be nice if it
>can be made true and work well. I suspect I'm still scarred from WSDL
>and company but I'm not optimistic that culturally it can be made to
>work. Simple HTTP APIs wins over SOAP and pragmatic HTTP wins over true
>REST and JSON wins over XML because all of the latter have a flavor of
>flexibility and easy to diddle that does not exist in the former. The
>problem is social, not technical.

Well I’ve only seen it used with JSON, so I’m not sure where you got XML
from (or SOAP for that matter). Besides, this is a tool that will help the
API developers more than it will hurt them. In-tree definitions in a
(fairly) human readable format that clearly states what is accepted and
generated by an endpoint means that scrutinizing Pecan and WSME isn’t
necessary (until you start writing the endpoint itself).
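
For illustration, a minimal sketch of what such an in-tree endpoint definition
could look like; the schema below is invented for this example, not any
particular tool's format:

```yaml
# Hypothetical endpoint definition; field names are illustrative.
endpoint: /v2/widgets/{widget_id}
methods:
  GET:
    description: Fetch a single widget
    responses:
      200:
        body:
          type: object
          properties:
            id: {type: string}
            name: {type: string}
      404:
        description: No such widget
```

The point is that a reviewer (or a doc generator, or a test harness) can read
the contract without tracing through the framework code.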

>
>> From my personal perspective, YAML is a nice way to document all of this
>> data, especially since it’s a format that most any language can parse.
>>We
>> used these endpoint definitions to simplify how we wrote clients for the
>>API
>> we were developing and I suspect we could do something similar with the
>> existing clients. It would also definitely help any new clients that
>> people are currently writing. The biggest win for us would be having our
>> documentation mostly auto-generated for us and having a whole suite of
>> tests that would check that a real response matches the schema. If it
>> doesn’t, we know the schema needs to be updated and then the docs would
>>be
>> automatically updated as a consequence. It’s a nice way of enforcing
>>that
>> the response changes are documented as they’re changed.
>
>[1] 
>http://lists.openstack.org/pipermail/openstack-dev/2015-January/054287.htm
>l
>
>-- 
>Chris Dent tw:@anticdent freenode:cdent
>https://tank.peermore.com/tanks/cdent



[openstack-dev] SR-IOV IRC meeting on Jan, 13th Canceled

2015-01-12 Thread Robert Li (baoli)
Hi,

I’m canceling the meeting since I’m traveling this week.

Regards,
Robert


Re: [openstack-dev] [Heat] Where to keep data about stack breakpoints?

2015-01-12 Thread Ton Ngo
I was also thinking of using the environment to hold the breakpoint,
similar to parameters.  The CLI and API would process it just like
parameters.

   As for the state of a stack hitting the breakpoint, leveraging the
FAILED state seems to be sufficient; we just need to add enough information
to differentiate between a failed resource and a resource at a breakpoint.
Something like emitting an event or a message should be enough to make that
distinction.  Debuggers for native programs typically do the same thing,
leveraging the OS's exception handling by inserting an artificial error at
the breakpoint to force the program to stop.  The debugger then just
remembers the addresses of these artificial errors to decode the state of
the stopped program.
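
A minimal sketch of that "artificial error" idea in Python (class and function
names here are illustrative, not Heat's actual code):

```python
class ResourceFailure(Exception):
    """A real failure during resource creation."""

class BreakpointReached(ResourceFailure):
    """The artificial error planted at a breakpoint.

    Because it subclasses the normal failure, the existing FAILED
    machinery handles it; because it is a distinct type, the engine
    (or an emitted event) can tell a breakpoint apart from a genuine
    error.
    """

def create_resource(name, breakpoints):
    # Stop at the breakpoint by raising the sentinel; otherwise proceed.
    if name in breakpoints:
        raise BreakpointReached(name)
    return "%s: CREATE_COMPLETE" % name
```

The same dispatch that marks a resource FAILED can then check the exception
type to decide whether to report "failed" or "at breakpoint".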

   As for the workflow, instead of spinning in the scheduler waiting for a
signal, I was thinking of moving the stack off the engine as a failed
stack. So this would be an end-state for the stack as Steve suggested, but
without adding a new stack state.  Again, this is similar to how a program
being debugged is handled:  it is moved off the ready queue and its
context is preserved for examination.  This seems to keep the
implementation simple, and we don't have to worry about timeouts,
performance, etc.  Continuing from the breakpoint should then be similar to
stack-update on a failed stack.  We do need some additional handling, such
as allowing in-progress resources to run to completion instead of aborting.

For the parallel paths in a template, I am thinking about these
alternatives:
1. Stop after all the current in-progress resources complete, but do not
start any new resources even if there is no dependency.  This should be
easier to implement, but the state of the stack would be non-deterministic.
2. Stop only the paths with the breakpoint, and continue all other parallel
paths to completion.  This seems harder to implement, but the stack would
be in a deterministic state and easier for the user to reason about.

   To be compatible with convergence, I had earlier suggested to Clint
adding a mode where the convergence engine does not attempt to retry, so
the user can debug, and I believe this was added to the blueprint.

Ton,




From:   Steven Hardy 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   01/12/2015 02:40 PM
Subject:Re: [openstack-dev] [Heat] Where to keep data about stack
breakpoints?



On Mon, Jan 12, 2015 at 05:10:47PM -0500, Zane Bitter wrote:
> On 12/01/15 13:05, Steven Hardy wrote:
> >>>I also had a chat with Steve Hardy and he suggested adding a STOPPED state
> >>>to the stack (this isn't in the spec). While not strictly necessary to
> >>>implement the spec, this would help people figure out that the stack has
> >>>reached a breakpoint instead of just waiting on a resource that takes a
> >>>long time to finish (the heat-engine log and event-list still show that a
> >>>breakpoint was reached but I'd like to have it in stack-list and
> >>>resource-list, too).
> >>>
> >>>It makes more sense to me to call it PAUSED (we're not completely stopping
> >>>the stack creation after all, just pausing it for a bit), I'll let Steve
> >>>explain why that's not the right choice :-).
> >So, I've not got strong opinions on the name, it's more the workflow:
> >
> >1. User triggers a stack create/update
> >2. Heat walks the graph, hits a breakpoint and stops.
> >3. Heat explicitly triggers continuation of the create/update
>
> Did you mean the user rather than Heat for (3)?

Oops, yes I did.

> >My argument is that (3) is always a stack update, either a PUT or PATCH
> >update, e.g. we _are_ completely stopping stack creation, then a user can
> >choose to re-start it (either with the same or a different definition).
>
> Hmmm, ok that's interesting. I have not been thinking of it that way. I've
> always thought of it like this:
>
> http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/adding-lifecycle-hooks.html
>
> (Incidentally, this suggests an implementation where the lifecycle hook is
> actually a resource - with its own API, naturally.)
>
> So, if it's requested, before each operation we send out a notification
> (hopefully via Zaqar), and if a breakpoint is set that operation is not
> carried out until the user makes an API call acknowledging it.

I guess I was trying to keep it initially simpler than that, given that we
don't have any integration with a heat-user messaging system at present.

> >So, it _is_ really an end state, as a user might never choose to update
> >from the stopped state, in which case *_STOPPED makes more sense.
>
> That makes a bit more sense now.
>
> I think this is going to be really hard to implement though. Because while
> one branch of the graph stops, other branches have to continue as far as
> they can. At what point do you change the state of the stack?

True, this is a disadvantage of specifying a single breakpoint when there
may be parallel branches in the graph.

[openstack-dev] [devstack] Devstack plugins and gate testing

2015-01-12 Thread Ian Wienand
Hi,

With [1] merged, we now have people working on creating external
plugins for devstack.
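
For context, the mechanism from [1] is an enable_plugin line in local.conf;
the plugin name and repository URL below are illustrative:

```
[[local|localrc]]
# enable_plugin <name> <git-url> [branch]
enable_plugin example https://git.example.org/example-plugin master
```

stack.sh clones the given repository and sources its plugin hooks, which is
exactly why the hosting location of that repository matters for gate jobs.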

I worry about use of arbitrary external locations as plugins for gate
jobs.  If a plugin is hosted externally (github, bitbucket, etc) we
are introducing a whole host of problems when it is used as a gate
job.  Lack of CI testing for proposed changes, uptime of the remote
end, ability to accept contributions, lack of administrative access
and consequent ability to recover from bad merges are a few.

I would propose we agree that plugins used for gate testing should be
hosted in stackforge unless there are very compelling reasons
otherwise.

To that end, I've proposed [2] as some concrete wording.  If we agree,
I could add some sort of lint for this to project-config testing.

Thanks,

-i

[1] https://review.openstack.org/#/c/142805/ (Implement devstack external 
plugins)
[2] https://review.openstack.org/#/c/146679/ (Document use of plugins for gate 
jobs)



Re: [openstack-dev] [api] API Definition Formats

2015-01-12 Thread Chris Dent

On Mon, 12 Jan 2015, Ian Cordasco wrote:


This worked extremely well in my experience and helped improve development
time for new endpoints and new endpoint versions. The documentation was
also heavily used for the multiple internal clients for that API.


This idea of definition formats seems like a reasonable idea (see my
response to Anne over on the gabbi thread[1]) but I worry about a few
things:

* Unless you're auto generating the code from the formal definition you
  run into a lot of opportunities for truth to get out of sync between
  the definition and the implementation.

* Ugh, auto generated code. Magic. Ew. This is Python by golly!

* Specifying every single endpoint or many endpoints is just about as
  anti-REST as you can get if you're a HATEOAS believer. I suspect
  this line of concern is well-trod ground and not worth bringing back
  up, but all this stuff about versioning is meh and death to client
  diversity.

* Yes to this:

The problem with building something like this /might/ be tying it in to
the different frameworks used by each of the services but on the whole
could be delegated to each service as it looks to integrate.


All that said, what you describe in the following would be nice if it
can be made true and work well. I suspect I'm still scarred from WSDL
and company but I'm not optimistic that culturally it can be made to
work. Simple HTTP APIs wins over SOAP and pragmatic HTTP wins over true
REST and JSON wins over XML because all of the latter have a flavor of
flexibility and easy to diddle that does not exist in the former. The
problem is social, not technical.


From my personal perspective, YAML is a nice way to document all of this
data, especially since it’s a format that most any language can parse. We
used these endpoint definitions to simplify how we wrote clients for the API
we were developing and I suspect we could do something similar with the
existing clients. It would also definitely help any new clients that
people are currently writing. The biggest win for us would be having our
documentation mostly auto-generated for us and having a whole suite of
tests that would check that a real response matches the schema. If it
doesn’t, we know the schema needs to be updated and then the docs would be
automatically updated as a consequence. It’s a nice way of enforcing that
the response changes are documented as they’re changed.


[1] http://lists.openstack.org/pipermail/openstack-dev/2015-January/054287.html

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent


Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Chris Dent

On Tue, 13 Jan 2015, Boris Pavlovic wrote:


Having a separate engine seems like a good idea. It will really simplify
stuff


I'm not certain that's the case, but it may be worth exploration.


This seems like a huge duplication of effort. I mean, operators will write
their own tools and developers their own... Why not just solve the more
common problem: "Does it work or not?"


Because no one tool can solve all problems well. I think it is far
better to have lots of small tools that are fairly focused on doing a
one or a few small jobs well.

It may be that there are pieces of gabbi which can be reused or
extracted into more general libraries. If so, that's fantastic. But
I think it is very important to try to solve one problem at a time
rather than everything at once.


$ python -m subunit.run discover gabbi |subunit-trace
[...]
gabbi.driver.test_intercept_self_inheritance_of_defaults.test_request
[0.027512s] ... ok
[...]



What is "test_request"? Just one REST API call?


That long dotted name is the name of a single, dynamically created TestCase
(some metaclass mumbo-jumbo magic is used to turn the YAML into TestCase
classes), and within that TestCase is one single HTTP request and the
evaluation of its response. It directly corresponds to a test named
"inheritance of defaults" in a file called self.yaml. self.yaml is in a
directory containing other YAML files, all of which are loaded by a Python
file named test_intercept.py.
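
The "mumbo jumbo" is roughly this pattern: build TestCase subclasses at load
time with type(), one per declarative entry (a simplified sketch, not gabbi's
actual implementation):

```python
import unittest

# One declarative entry, as it might come out of the YAML.
entries = [{"name": "inheritance of defaults", "status": 200}]

def make_case(entry):
    def test_request(self):
        # gabbi would issue the HTTP request here and evaluate the
        # response; this sketch just checks the expected status directly.
        self.assertEqual(200, entry["status"])
    # Derive a class name like the dotted names seen in the test output.
    class_name = "Test" + entry["name"].title().replace(" ", "")
    return type(class_name, (unittest.TestCase,),
                {"test_request": test_request})

cases = [make_case(e) for e in entries]
```

Each generated class then behaves like any hand-written TestCase as far as
the runner, subunit, and testrepository are concerned.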


Btw, the thing I am interested in is how they are all combined?


As I said before: each YAML file is an ordered sequence of tests, each
one representing a single HTTP request. Fixtures are per YAML file.
There is no cleanup phase outside of the fixtures. Each fixture is
expected to do its own cleanup, if required.
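
That fixture shape is just the standard context-manager protocol; a minimal
sketch (class name and attribute are illustrative, not gabbi's API):

```python
class PerFileFixture(object):
    """Set up shared state for one YAML file's tests, then tear it down."""

    def __init__(self):
        self.active = False

    def __enter__(self):
        # e.g. start a service or create a database for this file's tests
        self.active = True
        return self

    def __exit__(self, exc_type, exc, tb):
        # cleanup happens here even if a test in the file errored out
        self.active = False
        return False  # never swallow exceptions
```

Because teardown lives in __exit__, a failing test still leaves the
environment clean for the next YAML file.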


And where are you doing cleanup? (like if you would like to test only
creation of resource?)


In the ceilometer integration that is currently being built, the
test_gabbi.py[1] file configures itself to use a mongodb database that
is unique for this process. The test harness is responsible for
starting mongodb. In a concurrency situation, each process will
have a different database in the same mongo server. When the test run
is done, mongo is shut down and the databases are removed.

In other words, the environment surrounding gabbi is responsible for
doing the things it is good at, and gabbi does the HTTP tests. A long
running test cannot necessarily depend on what else might be in the
datastore used by the API. It needs to test that which it knows about.

I hope that clarifies things a bit.

[1] https://review.openstack.org/#/c/146187/2/ceilometer/gabbi/test_gabbi.py,cm

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Boris Pavlovic
Sean,


So I'd say let's focus on that problem right now, and get some traction
> on this as part of functional test suites in OpenStack. Genericizing it
> too much just turns this back into a version of every other full stack
> testing tool, which we know isn't sufficient for having quality
> components in OpenStack.


Please be more specific about which tools were evaluated?
It would be nice to see an overview: at least which tools were evaluated
and why they can't be used for in-tree testing.


Best regards,
Boris Pavlovic



On Tue, Jan 13, 2015 at 1:37 AM, Anne Gentle  wrote:

>
>
> On Mon, Jan 12, 2015 at 1:20 PM, Chris Dent  wrote:
>
>>
>> After some discussion with Sean Dague and a few others it became
>> clear that it would be a good idea to introduce a new tool I've been
>> working on to the list to get a sense of its usefulness generally,
>> work towards getting it into global requirements, and get the
>> documentation fleshed out so that people can actually figure out how
>> to use it well.
>>
>> tl;dr: Help me make this interesting tool useful to you and your
>> HTTP testing by reading this message and following some of the links
>> and asking any questions that come up.
>>
>> The tool is called gabbi
>>
>> https://github.com/cdent/gabbi
>> http://gabbi.readthedocs.org/
>> https://pypi.python.org/pypi/gabbi
>>
>> It describes itself as a tool for running HTTP tests where requests
>> and responses are represented in a declarative form. Its main
>> purpose is to allow testing of APIs where the focus of test writing
>> (and reading!) is on the HTTP requests and responses, not on a bunch of
>> Python (that obscures the HTTP).
>>
>>
> Hi Chris,
>
> I'm interested, sure. What did you use to write the HTTP tests, as in,
> what was the source of truth for what the requests and responses should be?
>
> Thanks,
> Anne
>
>
>> The tests are written in YAML and the simplest test file has this form:
>>
>> ```
>> tests:
>> - name: a test
>>   url: /
>> ```
>>
>> This test will pass if the response status code is '200'.
>>
>> The test file is loaded by a small amount of python code which transforms
>> the file into an ordered sequence of TestCases in a TestSuite[1].
>>
>> ```
>> def load_tests(loader, tests, pattern):
>> """Provide a TestSuite to the discovery process."""
>> test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
>> return driver.build_tests(test_dir, loader, host=None,
>>   intercept=SimpleWsgi,
>>   fixture_module=sys.modules[__name__])
>> ```
>>
>> The loader provides either:
>>
>> * a host to which real over-the-network requests are made
>> * a WSGI app which is wsgi-intercept-ed[2]
>>
>> If an individual TestCase is asked to be run by the testrunner, those
>> tests
>> that are prior to it in the same file are run first, as prerequisites.
>>
>> Each test file can declare a sequence of nested fixtures to be loaded
>> from a configured (in the loader) module. Fixtures are context managers
>> (they establish the fixture upon __enter__ and destroy it upon
>> __exit__).
>>
>> With a proper group_regex setting in .testr.conf each YAML file can
>> run in its own process in a concurrent test runner.
>>
>> The docs contain information on the format of the test files:
>>
>> http://gabbi.readthedocs.org/en/latest/format.html
>>
>> Each test can state request headers and bodies and evaluate both response
>> headers and response bodies. Request bodies can be strings in the
>> YAML, files read from disk, or JSON created from YAML structures.
>> Response verification can use JSONPath[3] to inspect the details of
>> response bodies. Response header validation may use regular
>> expressions.
>>
>> There is limited support for referring to the previous request
>> to construct URIs, potentially allowing traversal of a full HATEOAS
>> compliant API.
>>
>> At the moment the most complete examples of how things work are:
>>
>> * Ceilometer's pending use of gabbi:
>>   https://review.openstack.org/#/c/146187/
>> * Gabbi's testing of gabbi:
>>   https://github.com/cdent/gabbi/tree/master/gabbi/gabbits_intercept
>>   (the loader and faked WSGI app for those yaml files is in:
>>   https://github.com/cdent/gabbi/blob/master/gabbi/test_intercept.py)
>>
>> One obvious thing that will need to happen is a suite of concrete
>> examples on how to use the various features. I'm hoping that
>> feedback will help drive that.
>>
>> In my own experimentation with gabbi I've found it very useful. It's
>> helped me explore and learn the ceilometer API in a way that existing
>> test code has completely failed to do. It's also helped reveal
>> several warts that will be very useful to fix. And it is fast. To
>> run and to write. I hope that with some work it can be useful to you
>> too.
>>
>> Thanks.
>>
>> [1] Getting gabbi to play well with PyUnit style tests and
>> with infrastructure like subunit and testrepository was one of
>> 

Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Chris Dent

On Mon, 12 Jan 2015, Anne Gentle wrote:


I'm interested, sure. What did you use to write the HTTP tests, as in, what
was the source of truth for what the requests and responses should be?


That is an _extremely_ good question and one I really struggled with
as I started integrating gabbi with ceilometer.

Initially I thought "I'll just use the API docs[1] as the source of
truth" but I found they were a bit incomplete on some of the nuances,
so I asked around for other sources of truth, but got little in the
way of response.

So then I tried to use the API controller code but, not to put too fine a
point on it, the combination of WSME and Pecan makes for utterly
inscrutable code if you're interested in the actual structure of the
HTTP requests and responses and the URIs being used.

So then I tried to use the existing api unit tests and was able to
extract a bit there, but it wasn't smooth sailing.

So finally what I did was decide that I would do the work in phases
and with collaborators: I'd get the initial framework in place and
then impose upon those more familiar with the API than I to do
subsequent dependent patchsets that cover the API more completely.

I have to admit that the concept of API truth is part of the reason I
wanted to create this kind of testing. None of the resources I could
find in the ceilometer code tree gave any clear overview that mapped
URIs to methods, allowing easy discovery of how the code works. I
wanted to find some kind of map[2]. Gabbi itself doesn't solve this
problem (there's no map between URI and python method) but it can at
least show the API, there in the code. It's a step in the right
direction.

I know that there are discussions in progress about formalizing APIs
with tools like RAML (for example the thread Ian just extended[3]). I
think these have their place, especially for declaring truth, but they
aren't necessarily good learning aids for new developers or good
assistants for enabling and maintaining transparency.

[1] I started at: http://docs.openstack.org/developer/ceilometer/webapi/v2.html
but I think I should have used: 
http://developer.openstack.org/api-ref-telemetry-v2.html

[2] https://github.com/tiddlyweb/tiddlyweb/blob/master/tiddlyweb/urls.map

[3] http://lists.openstack.org/pipermail/openstack-dev/2015-January/054153.html
--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Where to keep data about stack breakpoints?

2015-01-12 Thread Steven Hardy
On Mon, Jan 12, 2015 at 05:10:47PM -0500, Zane Bitter wrote:
> On 12/01/15 13:05, Steven Hardy wrote:
> >>>I also had a chat with Steve Hardy and he suggested adding a STOPPED state
> >>>to the stack (this isn't in the spec). While not strictly necessary to
> >>>implement the spec, this would help people figure out that the stack has
> >>>reached a breakpoint instead of just waiting on a resource that takes a 
> >>>long
> >>>time to finish (the heat-engine log and event-list still show that a
> >>>breakpoint was reached but I'd like to have it in stack-list and
> >>>resource-list, too).
> >>>
> >>>It makes more sense to me to call it PAUSED (we're not completely stopping
> >>>the stack creation after all, just pausing it for a bit), I'll let Steve
> >>>explain why that's not the right choice :-).
> >So, I've not got strong opinions on the name, it's more the workflow:
> >
> >1. User triggers a stack create/update
> >2. Heat walks the graph, hits a breakpoint and stops.
> >3. Heat explicitly triggers continuation of the create/update
> 
> Did you mean the user rather than Heat for (3)?

Oops, yes I did.

> >My argument is that (3) is always a stack update, either a PUT or PATCH
> >update, e.g. we _are_ completely stopping stack creation, then a user can
> >choose to re-start it (either with the same or a different definition).
> 
> Hmmm, ok that's interesting. I have not been thinking of it that way. I've
> always thought of it like this:
> 
> http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/adding-lifecycle-hooks.html
> 
> (Incidentally, this suggests an implementation where the lifecycle hook is
> actually a resource - with its own API, naturally.)
> 
> So, if it's requested, before each operation we send out a notification
> (hopefully via Zaqar), and if a breakpoint is set that operation is not
> carried out until the user makes an API call acknowledging it.

I guess I was trying to keep it initially simpler than that, given that we
don't have any integration with a heat-user messaging system at present.

> >So, it _is_ really an end state, as a user might never choose to update
> >from the stopped state, in which case *_STOPPED makes more sense.
> 
> That makes a bit more sense now.
> 
> I think this is going to be really hard to implement though. Because while
> one branch of the graph stops, other branches have to continue as far as
> they can. At what point do you change the state of the stack?

True, this is a disadvantage of specifying a single breakpoint when there
may be parallel paths through the graph.

However, I was thinking we could just reuse our existing error path
implementation, so it needn't be hard to implement at all, e.g.

1. Stack action started where a resource has a breakpoint set
2. Stack.stack_task.resource_action checks if resource is a breakpoint
3. If a breakpoint is set, we raise a exception.ResourceFailure subclass
4. The normal error_wait_time is respected, e.g currently in-progress
actions are given a chance to complete.

Basically, the only implementation would be raising a special new type of
exception, which would enable a suitable message (and event) to be shown to
the user "Stack create aborted due to breakpoint on resource foo".

Pre/post breakpoint actions/messaging could be added later via a similar
method to the stack-level lifecycle plugin hooks.

If folks are happy with e.g CREATE_FAILED as a post-breakpoint state, this
could simplify things a lot, as we'd not need any new state or much new
code at all?
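A minimal sketch of that error path (hypothetical, simplified code; BreakpointReached and resource_action here are illustrative stand-ins, not actual Heat code):

```python
# Sketch of the error-path approach described above; ResourceFailure
# stands in for heat's real resource failure exception.
class ResourceFailure(Exception):
    """Stand-in for heat's resource failure exception."""


class BreakpointReached(ResourceFailure):
    """Raised when a resource with a breakpoint set is reached."""

    def __init__(self, resource_name):
        super(BreakpointReached, self).__init__(
            "Stack create aborted due to breakpoint on resource %s"
            % resource_name)


def resource_action(resource_name, breakpoints, do_action):
    # Steps 2/3: check for a breakpoint before acting; raising reuses
    # the normal error path, so error_wait_time is respected and the
    # stack ends up in a *_FAILED (or new *_STOPPED) state.
    if resource_name in breakpoints:
        raise BreakpointReached(resource_name)
    return do_action()
```

The point of the sketch is that no new scheduler machinery is needed: the breakpoint is just a new exception type flowing down the existing failure path.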

> >Paused implies the same action as the PATCH update, only we trigger
> >continuation of the operation from the point we reached via some sort of
> >user signal.
> >
> >If we actually pause an in-progress action via the scheduler, we'd have to
> >start worrying about stuff like token expiry, hitting timeouts, resilience
> >to engine restarts, etc, etc.  So forcing an explicit update seems simpler
> >to me.
> 
> Yes, token expiry and stack timeouts are annoying things we'd have to deal
> with. (Resilience to engine restarts is not affected though.) However, I'm
> not sure your model is simpler, and in particular it sounds much harder to
> implement in the convergence architecture.

So you're advocating keeping the scheduler spinning, until a user sends a
signal to the resource to clear the breakpoint?

I don't see why we couldn't do both, have a "abort_on_breakpoint" flag or
something, but I'd be interested in further understanding how the
error-path approach outlined above would be incompatible with convergence.

Thanks,

Steve



Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Anne Gentle
On Mon, Jan 12, 2015 at 1:20 PM, Chris Dent  wrote:

>
> After some discussion with Sean Dague and a few others it became
> clear that it would be a good idea to introduce a new tool I've been
> working on to the list to get a sense of its usefulness generally,
> work towards getting it into global requirements, and get the
> documentation fleshed out so that people can actually figure out how
> to use it well.
>
> tl;dr: Help me make this interesting tool useful to you and your
> HTTP testing by reading this message and following some of the links
> and asking any questions that come up.
>
> The tool is called gabbi
>
> https://github.com/cdent/gabbi
> http://gabbi.readthedocs.org/
> https://pypi.python.org/pypi/gabbi
>
> It describes itself as a tool for running HTTP tests where requests
> and responses are represented in a declarative form. Its main
> purpose is to allow testing of APIs where the focus of test writing
> (and reading!) is on the HTTP requests and responses, not on a bunch of
> Python (that obscures the HTTP).
>
>
Hi Chris,

I'm interested, sure. What did you use to write the HTTP tests, as in, what
was the source of truth for what the requests and responses should be?

Thanks,
Anne


> The tests are written in YAML and the simplest test file has this form:
>
> ```
> tests:
> - name: a test
>   url: /
> ```
>
> This test will pass if the response status code is '200'.
>
> The test file is loaded by a small amount of python code which transforms
> the file into an ordered sequence of TestCases in a TestSuite[1].
>
> ```
> def load_tests(loader, tests, pattern):
> """Provide a TestSuite to the discovery process."""
> test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
> return driver.build_tests(test_dir, loader, host=None,
>   intercept=SimpleWsgi,
>   fixture_module=sys.modules[__name__])
> ```
>
> The loader provides either:
>
> * a host to which real over-the-network requests are made
> * a WSGI app which is wsgi-intercept-ed[2]
>
> If an individual TestCase is asked to be run by the testrunner, those tests
> that are prior to it in the same file are run first, as prerequisites.
>
> Each test file can declare a sequence of nested fixtures to be loaded
> from a configured (in the loader) module. Fixtures are context managers
> (they establish the fixture upon __enter__ and destroy it upon
> __exit__).
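In context-manager terms, a fixture behaves roughly like this (a generic sketch of the contract, not gabbi's actual fixture base class):

```python
import contextlib


# A sketch only: the fixture establishes state on __enter__ and
# destroys it on __exit__, exactly like any context manager.
@contextlib.contextmanager
def sample_fixture():
    state = {"established": True}   # set up on __enter__
    try:
        yield state                 # the tests in the file run here
    finally:
        state["established"] = False  # torn down on __exit__


# Nested fixtures wrap all the tests in a file, outermost first:
with sample_fixture() as outer:
    with sample_fixture() as inner:
        assert outer["established"] and inner["established"]
```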
>
> With a proper group_regex setting in .testr.conf each YAML file can
> run in its own process in a concurrent test runner.
>
> The docs contain information on the format of the test files:
>
> http://gabbi.readthedocs.org/en/latest/format.html
>
> Each test can state request headers and bodies and evaluate both response
> headers and response bodies. Request bodies can be strings in the
> YAML, files read from disk, or JSON created from YAML structures.
> Response verification can use JSONPath[3] to inspect the details of
> response bodies. Response header validation may use regular
> expressions.
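A test exercising these response checks might look like the following sketch (the endpoint and values are invented; the key names are those described in the format documentation linked above):

```yaml
tests:
- name: list resources as json
  url: /resources
  request_headers:
    accept: application/json
  status: 200
  response_headers:
    content-type: /application/json/
  response_json_paths:
    $.resources[0].name: first-resource
```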
>
> There is limited support for referring to the previous request
> to construct URIs, potentially allowing traversal of a full HATEOAS
> compliant API.
>
> At the moment the most complete examples of how things work are:
>
> * Ceilometer's pending use of gabbi:
>   https://review.openstack.org/#/c/146187/
> * Gabbi's testing of gabbi:
>   https://github.com/cdent/gabbi/tree/master/gabbi/gabbits_intercept
>   (the loader and faked WSGI app for those yaml files is in:
>   https://github.com/cdent/gabbi/blob/master/gabbi/test_intercept.py)
>
> One obvious thing that will need to happen is a suite of concrete
> examples on how to use the various features. I'm hoping that
> feedback will help drive that.
>
> In my own experimentation with gabbi I've found it very useful. It's
> helped me explore and learn the ceilometer API in a way that existing
> test code has completely failed to do. It's also helped reveal
> several warts that will be very useful to fix. And it is fast. To
> run and to write. I hope that with some work it can be useful to you
> too.
>
> Thanks.
>
> [1] Getting gabbi to play well with PyUnit style tests and
> with infrastructure like subunit and testrepository was one of
> the most challenging parts of the build, but the result has been
> a lot of flexibility.
>
> [2] https://pypi.python.org/pypi/wsgi_intercept
> [3] https://pypi.python.org/pypi/jsonpath-rw
>
> --
> Chris Dent tw:@anticdent freenode:cdent
> https://tank.peermore.com/tanks/cdent
>

Re: [openstack-dev] [api] API Definition Formats

2015-01-12 Thread Ian Cordasco
On 1/9/15, 15:17, "Everett Toews"  wrote:

>One thing that has come up in the past couple of API WG meetings [1] is
>just how useful a proper API definition would be for the OpenStack
>projects.
>
>By API definition I mean a format like Swagger, RAML, API Blueprint, etc.
>These formats are a machine/human readable way of describing your API.
>Ideally they drive the implementation of both the service and the client,
>rather than treating the format like documentation where it’s produced as
>a by-product of the implementation.
>
>I think this blog post [2] does an excellent job of summarizing the role
>of API definition formats.
>
>Some of the other benefits include validation of requests/responses,
>easier review of API design/changes, more consideration given to client
>design, generating some portion of your client code, generating
>documentation, mock testing, etc.
>
>If you have experience with an API definition format, how has it
>benefitted your prior projects?
>
>Do you think it would benefit your current OpenStack project?
>
>Thanks,
>Everett
>
>[1] https://wiki.openstack.org/wiki/Meetings/API-WG
>[2] 
>http://apievangelist.com/2014/12/21/making-sure-the-most-important-layers-
>of-api-space-stay-open/

Hey Everett,

As we discussed in the meeting, I have some experience with a library
called Interpol [1] and using it in a massive API service. The idea behind
that service was re-written as an open source case study in a project
called Caravan [2].

In short, each and every endpoint used JSON Schema to validate the request
and response for each version of the endpoint. (Yes, endpoints were
versioned individually and that’s a topic for a different discussion.) The
files used by Interpol (which is what applied the defined JSON Schema to
the request/response cycle via Rack middleware) looked something like
https://github.com/bendyworks/caravan/blob/master/lib/endpoint_definitions/
users/user_by_id.yml.

If you read it closely, you’ll notice that path parameters are part of the
schema [3] and status codes are required [4]. Each part of the schema also
has the ability to be described [5]. This allows for Interpol to
automatically document the API for you. Finally, you can define example
responses [6] so you can prop up a stub application for other
services/applications to use. Finally, Interpol has a way of testing the
endpoint definitions (as they’re referred to) to ensure that the example
data actually does follow the schema provided.

As far as I know, there’s nothing similar to Interpol in Python … yet. I’m
fairly confident that the middleware would take a weekend or two of
sprinting to complete. Further, we could allow for more formats than YAML
but I think this could tie in well with the gabbi testing discussion
taking place. The rest might take a bit longer to complete.

In short, using schemas in test and in production allowed the
integration/acceptance tests to remain far more succinct. If you have
something enforcing your request and response formats then you can simply
test that you did get a status code 200 because something else has
validated the contents. If you want to validate that there are items in the
array, you can skip validating the other properties because if there’s at
least one, the objects inside have been validated by the middleware (so
you can assert at least one came back and be confident).

This worked extremely well in my experience and helped improve development
time for new endpoints and new endpoint versions. The documentation was
also heavily used for the multiple internal clients for that API.

The company that used this ran the validation in production (as well as
in testing) and had no problems with scaling or performance.

The problem with building something like this /might/ be tying it in to
the different frameworks used by each of the services, but on the whole
that could be delegated to each service as it looks to integrate.

From my personal perspective, YAML is a nice way to document all of this
data, especially since it’s a format that most any language can parse. We
used these endpoint definitions to simplify how we wrote clients for the API
we were developing and I suspect we could do something similar with the
existing clients. It would also definitely help any new clients that
people are currently writing. The biggest win for us would be having our
documentation mostly auto-generated for us and having a whole suite of
tests that would check that a real response matches the schema. If it
doesn’t, we know the schema needs to be updated and then the docs would be
automatically updated as a consequence. It’s a nice way of enforcing that
the response changes are documented as they’re changed.

Cheers,
Ian

[1] https://github.com/seomoz/interpol
[2] https://github.com/bendyworks/caravan
[3] 
https://github.com/bendyworks/caravan/blob/aa05fb345ad346b85fa989e857478491
2104570b/lib/endpoint_definitions/users/user_by_id.yml#L8..L12
[4] 
https://github.com/bendyworks/caravan/blob/

Re: [openstack-dev] [Fuel] Lack of additional setup on 10Gbit interfaces.

2015-01-12 Thread Dmitry Borodaenko
Hi Piotr,

Thanks for writing up a detailed summary of the problem! At the
moment, we have a way to set the MTU using the Fuel CLI [0] and a blueprint to
add this functionality to the Fuel Web UI [1].

[0] 
http://docs.mirantis.com/openstack/fuel/fuel-6.0/reference-architecture.html#adjust-the-network-configuration-via-cli
[1] https://blueprints.launchpad.net/fuel/+spec/set-mtu-for-interfaces

I'm not sure it's safe to assume that if you have 10G NICs the rest
of your network is going to support jumbo frames. Do you think simply
being able to set the MTU explicitly (when you know for a fact that a
particular MTU value works) would be a good enough solution?


On Mon, Jan 12, 2015 at 1:24 PM, Skamruk, Piotr  wrote:
> Hi.
>
>
>
> I’m testing an OpenStack setup on our hardware with Fuel 6.0 and I found
> a problem with the 10Gbit network interface configuration.
>
> Our setup uses CentOS on the deployed nodes. I didn’t check how this looks
> from the Ubuntu perspective, but judging from the fuel-library, the effect is
> probably the same.
>
>
>
> With default settings, nodes deployed by Fuel have a 2.6.32.xxx Linux kernel,
> with 3.10 available and marked as “experimental”.
>
> In the deployment web UI, network interfaces are correctly shown as
> running at 10Gbit, but the maximal transfer rates we could achieve were
> around 2.5Gbit/s.
>
>
>
> After some investigation I found that interfaces configured by
> /etc/sysconfig/network-scripts/ifcfg-* have the default MTU set, no matter
> whether a particular interface is 10Gbit or not. I did not check how drivers
> other than ixgbe behave, but this particular driver, under such old kernels
> in a 10Gbit configuration, requires the MTU set to at least 9000 (to turn on
> jumbo frames; other drivers probably have a similar requirement) to work properly.
>
>
>
> Manually adding (this is only a simplification; it should be set more
> carefully):
>
>   for f in /etc/sysconfig/network-scripts/ifcfg-* ; do echo "MTU=9000" >>$f
> ; done
>
> partially resolves this problem (partially, because under the default
> 2.6.32.xxx kernel we still do not get better than 6Gbit/s in a single stream;
> the situation is much better under the 3.10 kernel mentioned above, where we
> get the full 10Gbit/s).
>
>
>
> Looking into fuel-library, l23network::l3::ifconfig has the ability to also
> configure the MTU, but this functionality looks unused in this situation.
>
>
>
> An end user who buys a setup with 10Gbit/s 82599-based network adapters expects
> that in the default configuration “all should work as expected”. From the user's
> perspective, the actual situation is faulty.
>
> At the moment, not only must the user select an option marked as “experimental”
> at deploy time, but they must also patch the deployed setup, and remember to
> patch every physical node added in the future in the same way.
>
>
>
> So, what can we do to make the end user happier?
>
> Could we do something like this in the Puppet files:
>
>   if interface_link == ‘10Gbit’ and interface_driver == ‘ixgbe’:
>
> set_mtu(9000)
>
> interface_driver could be read from the link name, from
> /sys/class/net//device/driver/module
>
> interface_link could be read from ethtool  | grep Speed
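The detection Piotr sketches could look roughly like this in Python (a hypothetical sketch; the sysfs paths are standard, and the ixgbe/9000 values come from the message above):

```python
import os


def jumbo_frame_candidates(sys_net="/sys/class/net"):
    """Return interface names that look like 10Gbit ixgbe NICs."""
    candidates = []
    if not os.path.isdir(sys_net):
        return candidates
    for name in os.listdir(sys_net):
        iface = os.path.join(sys_net, name)
        try:
            with open(os.path.join(iface, "speed")) as f:
                speed = int(f.read().strip())
            # The driver module symlink resolves to e.g. .../ixgbe
            driver = os.path.basename(os.path.realpath(
                os.path.join(iface, "device/driver/module")))
        except (IOError, OSError, ValueError):
            # Virtual interfaces and links that are down have no
            # usable speed/driver entries, so skip them.
            continue
        if speed >= 10000 and driver == "ixgbe":
            candidates.append(name)  # would get MTU=9000 in its ifcfg file
    return candidates
```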
>
>
>
> -
> Intel Technology Poland sp. z o.o.
> ul. Słowackiego 173 | 80-298 Gdańsk | Sąd Rejonowy Gdańsk
> Północ | VII Wydział Gospodarczy Krajowego Rejestru Sądowego
> - KRS 101882 | NIP 957-07-52-316 | Kapitał zakładowy 200.000 PLN.
>
> This e-mail and any attachments may contain confidential material for the
> sole use of the intended recipient(s). If you are not the intended
> recipient, please contact the sender and delete all copies; any review or
> distribution by others is strictly prohibited.
>
>



-- 
Dmitry Borodaenko



Re: [openstack-dev] [Cinder] Cutoff deadlines for cinder drivers

2015-01-12 Thread Mike Perez
On 09:03 Mon 12 Jan , Erlon Cruz wrote:
> Hi guys,
> 
> Thanks for answering my questions. I have 2 points:
> 
> 1 - This (removing drivers without CI) is a high-impact change that is being
> implemented without exhaustive notification and discussion on the mailing
> list. I myself was in the meeting but this decision wasn't crystal clear.
> There must be other driver maintainers completely unaware of this.

I agree that the mailing list has not been exhausted, however, just reaching
out to the mailing list is not good enough. My instructions back in November
19th [1][2] were that we need to email individual maintainers and the
openstack-dev list. That was not done. As far as I'm concerned, we can't stick
to the current deadline for existing drivers. I will bring this up in the next
Cinder meeting.

> 2 - Building a CI infrastructure and having people maintain the CI for a
> new driver in a five-week timeframe. Not all companies have the knowledge and
> resources necessary to do this in such a short period. We should consider a
> grace release period, i.e. drivers entering in K have until L to implement
> their CIs.

New driver maintainers have until March 19th. [3] That's around 17 weeks since
we discussed this in November [2]. This is part the documentation for how to
contribute a driver [4], which links to the third party requirement deadline
[3].

[1] - 
http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-11-19-16.00.html
[2] - 
http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-11-19-16.00.log.html#l-34
[3] - https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Deadlines
[4] - https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver

-- 
Mike Perez



Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Sean Dague
On 01/12/2015 05:00 PM, Boris Pavlovic wrote:
> Hi Chris, 
> 
> If there's sufficient motivation and time it might make sense to
> separate the part of gabbi that builds TestCases from the part that
> runs (and evaluates) HTTP requests and responses. If that happens then
> integration with tools like Rally and runners is probably possible.
> 
> 
> 
> Having a separate engine seems like a good idea. It will really simplify
> stuff.
> 
> 
> So, while this is an interesting idea, it's not something that gabbi
> intends to be. It doesn't validate existing clouds. It validates code
> that is used to run clouds.
> Such a thing is probably possible (especially given the fact that you
> can give a "real" host to gabbi tests) but that's not the primary
> goal.
> 
> 
> 
> This seems like a huge duplication of effort. I mean operators will
> write their own
> tools and developers their own... Why not just solve the more common problem:
> "Does it work or not?"

I think it's important to look at this in the narrower context, we're
not testing full environments here, this is custom crafting HTTP req /
resp in a limited context to make sure components are completing a contract.

"Does it work or not?" is so broad a statement as to be meaningless most
of the time. It's important to be able to look at these lower-level
response flows and make sure they both function, and that when they
break, they do so in a debuggable way.

So I'd say let's focus on that problem right now, and get some traction
on this as part of functional test suites in OpenStack. Genericizing it
too much just turns this back into a version of every other full stack
testing tool, which we know isn't sufficient for having quality
components in OpenStack.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [Heat] Where to keep data about stack breakpoints?

2015-01-12 Thread Zane Bitter

On 12/01/15 13:05, Steven Hardy wrote:

>I also had a chat with Steve Hardy and he suggested adding a STOPPED state
>to the stack (this isn't in the spec). While not strictly necessary to
>implement the spec, this would help people figure out that the stack has
>reached a breakpoint instead of just waiting on a resource that takes a long
>time to finish (the heat-engine log and event-list still show that a
>breakpoint was reached but I'd like to have it in stack-list and
>resource-list, too).
>
>It makes more sense to me to call it PAUSED (we're not completely stopping
>the stack creation after all, just pausing it for a bit), I'll let Steve
>explain why that's not the right choice :-).

So, I've not got strong opinions on the name, it's more the workflow:

1. User triggers a stack create/update
2. Heat walks the graph, hits a breakpoint and stops.
3. Heat explicitly triggers continuation of the create/update


Did you mean the user rather than Heat for (3)?


My argument is that (3) is always a stack update, either a PUT or PATCH
update, e.g. we _are_ completely stopping stack creation, then a user can
choose to re-start it (either with the same or a different definition).


Hmmm, ok that's interesting. I have not been thinking of it that way. 
I've always thought of it like this:


http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/adding-lifecycle-hooks.html

(Incidentally, this suggests an implementation where the lifecycle hook 
is actually a resource - with its own API, naturally.)


So, if it's requested, before each operation we send out a notification 
(hopefully via Zaqar), and if a breakpoint is set that operation is not 
carried out until the user makes an API call acknowledging it.



So, it _is_ really an end state, as a user might never choose to update
from the stopped state, in which case *_STOPPED makes more sense.


That makes a bit more sense now.

I think this is going to be really hard to implement though. Because 
while one branch of the graph stops, other branches have to continue as 
far as they can. At what point do you change the state of the stack?



Paused implies the same action as the PATCH update, only we trigger
continuation of the operation from the point we reached via some sort of
user signal.

If we actually pause an in-progress action via the scheduler, we'd have to
start worrying about stuff like token expiry, hitting timeouts, resilience
to engine restarts, etc, etc.  So forcing an explicit update seems simpler
to me.


Yes, token expiry and stack timeouts are annoying things we'd have to 
deal with. (Resilience to engine restarts is not affected though.) 
However, I'm not sure your model is simpler, and in particular it sounds 
much harder to implement in the convergence architecture.


cheers,
Zane.



Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Boris Pavlovic
Hi Chris,

If there's sufficient motivation and time it might make sense to
> separate the part of gabbi that builds TestCases from the part that
> runs (and evaluates) HTTP requests and responses. If that happens then
> integration with tools like Rally and runners is probably possible.



Having a separate engine seems like a good idea. It will really simplify
stuff.


So, while this is an interesting idea, it's not something that gabbi
> intends to be. It doesn't validate existing clouds. It validates code
> that is used to run clouds.
> Such a thing is probably possible (especially given the fact that you
> can give a "real" host to gabbi tests) but that's not the primary
> goal.



This seems like a huge duplication of effort. I mean operators will write
their own
tools and developers their own... Why not just solve the more common problem:
"Does it work or not?"


But if you are concerned about individual test times gabbi makes every
> request an individual TestCase, which means that subunit can record times
> for it. Here's a sample of the output from running gabbi's own gabbi
> tests:
> $ python -m subunit.run discover gabbi |subunit-trace
> [...]
> gabbi.driver.test_intercept_self_inheritance_of_defaults.test_request
> [0.027512s] ... ok
> [...]



What is "test_request"? Just one REST API call?

Btw, the thing I am interested in is how they are all combined?

 -> fixtures.set
-> run first Rest call
-> run second Rest call
...
 -> fixtures.clean

Something like that?

And where are you doing cleanup? (e.g. if you would like to test only the
creation of a resource?)


Best regards,
Boris Pavlovic



On Tue, Jan 13, 2015 at 12:37 AM, Chris Dent  wrote:

> On Tue, 13 Jan 2015, Boris Pavlovic wrote:
>
>  The Idea is brilliant. I may steal it! =)
>>
>
> Feel free.
>
>  But there are some issues that will be faced:
>>
>> 1) Using as a base unittest:
>>
>>  python -m subunit.run discover -f gabbi | subunit2pyunit
>>>
>>
>> So the Rally team won't be able to reuse it for load testing (if we directly
>> integrate it) because we will have a huge overhead (of the discover stuff)
>>
>
> So the use of unittest, subunit and related tools is to allow the
> tests to be integrated with the usual OpenStack testing handling. That
> is, gabbi is primarily oriented towards being a tool for developers to
> drive or validate their work.
>
> However we may feel about subunit, testr etc they are a de facto
> standard. As I said in my message at the top of the thread the vast
> majority of effort made in gabbi was getting it to be "tests" in the
> PyUnit view of the universe. And not just appear to be tests, but each
> request as an individual TestCase discoverable and addressable in the
> PyUnit style.
>
> In any case, can you go into more details about your concerns with
> discovery? In my limited exploration thus far the discovery portion is
> not too heavyweight: reading the YAML files.
>
>  2.3) It makes it hard to integrate with other tools, like Rally.
>>
>
> If there's sufficient motivation and time it might make sense to
> separate the part of gabbi that builds TestCases from the part that
> runs (and evaluates) HTTP requests and responses. If that happens then
> integration with tools like Rally and runners is probably possible.
>
>  3) Usage by Operators is hard in case of N projects.
>>
>
> This is not a use case that I really imagined for gabbi. I didn't want
> to create a tool for everyone, I was after satisfying a narrow part of
> the "in tree functional tests" need that's been discussed for the past
> several months. That narrow part is: legible tests of the HTTP aspects
> of project APIs.
>
>  Operators would like to have 1 button that will say (does cloud work or
>> not). And they don't want to combine all gabbi files from all projects and
>> run test.
>>
>
> So, while this is an interesting idea, it's not something that gabbi
> intends to be. It doesn't validate existing clouds. It validates code
> that is used to run clouds.
>
> Such a thing is probably possible (especially given the fact that you
> can give a "real" host to gabbi tests) but that's not the primary
> goal.
>
>  4) Using subunit format is not good for functional testing.
>>
>> It doesn't allow you to collect detailed information about execution of
>> test. Like for benchmarking it will be quite interesting to collect
>> durations of every API call.
>>
>
> I think we've all got different definitions of functional testing. For
> example, in my own personal definition I'm not too concerned about test
> times: I'm worried about what fails.
>
> But if you are concerned about individual test times gabbi makes every
> request an individual TestCase, which means that subunit can record times
> for it. Here's a sample of the output from running gabbi's own gabbi
> tests:
>
> $ python -m subunit.run discover gabbi |subunit-trace
> [...]
> gabbi.driver.test_intercept_self_inheritance_of_defaults.test_request
> [0.027512s] ... ok
> [...]
>
>
>
> --
> Chris Dent tw:@anticdent

Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Chris Dent

On Tue, 13 Jan 2015, Boris Pavlovic wrote:


The Idea is brilliant. I may steal it! =)


Feel free.


But there are some issues that will be faced:

1) Using as a base unittest:


python -m subunit.run discover -f gabbi | subunit2pyunit


So rally team won't be able to reuse it for load testing (if we directly
integrate it) because we will have huge overhead (of discover stuff)


So the use of unittest, subunit and related tools are to allow the
tests to be integrated with the usual OpenStack testing handling. That
is, gabbi is primarily oriented towards being a tool for developers to
drive or validate their work.

However we may feel about subunit, testr etc they are a de facto
standard. As I said in my message at the top of the thread the vast
majority of effort made in gabbi was getting it to be "tests" in the
PyUnit view of the universe. And not just appear to be tests, but each
request as an individual TestCase discoverable and addressable in the
PyUnit style.

In any case, can you go into more details about your concerns with
discovery? In my limited exploration thus far the discovery portion is
not too heavyweight: reading the YAML files.


2.3) It makes it hard to integrate with other tools, like Rally.


If there's sufficient motivation and time it might make sense to
separate the part of gabbi that builds TestCases from the part that
runs (and evaluates) HTTP requests and responses. If that happens then
integration with tools like Rally and runners is probably possible.


3) Usage by Operators is hard in case of N projects.


This is not a use case that I really imagined for gabbi. I didn't want
to create a tool for everyone, I was after satisfying a narrow part of
the "in tree functional tests" need that's been discussed for the past
several months. That narrow part is: legible tests of the HTTP aspects
of project APIs.


Operators would like to have 1 button that will say (does cloud work or
not). And they don't want to combine all gabbi files from all projects and
run test.


So, while this is an interesting idea, it's not something that gabbi
intends to be. It doesn't validate existing clouds. It validates code
that is used to run clouds.

Such a thing is probably possible (especially given the fact that you
can give a "real" host to gabbi tests) but that's not the primary
goal.


4) Using subunit format is not good for functional testing.

It doesn't allow you to collect detailed information about execution of
test. Like for benchmarking it will be quite interesting to collect
durations of every API call.


I think we've all got different definitions of functional testing. For
example, in my own personal definition I'm not too concerned about test
times: I'm worried about what fails.

But if you are concerned about individual test times gabbi makes every
request an individual TestCase, which means that subunit can record times
for it. Here's a sample of the output from running gabbi's own gabbi
tests:

$ python -m subunit.run discover gabbi |subunit-trace
[...]
gabbi.driver.test_intercept_self_inheritance_of_defaults.test_request 
[0.027512s] ... ok
[...]
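
Elsewhere in the thread Chris notes that with a proper group_regex setting in
.testr.conf each YAML file can run in its own process under a concurrent
runner. A rough sketch of such a configuration follows; the test command,
module path, and regex are illustrative assumptions, not gabbi's exact
shipped configuration:

```ini
# Illustrative .testr.conf: the group_regex keeps every request from one
# YAML file (one generated test module) in the same worker process, so
# prerequisite requests run in order.
[DEFAULT]
test_command=${PYTHON:-python} -m subunit.run discover gabbi $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list
group_regex=gabbi\.driver\.test_gabbi_([^_]+)
```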


--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Where to keep data about stack breakpoints?

2015-01-12 Thread Zane Bitter

On 12/01/15 10:49, Ryan Brown wrote:

On 01/12/2015 10:29 AM, Tomas Sedovic wrote:

Hey folks,

I did a quick proof of concept for a part of the Stack Breakpoint
spec[1] and I put the "does this resource have a breakpoint" flag into
the metadata of the resource:

https://review.openstack.org/#/c/146123/

I'm not sure where this info really belongs, though. It does sound like
metadata to me (plus we don't have to change the database schema that
way), but can we use it for breakpoints etc., too? Or is metadata
strictly for Heat users and not for engine-specific stuff?


I'd rather not store it in metadata so we don't mix user metadata with
implementation-specific-and-also-subject-to-change runtime metadata. I
think this is a big enough feature to warrant a schema update (and I
can't think of another place I'd want to put the breakpoint info).


+1

I'm actually not convinced it should be in the template at all. Steve's
suggestion of putting it in the environment might be a good one, or maybe
it should even just be an extra parameter to the stack create/update
APIs (like e.g. the timeout is)?
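
Purely as a sketch of the environment idea (Heat has no such key today, and
the section and resource names below are invented for illustration only):

```yaml
# Hypothetical environment file - the "breakpoints" section does not
# exist in Heat; it only illustrates keeping breakpoint data out of
# the template itself.
parameters:
  instance_type: m1.small

breakpoints:
  - my_server_resource
  - my_volume_attachment
```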



I also had a chat with Steve Hardy and he suggested adding a STOPPED
state to the stack (this isn't in the spec). While not strictly
necessary to implement the spec, this would help people figure out that
the stack has reached a breakpoint instead of just waiting on a resource
that takes a long time to finish (the heat-engine log and event-list
still show that a breakpoint was reached but I'd like to have it in
stack-list and resource-list, too).

It makes more sense to me to call it PAUSED (we're not completely
stopping the stack creation after all, just pausing it for a bit), I'll
let Steve explain why that's not the right choice :-).


+1 to PAUSED. To me, STOPPED implies an end state (which a breakpoint is
not).


I agree we need an easy way for the user to see why nothing is 
happening, but adding additional states to the stack is a pretty 
dangerous change that risks creating regressions all over the place. If 
we can find _any_ other way to surface the information, it would be 
preferable IMHO.


cheers,
Zane.


For sublime end user confusion, we could use BROKEN. ;)


Tomas

[1]:
http://specs.openstack.org/openstack/heat-specs/specs/juno/stack-breakpoint.html




[openstack-dev] [Fuel] Lack of additional setup on 10Gbit interfaces.

2015-01-12 Thread Skamruk, Piotr
Hi.

I'm testing an OpenStack setup on our hardware with Fuel 6.0 and I found a
problem with the 10Gbit network interface configuration.
Our setup uses CentOS on the deployed nodes - I didn't check how the situation
looks from the Ubuntu perspective, but looking at fuel-library, the effect is
probably the same.

With default settings, nodes deployed by Fuel run a 2.6.32.xxx Linux kernel,
with 3.10 available and marked as "experimental".
In the deployment web UI, network interfaces are correctly shown as running
at 10Gbit, but... the maximum transfer rates we could achieve were around
2.5Gbit/s.

After some investigation I found that the interfaces configured by
/etc/sysconfig/network-scripts/ifcfg-* are left at the default MTU, no matter
whether a particular interface is 10Gbit or not. I did not check how drivers
other than ixgbe behave, but this particular driver, under such old kernels in
a 10Gbit configuration, requires an MTU of at least 9000 (to turn on jumbo
frames - other drivers probably have a similar requirement) to work properly.

Manually adding (this is only a simplification; this should be set more
carefully):
  for f in /etc/sysconfig/network-scripts/ifcfg-* ; do echo "MTU=9000" >>$f ;
done
partially resolves the problem (partially, because under the default 2.6.32.xxx
kernel we still do not get better than 6Gbit/s transfers in a single stream,
but the situation is much better under the 3.10 kernel mentioned above - there
we get the full 10Gbit/s).

Looking into fuel-library, l23network::l3::ifconfig has the ability to also
configure the MTU, but this functionality looks unused in this situation.

An end user who buys a setup with 10Gbit/s 82599-based network adapters expects
that in the default configuration "all should work as expected". From the
user's perspective, the actual situation is faulty.
At the moment, not only must he select an option marked as "experimental" at
deploy time, he must also patch the deployed setup, and remember to patch
every physical node added in the future in the same way.

So, what can we do to make the end user happier?
Could we do something like this in the puppet files:
  if interface_link == '10Gbit' and interface_driver == 'ixgbe':
set_mtu(9000)
interface_driver could be read from the link name, from
/sys/class/net/<iface>/device/driver/module
interface_link could be read from: ethtool <iface> | grep Speed
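
As a hedged sketch of that proposal: the decision itself can be a pure
function, with thin helpers reading the driver and link speed from sysfs.
The sysfs paths mirror those named above and reflect a typical Linux layout;
this is illustrative, not tested fuel-library code.

```python
# Sketch of the proposed check: an interface driven by ixgbe and linked
# at 10 Gbit/s should get MTU 9000 (jumbo frames). All names here are
# illustrative, not part of fuel-library.
import os

SYSFS_NET = '/sys/class/net'

def needs_jumbo_frames(driver, speed_mbps):
    """Pure decision: ixgbe at 10 Gbit/s wants jumbo frames (MTU 9000)."""
    return driver == 'ixgbe' and speed_mbps == 10000

def interface_driver(iface, sysfs=SYSFS_NET):
    """Resolve the kernel module name behind an interface, or None.

    /sys/class/net/<iface>/device/driver/module is a symlink to
    /sys/module/<name>, so the basename is the module name.
    """
    link = os.path.join(sysfs, iface, 'device', 'driver', 'module')
    try:
        return os.path.basename(os.readlink(link))
    except OSError:
        return None

def interface_speed_mbps(iface, sysfs=SYSFS_NET):
    """Read the link speed in Mb/s from sysfs, or None when unavailable."""
    try:
        with open(os.path.join(sysfs, iface, 'speed')) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return None
```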



Intel Technology Poland sp. z o.o.
ul. Slowackiego 173 | 80-298 Gdansk | Sad Rejonowy Gdansk Polnoc | VII Wydzial 
Gospodarczy Krajowego Rejestru Sadowego - KRS 101882 | NIP 957-07-52-316 | 
Kapital zakladowy 200.000 PLN.

This e-mail and any attachments may contain confidential material for the sole 
use of the intended recipient(s). If you are not the intended recipient, please 
contact the sender and delete all copies; any review or distribution by
others is strictly prohibited.


[openstack-dev] [tc][python-clients] More freedom for all python clients

2015-01-12 Thread Boris Pavlovic
Hello TC,

I would like to propose allowing all python clients from stackforge
(that comply with global-requirements) to be added to global requirements.

It doesn't cost anything and it simplifies life for everybody on stackforge.

P.S. We already have billions of libs in global requirements that aren't even
on stackforge.
Having a few more or less doesn't make any difference...


Best regards,
Boris Pavlovic


Re: [openstack-dev] [all] Hacking 0.10 released

2015-01-12 Thread Ben Nemec
Just a heads up for anyone else making these changes: Even though the
g-r entry is >=0.10, you need to do >=0.10.0 in the project for it to
pass the requirements check.
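
Concretely, the per-project bump looks something like the following in
test-requirements.txt (the old bounds shown are typical examples, not any
specific project's):

```diff
-hacking>=0.9.2,<0.10
+hacking>=0.10.0,<0.11
```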

On 01/10/2015 07:15 PM, Joe Gordon wrote:
> Hi all,
> 
> I am happy to announce the release of hacking 0.10. Below is a list of
> whats new. Unlike most dependencies hacking changes are not automatically
> pushed out by the OpenStack Proposal Bot. In order to migrate to the new
> release each project will need a patch like this:
> https://review.openstack.org/#/c/145570/
> 
> 
>- flake8 now uses multiprocessing by default!
>- Remove H402: first line of docstring should end with punctuation
>- Remove H904: Wrap long lines in parentheses and not backslash for line
>continuation
>- Update H501, don't use locals() for formatting strings. to also check
>for self.__dict__
>- Add H105: don't use author tags
>- Add H238: check for old style class declarations
>- Remove all git commit message rules: H801, H802, H803
>- Remove complex import rules: H302, H306, H307
> 
> 
> Dependency changes:
> 
>- pep8 from 1.5.6 to 1.5.7 (https://pypi.python.org/pypi/pep8)
>- flake8 from 2.1.0 to 2.2.4 (https://pypi.python.org/pypi/flake8)
>- six from >= 1.60 to >=1.7.0
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 




Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Chris Dent

On Mon, 12 Jan 2015, Gregory Haynes wrote:


Awesome! I was discussing trying to add extensions to RAML[1] so we
could do something like this the other day. Is there any reason you
didn't use an existing modeling language like this?


Glad you like it.

I chose to go with my own model in the YAML for a few different
reasons:

* I had some pre-existing code[1] that had worked well (but was
  considerably less featureful[2]) so I used that as a starting point.

* I wanted to model HTTP requests and responses _not_ APIs. RAML looks
  pretty interesting but it abstracts at a slightly different level
  for a considerably different purpose. To use it in the context I was
  working towards would require ignoring a lot of the syntax and (as
  far as a superficial read goes) adding a fair bit more.

* I wanted small, simple and clean but [2] came along so now it is
  like most languages: small, simple and clean if you try to make it
  that way, noisy if you let things get out of hand.

[1]
https://github.com/tiddlyweb/tiddlyweb/blob/master/test/http_runner.py
https://github.com/tiddlyweb/tiddlyweb/blob/master/test/httptest.yaml

[2] What I found while building gabbi was that it could be a useful as
a TDD tool without many features. The constrained feature set would
result in constrained (and thus limited in the good way) APIs because
the limited expressiveness of the tests would limit ambiguity in the
API.

However, existing APIs were not limited from the outset and have a fair
bit of ambiguity so to test them a lot of flexibility is required in
the tests. Already in conversations this evening people are asking for
more features in the evaluation of response bodies in order to be able
to test more flexibly.
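
To make that concrete, here is a hedged sketch of a small gabbi file using
some of the response-evaluation features. Field names follow the gabbi format
documentation; the API being tested is imaginary:

```yaml
tests:
  - name: create a widget
    url: /widgets
    method: POST
    request_headers:
      content-type: application/json
    data:
      name: example
    status: 201

  - name: widget shows up in the list
    url: /widgets
    response_headers:
      content-type: /json/
    response_json_paths:
      $.widgets[0].name: example
```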

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [horizon] static files handling, bower/

2015-01-12 Thread Drew Fisher


On 12/18/14 6:58 AM, Radomir Dopieralski wrote:
> Hello,
> 
> revisiting the package management for the Horizon's static files again,
> I would like to propose a particular solution. Hopefully it will allow
> us to both simplify the whole setup, and use the popular tools for the
> job, without losing too much of benefits of our current process.
> 
> The changes we would need to make are as follows:
> 
> * get rid of XStatic entirely;
> * add to the repository a configuration file for Bower, with all the
> required bower packages listed and their versions specified;

I know I'm very very late to this thread but can I ask why Bower?  Bower
has a hard requirement on Node.js which was removed as a dependency in
Havana.  Why are we reintroducing this requirement?

For Solaris, a requirement on Node.js is especially problematic as there
is no official SPARC port and I'm not aware of anybody else working on one.

I agree that XStatic isn't really the best solution here but are there
any other solutions that don't involve Node.js?

Thanks.

-Drew



Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Gregory Haynes
Excerpts from Chris Dent's message of 2015-01-12 19:20:18 +:
> 
> After some discussion with Sean Dague and a few others it became
> clear that it would be a good idea to introduce a new tool I've been
> working on to the list to get a sense of its usefulness generally,
> work towards getting it into global requirements, and get the
> documentation fleshed out so that people can actually figure out how
> to use it well.
> 
> tl;dr: Help me make this interesting tool useful to you and your
> HTTP testing by reading this message and following some of the links
> and asking any questions that come up.
> 
> The tool is called gabbi
> 
>  https://github.com/cdent/gabbi
>  http://gabbi.readthedocs.org/
>  https://pypi.python.org/pypi/gabbi
> 
> It describes itself as a tool for running HTTP tests where requests
> and responses are represented in a declarative form. Its main
> purpose is to allow testing of APIs where the focus of test writing
> (and reading!) is on the HTTP requests and responses, not on a bunch of
> Python (that obscures the HTTP).
> 
> The tests are written in YAML and the simplest test file has this form:
> 
> ```
> tests:
> - name: a test
>url: /
> ```
> 
> This test will pass if the response status code is '200'.
> 
> The test file is loaded by a small amount of python code which transforms
> the file into an ordered sequence of TestCases in a TestSuite[1].
> 
> ```
> def load_tests(loader, tests, pattern):
>  """Provide a TestSuite to the discovery process."""
>  test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
>  return driver.build_tests(test_dir, loader, host=None,
>intercept=SimpleWsgi,
>fixture_module=sys.modules[__name__])
> ```
> 
> The loader provides either:
> 
> * a host to which real over-the-network requests are made
> * a WSGI app which is wsgi-intercept-ed[2]
> 
> If an individual TestCase is asked to be run by the testrunner, those tests
> that are prior to it in the same file are run first, as prerequisites.
> 
> Each test file can declare a sequence of nested fixtures to be loaded
> from a configured (in the loader) module. Fixtures are context managers
> (they establish the fixture upon __enter__ and destroy it upon
> __exit__).
> 
> With a proper group_regex setting in .testr.conf each YAML file can
> run in its own process in a concurrent test runner.
> 
> The docs contain information on the format of the test files:
> 
>  http://gabbi.readthedocs.org/en/latest/format.html
> 
> Each test can state request headers and bodies and evaluate both response
> headers and response bodies. Request bodies can be strings in the
> YAML, files read from disk, or JSON created from YAML structures.
> > Response verification can use JSONPath[3] to inspect the details of
> response bodies. Response header validation may use regular
> expressions.
> 
> > There is limited support for referring to the previous request
> to construct URIs, potentially allowing traversal of a full HATEOAS
> compliant API.
> 
> At the moment the most complete examples of how things work are:
> 
> * Ceilometer's pending use of gabbi:
>https://review.openstack.org/#/c/146187/
> * Gabbi's testing of gabbi:
>https://github.com/cdent/gabbi/tree/master/gabbi/gabbits_intercept
>(the loader and faked WSGI app for those yaml files is in:
>https://github.com/cdent/gabbi/blob/master/gabbi/test_intercept.py)
> 
> One obvious thing that will need to happen is a suite of concrete
> examples on how to use the various features. I'm hoping that
> feedback will help drive that.
> 
> In my own experimentation with gabbi I've found it very useful. It's
> helped me explore and learn the ceilometer API in a way that existing
> test code has completely failed to do. It's also helped reveal
> several warts that will be very useful to fix. And it is fast. To
> run and to write. I hope that with some work it can be useful to you
> too.
> 
> Thanks.
> 
> [1] Getting gabbi to play well with PyUnit style tests and
>  with infrastructure like subunit and testrepository was one of
>  the most challenging parts of the build, but the result has been
>  a lot of flexbility.
> 
> [2] https://pypi.python.org/pypi/wsgi_intercept
> [3] https://pypi.python.org/pypi/jsonpath-rw
> 

Awesome! I was discussing trying to add extensions to RAML[1] so we
could do something like this the other day. Is there any reason you
didn't use an existing modeling language like this?

Cheers,
Greg

[1] http://raml.org/



[openstack-dev] [Keystone] LDAP Identity Use Survey

2015-01-12 Thread Morgan Fainberg
The Keystone development team is looking for deployment feedback regarding the 
use of the LDAP Identity backend. The Identity backend only covers Users and 
Groups.

We are looking to get an idea of the types of use (read-only, read-write, etc.)
and the reasons for using the LDAP backend. The answers to this survey will
help us to prioritize updates and changes, and set direction for the LDAP
Identity backend.

http://goo.gl/forms/bzZT5KGqkv 

This survey is only meant to get information on the use of the LDAP Identity 
backend. Identity only contains User and Group information.

Cheers,
Morgan Fainberg


Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Boris Pavlovic
Sean,


I definitely like the direction that gabbi seems to be headed. It feels
> like a much cleaner version of what nova tried to do with API samples.
> As long as multiple projects think this is an interesting direction, I
> think it's probably fine to add it to global-requirements and let them
> start working with it.



+1 more testing better code.

Best regards,
Boris Pavlovic

On Mon, Jan 12, 2015 at 11:20 PM, Sean Dague  wrote:

> On 01/12/2015 03:11 PM, Dean Troyer wrote:
> > Thanks for this Chris, I'm hoping to get my fingers dirty with it Real
> > Soon Now.
> >
> > On Mon, Jan 12, 2015 at 1:54 PM, Eoghan Glynn  > > wrote:
> >
> > I'd be interested in hearing the api-wg viewpoint, specifically
> whether
> > that working group intends to recommend any best practices around the
> > approach to API testing.
> >
> >
> > Testing recommendations haven't been part of the conversation yet, but I
> > think it is within scope for the WG to have some opinions on REST API
> > design and validation tools.
>
> I definitely like the direction that gabbi seems to be headed. It feels
> like a much cleaner version of what nova tried to do with API samples.
>
> As long as multiple projects think this is an interesting direction, I
> think it's probably fine to add it to global-requirements and let them
> start working with it.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Sean Dague
On 01/12/2015 03:18 PM, Boris Pavlovic wrote:
> Chris, 
> 
> The Idea is brilliant. I may steal it! =)
> 
> But there are some issues that will be faced: 
> 
> 1) Using as a base unittest: 
> 
> python -m subunit.run discover -f gabbi | subunit2pyunit
> 
> 
> So rally team won't be able to reuse it for load testing (if we directly
> integrate it) because we will have huge overhead (of discover stuff)
> 
> 2) Load testing. 
> 
> Using unittest for functional testing adds a lot of troubles: 
> 2.1) It makes things complicated: 
> Like reusing fixtures via input YAML will be painful
> 2.2) It adds a lot of functionality that is not required 
> 2.3) It makes it hard to integrate with other tools, like Rally.
> 
> 3) Usage by Operators is hard in case of N projects. 
> 
> So you should have some kind of 
> 
> Operators would like to have 1 button that will say (does cloud work or
> not). And they don't want to combine all gabbi files from all projects
> and run test. 
> 
> From other side there should be a way to write such code
> in-projects-tree (so new features are directly tested) and then moved to
> some common place that is run on every patch (without breaking gates) 
> 
> 4) Using subunit format is not good for functional testing.
>  
> It doesn't allow you to collect detailed information about execution of
> test. Like for benchmarking it will be quite interesting to collect
> durations of every API call. 

I'm not sure how subunit causes an issue here either way. You can either
put content into one of the existing subunit attachments, or could
modify it to have a new one.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Sean Dague
On 01/12/2015 03:11 PM, Dean Troyer wrote:
> Thanks for this Chris, I'm hoping to get my fingers dirty with it Real
> Soon Now.
> 
> On Mon, Jan 12, 2015 at 1:54 PM, Eoghan Glynn  > wrote:
> 
> I'd be interested in hearing the api-wg viewpoint, specifically whether
> that working group intends to recommend any best practices around the
> approach to API testing.
> 
> 
> Testing recommendations haven't been part of the conversation yet, but I
> think it is within scope for the WG to have some opinions on REST API
> design and validation tools.

I definitely like the direction that gabbi seems to be headed. It feels
like a much cleaner version of what nova tried to do with API samples.

As long as multiple projects think this is an interesting direction, I
think it's probably fine to add it to global-requirements and let them
start working with it.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Boris Pavlovic
Chris,

The Idea is brilliant. I may steal it! =)

But there are some issues that will be faced:

1) Using as a base unittest:

> python -m subunit.run discover -f gabbi | subunit2pyunit


So rally team won't be able to reuse it for load testing (if we directly
integrate it) because we will have huge overhead (of discover stuff)

2) Load testing.

Using unittest for functional testing adds a lot of troubles:
2.1) It makes things complicated:
Like reusing fixtures via input YAML will be painful
2.2) It adds a lot of functionality that is not required
2.3) It makes it hard to integrate with other tools, like Rally.

3) Usage by Operators is hard in case of N projects.

So you should have some kind of

Operators would like to have 1 button that will say (does cloud work or
not). And they don't want to combine all gabbi files from all projects and
run test.

From the other side, there should be a way to write such code in-projects-tree
(so new features are directly tested) and then moved to some common place
that is run on every patch (without breaking gates)

4) Using subunit format is not good for functional testing.

It doesn't allow you to collect detailed information about execution of
test. Like for benchmarking it will be quite interesting to collect
durations of every API call.



Best regards,
Boris Pavlovic


On Mon, Jan 12, 2015 at 10:54 PM, Eoghan Glynn  wrote:

>
>
> > After some discussion with Sean Dague and a few others it became
> > clear that it would be a good idea to introduce a new tool I've been
> > working on to the list to get a sense of its usefulness generally,
> > work towards getting it into global requirements, and get the
> > documentation fleshed out so that people can actually figure out how
> > to use it well.
> >
> > tl;dr: Help me make this interesting tool useful to you and your
> > HTTP testing by reading this message and following some of the links
> > and asking any questions that come up.
> >
> > The tool is called gabbi
> >
> >  https://github.com/cdent/gabbi
> >  http://gabbi.readthedocs.org/
> >  https://pypi.python.org/pypi/gabbi
> >
> > It describes itself as a tool for running HTTP tests where requests
> > and responses are represented in a declarative form. Its main
> > purpose is to allow testing of APIs where the focus of test writing
> > (and reading!) is on the HTTP requests and responses, not on a bunch of
> > Python (that obscures the HTTP).
> >
> > The tests are written in YAML and the simplest test file has this form:
> >
> > ```
> > tests:
> > - name: a test
> >url: /
> > ```
> >
> > This test will pass if the response status code is '200'.
> >
> > The test file is loaded by a small amount of python code which transforms
> > the file into an ordered sequence of TestCases in a TestSuite[1].
> >
> > ```
> > def load_tests(loader, tests, pattern):
> >  """Provide a TestSuite to the discovery process."""
> >  test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
> >  return driver.build_tests(test_dir, loader, host=None,
> >intercept=SimpleWsgi,
> >fixture_module=sys.modules[__name__])
> > ```
> >
> > The loader provides either:
> >
> > * a host to which real over-the-network requests are made
> > * a WSGI app which is wsgi-intercept-ed[2]
> >
> > If an individual TestCase is asked to be run by the testrunner, those
> tests
> > that are prior to it in the same file are run first, as prerequisites.
> >
> > Each test file can declare a sequence of nested fixtures to be loaded
> > from a configured (in the loader) module. Fixtures are context managers
> > (they establish the fixture upon __enter__ and destroy it upon
> > __exit__).
> >
> > With a proper group_regex setting in .testr.conf each YAML file can
> > run in its own process in a concurrent test runner.
> >
> > The docs contain information on the format of the test files:
> >
> >  http://gabbi.readthedocs.org/en/latest/format.html
> >
> > Each test can state request headers and bodies and evaluate both response
> > headers and response bodies. Request bodies can be strings in the
> > YAML, files read from disk, or JSON created from YAML structures.
> > Response verification can use JSONPath[3] to inspect the details of
> > response bodies. Response header validation may use regular
> > expressions.
> >
> > There is limited support for referring to the previous request
> > to construct URIs, potentially allowing traversal of a full HATEOAS
> > compliant API.
> >
> > At the moment the most complete examples of how things work are:
> >
> > * Ceilometer's pending use of gabbi:
> >https://review.openstack.org/#/c/146187/
> > * Gabbi's testing of gabbi:
> >https://github.com/cdent/gabbi/tree/master/gabbi/gabbits_intercept
> >(the loader and faked WSGI app for those yaml files is in:
> >https://github.com/cdent/gabbi/blob/master/gabbi/test_intercept.py)
> >
> > One obvious thing

Re: [openstack-dev] [heat][tripleo] Making diskimage-builder install from forked repo?

2015-01-12 Thread Steve Baker

On 09/01/15 07:06, Gregory Haynes wrote:

Excerpts from Steven Hardy's message of 2015-01-08 17:37:55 +:

Hi all,

I'm trying to test a fedora-software-config image with some updated
components.  I need:

- Install latest master os-apply-config (the commit I want isn't released)
- Install os-refresh-config fork from https://review.openstack.org/#/c/145764

I can't even get the o-a-c from master part working:

export PATH="${PWD}/dib-utils/bin:$PATH"
export
ELEMENTS_PATH=tripleo-image-elements/elements:heat-templates/hot/software-config/elements
export DIB_INSTALLTYPE_os_apply_config=source

diskimage-builder/bin/disk-image-create vm fedora selinux-permissive \
   os-collect-config os-refresh-config os-apply-config \
   heat-config-ansible \
   heat-config-cfn-init \
   heat-config-docker \
   heat-config-puppet \
   heat-config-salt \
   heat-config-script \
   ntp \
   -o fedora-software-config.qcow2

This is what I'm doing, both tools end up as pip installed versions AFAICS,
so I've had to resort to manually hacking the image post-DiB using
virt-copy-in.

Pretty sure there's a way to make DiB do this, but don't know what, anyone
able to share some clues?  Do I have to hack the elements, or is there a
better way?

The docs are pretty sparse, so any help would be much appreciated! :)

Thanks,

Steve


Hey Steve,

source-repositories is your friend here :) (check out
dib/elements/source-repositories/README). One potential gotcha is that
because source-repositories is an element it really only applies to tools
used within images (and os-apply-config is used outside the image). To
fix this we have a shim in tripleo-incubator/scripts/pull-tools which
emulates the functionality of source-repositories.

Example usage:

* checkout os-apply-config to the ref you wish to use
* export DIB_REPOLOCATION_os_apply_config="/path/to/oac"
* export DIB_REPOREF_os_refresh_config="refs/changes/64/145764/1"
* start your devtesting
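The steps above can be consolidated into a short shell sketch (the checkout path is illustrative; the Gerrit ref is the one from the review linked earlier):

```shell
# Point diskimage-builder / pull-tools at overridden sources via the
# environment variables that source-repositories understands.
export DIB_REPOLOCATION_os_apply_config="$HOME/src/os-apply-config"   # local checkout at the ref you want
export DIB_REPOREF_os_refresh_config="refs/changes/64/145764/1"       # Gerrit ref for the o-r-c fork

# Then re-run the same disk-image-create invocation as before;
# source-repositories (and the pull-tools shim) pick the overrides up
# from the environment:
# diskimage-builder/bin/disk-image-create vm fedora ... -o fedora-software-config.qcow2
```

Note that `DIB_REPOLOCATION_*` points at an alternate repository location, while `DIB_REPOREF_*` selects a ref within whatever repository is used.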


The good news is that devstack is already set up to do this. When 
HEAT_CREATE_TEST_IMAGE=True devstack will build packages from the 
currently checked-out os-*-config tools, build a pip repo and configure 
apache to serve it.


Then the elements *should* install from these packages - we're not 
gating on this functionality (yet), so it's possible it has regressed, 
but it shouldn't be too hard to get going again.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Dean Troyer
Thanks for this Chris, I'm hoping to get my fingers dirty with it Real Soon
Now.

On Mon, Jan 12, 2015 at 1:54 PM, Eoghan Glynn  wrote:
>
> I'd be interested in hearing the api-wg viewpoint, specifically whether
> that working group intends to recommend any best practices around the
> approach to API testing.
>

Testing recommendations haven't been part of the conversation yet, but I
think it is within scope for the WG to have some opinions on REST API
design and validation tools.

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] request spec freeze exception for virtio-net multiqueue

2015-01-12 Thread Vladik Romanovsky
Hello,

I'd like to request an exception for virtio-net multiqueue feature. [1]
This is an important feature that aims to increase the total network throughput
in guests and is not too hard to implement.

Thanks,
Vladik

[1] https://review.openstack.org/#/c/128825

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Dropping Python-2.6 support

2015-01-12 Thread Doug Hellmann

> On Jan 12, 2015, at 9:55 AM, Roman Prykhodchenko  wrote:
> 
> Folks,
> 
> as it was planned and then announced at the OpenStack summit OpenStack 
> services deprecated Python-2.6 support. At the moment several services and 
> libraries are already only compatible with Python>=2.7. And there is no 
> common sense in trying to get back compatibility with Py2.6 because OpenStack 
> infra does not run tests for that version of Python.

The intent was to keep 2.6 compatibility for client and Oslo libraries. Which 
libraries are you referring to that require at least 2.7?

Doug

> 
> The point of this email is that some components of Fuel, say, Nailgun and 
> Fuel Client are still only tested with Python-2.6. Fuel Client in it’s turn 
> is about to use OpenStack CI’s python-jobs for running unit tests. That means 
> that in order to make it compatible with Py2.6 there is a need to run a 
> separate python job in FuelCI.
> 
> However, I believe that forcing things to be compatible with 2.6 when the 
> rest of the ecosystem decided not to go with it, and when Py2.7 is already 
> available in the main CentOS repo, sounds like a battle with common sense. 
> So my proposal is to drop 2.6 support in Fuel-6.1.
> 
> 
> - romcheg
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
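As an aside on the 2.6-vs-2.7 question above, a few syntax features commonly make code 2.7-only; a quick illustrative sample (not an exhaustive list):

```python
# Constructs introduced in Python 2.7 that raise SyntaxError on 2.6.
# (All of these also run fine on Python 3.)

squares = {n: n * n for n in range(4)}          # dict comprehension (2.7+)
evens = {n for n in range(10) if n % 2 == 0}    # set comprehension (2.7+)
msg = "{} and {}".format("spam", "eggs")        # auto-numbered format fields
                                                # (2.6 requires "{0} and {1}")

print(squares[3], len(evens), msg)  # -> 9 5 spam and eggs
```

Any library using these (directly or via a dependency) silently drops 2.6 support even without a packaging-level version constraint.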


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Eoghan Glynn


> After some discussion with Sean Dague and a few others it became
> clear that it would be a good idea to introduce a new tool I've been
> working on to the list to get a sense of its usefulness generally,
> work towards getting it into global requirements, and get the
> documentation fleshed out so that people can actually figure out how
> to use it well.
> 
> tl;dr: Help me make this interesting tool useful to you and your
> HTTP testing by reading this message and following some of the links
> and asking any questions that come up.
> 
> The tool is called gabbi
> 
>  https://github.com/cdent/gabbi
>  http://gabbi.readthedocs.org/
>  https://pypi.python.org/pypi/gabbi
> 
> It describes itself as a tool for running HTTP tests where requests
> and responses are represented in a declarative form. Its main
> purpose is to allow testing of APIs where the focus of test writing
> (and reading!) is on the HTTP requests and responses, not on a bunch of
> Python (that obscures the HTTP).
> 
> The tests are written in YAML and the simplest test file has this form:
> 
> ```
> tests:
> - name: a test
>url: /
> ```
> 
> This test will pass if the response status code is '200'.
> 
> The test file is loaded by a small amount of python code which transforms
> the file into an ordered sequence of TestCases in a TestSuite[1].
> 
> ```
> def load_tests(loader, tests, pattern):
>  """Provide a TestSuite to the discovery process."""
>  test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
>  return driver.build_tests(test_dir, loader, host=None,
>intercept=SimpleWsgi,
>fixture_module=sys.modules[__name__])
> ```
> 
> The loader provides either:
> 
> * a host to which real over-the-network requests are made
> * a WSGI app which is wsgi-intercept-ed[2]
> 
> If an individual TestCase is asked to be run by the testrunner, those tests
> that are prior to it in the same file are run first, as prerequisites.
> 
> Each test file can declare a sequence of nested fixtures to be loaded
> from a configured (in the loader) module. Fixtures are context managers
> (they establish the fixture upon __enter__ and destroy it upon
> __exit__).
> 
> With a proper group_regex setting in .testr.conf each YAML file can
> run in its own process in a concurrent test runner.
> 
> The docs contain information on the format of the test files:
> 
>  http://gabbi.readthedocs.org/en/latest/format.html
> 
> Each test can state request headers and bodies and evaluate both response
> headers and response bodies. Request bodies can be strings in the
> YAML, files read from disk, or JSON created from YAML structures.
> Response verification can use JSONPath[3] to inspect the details of
> response bodies. Response header validation may use regular
> expressions.
> 
> There is limited support for referring to the previous request
> to construct URIs, potentially allowing traversal of a full HATEOAS
> compliant API.
> 
> At the moment the most complete examples of how things work are:
> 
> * Ceilometer's pending use of gabbi:
>https://review.openstack.org/#/c/146187/
> * Gabbi's testing of gabbi:
>https://github.com/cdent/gabbi/tree/master/gabbi/gabbits_intercept
>(the loader and faked WSGI app for those yaml files is in:
>https://github.com/cdent/gabbi/blob/master/gabbi/test_intercept.py)
> 
> One obvious thing that will need to happen is a suite of concrete
> examples on how to use the various features. I'm hoping that
> feedback will help drive that.
> 
> In my own experimentation with gabbi I've found it very useful. It's
> helped me explore and learn the ceilometer API in a way that existing
> test code has completely failed to do. It's also helped reveal
> several warts that will be very useful to fix. And it is fast. To
> run and to write. I hope that with some work it can be useful to you
> too.

Thanks for the write-up Chris,

Needless to say, we're sold on the utility of this on the ceilometer
side, in terms of crafting readable, self-documenting tests that reveal
the core aspects of an API in an easily consumable way.

I'd be interested in hearing the api-wg viewpoint, specifically whether
that working group intends to recommend any best practices around the
approach to API testing.

If so, I think gabbi would be a worthy candidate for consideration.

Cheers,
Eoghan

> Thanks.
> 
> [1] Getting gabbi to play well with PyUnit style tests and
>  with infrastructure like subunit and testrepository was one of
>  the most challenging parts of the build, but the result has been
>  a lot of flexibility.
> 
> [2] https://pypi.python.org/pypi/wsgi_intercept
> [3] https://pypi.python.org/pypi/jsonpath-rw
> 
> --
> Chris Dent tw:@anticdent freenode:cdent
> https://tank.peermore.com/tanks/cdent
> 
> __
> OpenStack Development Mailing List (not for usage questions)

[openstack-dev] [Keystone] Spec proposal deadline Feb 5

2015-01-12 Thread Morgan Fainberg
This is a reminder that the Keystone spec proposal deadline is Feb 5. Please 
work to have your specs submitted and approved by that date. 

The keystone team will be spending time at the midcycle next week (Jan 19, 20, 
21) to discuss specs; specs proposed before the midcycle will get priority when 
reviewing / considering the spec for inclusion in the Kilo release. 

Any spec that is not approved by the deadline will need an explicit exception 
granted to land in Kilo. 

Cheers,
Morgan Fainberg
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Kilo devstack issue

2015-01-12 Thread John Griffith
On Mon, Jan 12, 2015 at 10:03 AM, Nikesh Kumar Mahalka
 wrote:
> Hi,
> We deployed a kilo devstack on ubuntu 14.04 server.
> We successfully launched an instance from the dashboard, but we are unable to
> open the console for the instance from the dashboard. Also, the instance is
> unable to get an IP.
>
> Below is link for local.conf
> http://paste.openstack.org/show/156497/
>
>
>
> Regards
> Nikesh
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Correct, see this thread:
http://lists.openstack.org/pipermail/openstack-dev/2015-January/054157.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Chris Dent


After some discussion with Sean Dague and a few others it became
clear that it would be a good idea to introduce a new tool I've been
working on to the list to get a sense of its usefulness generally,
work towards getting it into global requirements, and get the
documentation fleshed out so that people can actually figure out how
to use it well.

tl;dr: Help me make this interesting tool useful to you and your
HTTP testing by reading this message and following some of the links
and asking any questions that come up.

The tool is called gabbi

https://github.com/cdent/gabbi
http://gabbi.readthedocs.org/
https://pypi.python.org/pypi/gabbi

It describes itself as a tool for running HTTP tests where requests
and responses are represented in a declarative form. Its main
purpose is to allow testing of APIs where the focus of test writing
(and reading!) is on the HTTP requests and responses, not on a bunch of
Python (that obscures the HTTP).

The tests are written in YAML and the simplest test file has this form:

```
tests:
- name: a test
  url: /
```

This test will pass if the response status code is '200'.
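A slightly fuller test might look like the following. This is a sketch based on the format documentation; the exact key names (`request_headers`, `response_json_paths`, etc.) should be checked against the gabbi format docs:

```yaml
tests:
- name: create a widget
  url: /widgets
  method: POST
  request_headers:
      content-type: application/json
  data:
      name: fred
  status: 201

- name: get the widget back
  url: /widgets/fred
  status: 200
  response_headers:
      content-type: /application/json/
  response_json_paths:
      $.name: fred
```

Because tests in one file run in order, the second test can rely on the resource created by the first.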

The test file is loaded by a small amount of python code which transforms
the file into an ordered sequence of TestCases in a TestSuite[1].

```
def load_tests(loader, tests, pattern):
    """Provide a TestSuite to the discovery process."""
    test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
    return driver.build_tests(test_dir, loader, host=None,
                              intercept=SimpleWsgi,
                              fixture_module=sys.modules[__name__])
```
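The `SimpleWsgi` referenced above is simply whatever WSGI callable you want to test against. A minimal hypothetical stand-in (not gabbi's own test app) is just:

```python
import json


def simple_wsgi(environ, start_response):
    # Answer every request with a 200 and a small JSON body echoing the path.
    body = json.dumps({"path": environ.get("PATH_INFO", "/")}).encode("utf-8")
    start_response("200 OK", [("Content-Type", "application/json"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

wsgi-intercept patches the HTTP client so that requests to the configured host are routed to this callable in-process; no network or separate server is needed.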

The loader provides either:

* a host to which real over-the-network requests are made
* a WSGI app which is wsgi-intercept-ed[2]

If an individual TestCase is asked to be run by the testrunner, those tests
that are prior to it in the same file are run first, as prerequisites.

Each test file can declare a sequence of nested fixtures to be loaded
from a configured (in the loader) module. Fixtures are context managers
(they establish the fixture upon __enter__ and destroy it upon
__exit__).

With a proper group_regex setting in .testr.conf each YAML file can
run in its own process in a concurrent test runner.
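For example, a `.testr.conf` along these lines; the `group_regex` pattern is an assumption and must match however your generated test ids are actually named:

```ini
[DEFAULT]
test_command=${PYTHON:-python} -m subunit.run discover . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list
# Group all tests generated from one YAML file into the same worker
# process, so their prerequisite ordering is preserved.
group_regex=gabbi\.driver\.test_gabbi_([^_]+)
```

Without grouping, a concurrent runner could split one file's ordered tests across processes and break the prerequisite chain.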

The docs contain information on the format of the test files:

http://gabbi.readthedocs.org/en/latest/format.html

Each test can state request headers and bodies and evaluate both response
headers and response bodies. Request bodies can be strings in the
YAML, files read from disk, or JSON created from YAML structures.
Response verification can use JSONPath[3] to inspect the details of
response bodies. Response header validation may use regular
expressions.
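JSONPath lets a test reach into a response body with expressions like `$.servers[0].name`. A rough pure-stdlib illustration of the idea (jsonpath-rw itself supports far more than this subset):

```python
import re


def lookup(data, path):
    """Resolve a tiny JSONPath subset ($.key[0].key) against nested dicts/lists."""
    result = data
    # Each match is either a bare word (dict key) or a [N] index (list access).
    for key, index in re.findall(r"(\w+)|\[(\d+)\]", path.lstrip("$.")):
        result = result[int(index)] if index else result[key]
    return result


body = {"servers": [{"name": "fred", "status": "ACTIVE"}]}
print(lookup(body, "$.servers[0].name"))  # -> fred
```

In a gabbi test the right-hand side of each JSONPath entry is the expected value for whatever the expression resolves to.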

There is limited support for referring to the previous request
to construct URIs, potentially allowing traversal of a full HATEOAS
compliant API.

At the moment the most complete examples of how things work are:

* Ceilometer's pending use of gabbi:
  https://review.openstack.org/#/c/146187/
* Gabbi's testing of gabbi:
  https://github.com/cdent/gabbi/tree/master/gabbi/gabbits_intercept
  (the loader and faked WSGI app for those yaml files is in:
  https://github.com/cdent/gabbi/blob/master/gabbi/test_intercept.py)

One obvious thing that will need to happen is a suite of concrete
examples on how to use the various features. I'm hoping that
feedback will help drive that.

In my own experimentation with gabbi I've found it very useful. It's
helped me explore and learn the ceilometer API in a way that existing
test code has completely failed to do. It's also helped reveal
several warts that will be very useful to fix. And it is fast. To
run and to write. I hope that with some work it can be useful to you
too.

Thanks.

[1] Getting gabbi to play well with PyUnit style tests and
with infrastructure like subunit and testrepository was one of
the most challenging parts of the build, but the result has been
a lot of flexibility.

[2] https://pypi.python.org/pypi/wsgi_intercept
[3] https://pypi.python.org/pypi/jsonpath-rw

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ThirdPartyCI][PCI] Intel Third party Hardware based CI for PCI

2015-01-12 Thread Kurt Taylor
The public link for your test logs should really be a host name instead of
an IP address. That way if you have to change it again in the future, you
won't have dead links in old comments. You may already know, but all of the
requirements and recommendations are here:
http://git.openstack.org/cgit/openstack-infra/system-config/tree/doc/source/third_party.rst

Kurt Taylor (krtaylor)

On Sun, Jan 11, 2015 at 11:18 PM, yongli he  wrote:

>  On 2015-01-08 10:31, yongli he wrote:
> To make the service more stable we upgraded the networking device, so the
> log server address changed to a new
> IP address: 198.175.100.33
>
> The sample logs therefore change to (replacing 192.55.68.190 with the new address):
>
>
> http://198.175.100.33/143614/6/
> http://198.175.100.33/139900/4
> http://198.175.100.33/143372/3/
> http://198.175.100.33/141995/6/
> http://198.175.100.33/137715/13/
> http://198.175.100.33/133269/14/
>
> Yongli He
>
>
>  Hi,
>
> Intel has set up a hardware-based third-party CI. It has already been running
> sets of PCI test cases
> for several weeks (not sending out comments, just logging the results),
> and the log server and test cases seem fairly stable now. Before it starts
> posting comments to the nova
> repository, what other necessary work needs to be addressed?
>
> Details:
> 1. ThirdPartySystems information:
> https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-PCI-CI
>
> 2. sample logs:
>  
>
> 3. Test cases on github:
>
> https://github.com/intel-hw-ci/Intel-Openstack-Hardware-CI/tree/master/pci_testcases
>
>
>
> Thanks
> Yongli He
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Bug team meeting tomorrow 1700 UTC

2015-01-12 Thread Devananda van der Veen
Following up from our IRC meeting to let folks know that we'll be having a
"bug day" tomorrow (Tuesday) to clean up our bug list. Folks who want to
join Dmitry and I are welcome - we'll start at the same time as today's
meeting (1700 UTC // 9am PST), but in our usual channel (#openstack-ironic)
rather than the meeting room.

The goal for tomorrow isn't to fix all the bugs, but to make sure the
status is correct. We have a fair number of stale bugs that should be
closed or re-prioritized, and we have a growing list of new bugs that need
to be triaged.

-Devananda
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Dropping Python-2.6 support

2015-01-12 Thread Dmitry Borodaenko
The python27 packages Ihar has linked do not conflict with the system
python 2.6, so I don't see any problems with the rollback here.

Is rollback the only concern for the master node upgrade use case?

On Mon, Jan 12, 2015 at 9:51 AM, Evgeniy L  wrote:
> Hi Dmitry,
>
>>> Client uses REST API to interact with Fuel, how is Python version a
>>> factor?
>
> Fuel client is written in python, which means it won't work on the master node
> with 2.6 python if you drop compatibility with it.
>
>>> What exactly is the use cases that requires a new client deployed on an
>>> old Fuel master node (or vice versa)?
>
> Fuel master node upgrade, we install newer client during the upgrade.
>
>>> It's not that hard ...
>
> It looks not so hard, but it should be well tested before it's merged,
> and it's risky because fuel client is installed on the host system, not
> into the container, hence if something goes wrong we cannot
> roll back automatically.
>
> Thanks,
>
> On Mon, Jan 12, 2015 at 8:24 PM, Dmitry Borodaenko
>  wrote:
>>
>> On Mon, Jan 12, 2015 at 9:10 AM, Evgeniy L  wrote:
>> > Agree with Igor, I think we cannot just drop compatibility for fuel
>> > client
>> > with 2.6 python,
>>
>> Hm, didn't Igor say in his email that "we have to drop python 2.6
>> support"?
>>
>> > the reason is we have old master nodes which have
>> > 2.6 python, and the newer fuel client should work fine on these
>> > environments.
>>
>> Client uses REST API to interact with Fuel, how is Python version a
>> factor?
>>
>> What exactly is the use cases that requires a new client deployed on
>> an old Fuel master node (or vice versa)?
>>
>> > Or we can try to install python 2.7 on the master during the upgrade.
>>
>> Lets do this. It's not that hard, see the link in an email from Ihar
>> Hrachyshka on this thread.
>>
>> > As for Nailgun I don't see any problems to use 2.7.
>> >
>> > Thanks,
>> >
>> > On Mon, Jan 12, 2015 at 7:32 PM, Igor Kalnitsky
>> > 
>> > wrote:
>> >>
>> >> Hi, Roman,
>> >>
>> >> Indeed, we have to go forward and drop python 2.6 support. That's how
>> >> it's supposed to be, but, unfortunately, it may not be as easy as it
>> >> seems at first glance.
>> >>
>> >> Fuel Master is flying on top of CentOS 6.5, which doesn't have python
>> >> 2.7 at all. So we must either run the master node on CentOS 7 or build
>> >> python2.7 for CentOS 6.5. The first case, obviously, requires a lot
>> >> of work, while the second one does not. But I may be wrong, since I have no
>> >> idea what dependencies python 2.7 requires and what we have in our
>> >> repos.
>> >>
>> >> - Igor
>> >>
>> >> On Mon, Jan 12, 2015 at 4:55 PM, Roman Prykhodchenko 
>> >> wrote:
>> >> > Folks,
>> >> >
>> >> > as it was planned and then announced at the OpenStack summit
>> >> > OpenStack
>> >> > services deprecated Python-2.6 support. At the moment several
>> >> > services and
>> >> > libraries are already only compatible with Python>=2.7. And there is
>> >> > no
>> >> > common sense in trying to get back compatibility with Py2.6 because
>> >> > OpenStack infra does not run tests for that version of Python.
>> >> >
>> >> > The point of this email is that some components of Fuel, say, Nailgun
>> >> > and Fuel Client are still only tested with Python-2.6. Fuel Client in
>> >> > its
>> >> > turn is about to use OpenStack CI’s python-jobs for running unit
>> >> > tests. That
>> >> > means that in order to make it compatible with Py2.6 there is a need
>> >> > to run
>> >> > a separate python job in FuelCI.
>> >> >
>> >> > However, I believe that forcing the things being compatible with 2.6
>> >> > when the rest of ecosystem decided not to go with it and when Py2.7
>> >> > is
>> >> > already available in the main CentOS repo sounds like a battle with
>> >> > common sense. So my proposal is to drop 2.6 support in Fuel-6.1.
>> >> >
>> >> >
>> >> > - romcheg
>> >> >
>> >> >
>> >> >
>> >> > __
>> >> > OpenStack Development Mailing List (not for usage questions)
>> >> > Unsubscribe:
>> >> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >> >
>> >>
>> >>
>> >> __
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>>
>>
>> --
>> Dmitry Borodaenko
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)

[openstack-dev] oslo.db 1.4.0 released

2015-01-12 Thread Doug Hellmann
The Oslo team is pleased to announce the release of
oslo.db 1.4.0: oslo.db library

The primary reason for this release is to move the code
out of the oslo namespace package as part of
https://blueprints.launchpad.net/oslo-incubator/+spec/drop-namespace-packages

For more details, please see the git log history below and
 http://launchpad.net/oslo.db/+milestone/1.4.0

Please report issues through launchpad:
 http://bugs.launchpad.net/oslo.db



Changes in openstack/oslo.db  1.3.0..1.4.0

9a510e8 Fix slowest test output after test run
2c3768c Updated from global requirements
bce8ed3 Make sure sort_key_attr is QueryableAttribute when query
cfbe5c5 Ensure mysql_sql_mode is set for MySQLOpportunisticTests
efbb388 Add pretty_tox wrapper script
91b0199 Fix PatchStacktraceTest test
3043338 Ensure PostgreSQL connection errors are wrapped
75b402b Remove check_foreign_keys from ModelsMigrationsSync
7063585 Move files out of the namespace package
4a57952 Updated from global requirements
c6ddb04 Fix the link to the bug reporting site

  diffstat (except docs and test files):

 .testr.conf|2 +-
 README.rst |   10 +-
 oslo/db/__init__.py|   26 +
 oslo/db/_i18n.py   |   35 -
 oslo/db/api.py |  216 +---
 oslo/db/concurrency.py |   68 +-
 oslo/db/exception.py   |  160 +--
 oslo/db/options.py |  229 +---
 oslo/db/sqlalchemy/compat/__init__.py  |   23 +-
 oslo/db/sqlalchemy/compat/engine_connect.py|   60 --
 oslo/db/sqlalchemy/compat/handle_error.py  |  289 -
 oslo/db/sqlalchemy/compat/utils.py |   17 +-
 oslo/db/sqlalchemy/exc_filters.py  |  349 +-
 oslo/db/sqlalchemy/migration.py|  165 +--
 oslo/db/sqlalchemy/migration_cli/README.rst|9 -
 oslo/db/sqlalchemy/migration_cli/__init__.py   |   18 +
 oslo/db/sqlalchemy/migration_cli/ext_alembic.py|   78 --
 oslo/db/sqlalchemy/migration_cli/ext_base.py   |   79 --
 oslo/db/sqlalchemy/migration_cli/ext_migrate.py|   69 --
 oslo/db/sqlalchemy/migration_cli/manager.py|   71 --
 oslo/db/sqlalchemy/models.py   |  115 +-
 oslo/db/sqlalchemy/provision.py|  494 +
 oslo/db/sqlalchemy/session.py  |  834 +--
 oslo/db/sqlalchemy/test_base.py|  132 +--
 oslo/db/sqlalchemy/test_migrations.py  |  600 +--
 oslo/db/sqlalchemy/utils.py|  999 +-
 oslo_db/__init__.py|0
 oslo_db/_i18n.py   |   35 +
 oslo_db/api.py |  229 
 oslo_db/concurrency.py |   81 ++
 oslo_db/exception.py   |  173 +++
 oslo_db/options.py |  220 
 oslo_db/sqlalchemy/__init__.py |0
 oslo_db/sqlalchemy/compat/__init__.py  |   30 +
 oslo_db/sqlalchemy/compat/engine_connect.py|   60 ++
 oslo_db/sqlalchemy/compat/handle_error.py  |  289 +
 oslo_db/sqlalchemy/compat/utils.py |   26 +
 oslo_db/sqlalchemy/exc_filters.py  |  359 +++
 oslo_db/sqlalchemy/migration.py|  160 +++
 oslo_db/sqlalchemy/migration_cli/README.rst|9 +
 oslo_db/sqlalchemy/migration_cli/__init__.py   |0
 oslo_db/sqlalchemy/migration_cli/ext_alembic.py|   78 ++
 oslo_db/sqlalchemy/migration_cli/ext_base.py   |   79 ++
 oslo_db/sqlalchemy/migration_cli/ext_migrate.py|   69 ++
 oslo_db/sqlalchemy/migration_cli/manager.py|   71 ++
 oslo_db/sqlalchemy/models.py   |  128 +++
 oslo_db/sqlalchemy/provision.py|  514 +
 oslo_db/sqlalchemy/session.py  |  847 +++
 oslo_db/sqlalchemy/test_base.py|  127 +++
 oslo_db/sqlalchemy/test_migrations.py  |  542 ++
 oslo_db/sqlalchemy/utils.py| 1014 ++
 .../sqlalchemy/test_engine_connect.py  |   68 ++
 .../old_import_api/sqlalchemy/test_exc_filters.py  |  833 +++
 .../old_import_api/sqlalchemy/test_handle_error.py |  194 
 .../old_import_api/sqlalchemy/test_migrate_cli.py  |  222 
 .../sqlalchemy/test_migration_common.py|  174 +++
 .../old_import_api/sqlalchemy/test_migrations.py   |  309 ++
 .../old_import_api/sqlalchemy/test_options.py  |  127 +++
 .../old_import_api/sqlalchemy/test_sqlalchemy.py   |  566 ++
 requirements.txt   |6 +-
 setup.cfg   

Re: [openstack-dev] [Heat] Where to keep data about stack breakpoints?

2015-01-12 Thread Steven Hardy
On Mon, Jan 12, 2015 at 04:29:15PM +0100, Tomas Sedovic wrote:
> Hey folks,
> 
> I did a quick proof of concept for a part of the Stack Breakpoint spec[1]
> and I put the "does this resource have a breakpoint" flag into the metadata
> of the resource:
> 
> https://review.openstack.org/#/c/146123/
> 
> I'm not sure where this info really belongs, though. It does sound like
> metadata to me (plus we don't have to change the database schema that way),
> but can we use it for breakpoints etc., too? Or is metadata strictly for
> Heat users and not for engine-specific stuff?

Metadata is supposed to be for template defined metadata (with the notable
exception of server resources where we merge SoftwareDeployment metadata in
to that defined in the template).

So if we're going to use the metadata template interface as a way to define
the breakpoint, this is OK, but do we want to mix the definition of the
stack with this flow control data? (I personally think probably not).

I can think of a couple of alternatives:

1. Use resource_data, which is intended for per-resource internal data, and
set it based on API data passed on create/update (see Resource.data_set)

2. Store the breakpoint metadata in the environment

I think the environment may be the best option, but we'll have to work out
how to best represent a tree of nested stacks (something the spec interface
description doesn't consider AFAICS).

If we use the environment, then no additional API interfaces are needed,
just supporting a new key in the existing data, and python-heatclient can
take care of translating any CLI --breakpoint argument into environment
data.
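In that scheme a breakpoint might land in the environment file roughly like this. The key names are purely illustrative; the spec hadn't settled on a format, let alone one covering trees of nested stacks:

```yaml
# env-with-breakpoint.yaml -- hypothetical sketch only
parameters:
  key_name: default

# Flow-control data kept alongside, not inside, the template definition:
breakpoints:
  - my_server          # pause before this resource is created/updated
  - nested_stack.db    # one possible spelling for a resource in a nested stack
```

The appeal is that the environment already travels with every create/update call, so no new API surface is needed.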

> I also had a chat with Steve Hardy and he suggested adding a STOPPED state
> to the stack (this isn't in the spec). While not strictly necessary to
> implement the spec, this would help people figure out that the stack has
> reached a breakpoint instead of just waiting on a resource that takes a long
> time to finish (the heat-engine log and event-list still show that a
> breakpoint was reached but I'd like to have it in stack-list and
> resource-list, too).
> 
> It makes more sense to me to call it PAUSED (we're not completely stopping
> the stack creation after all, just pausing it for a bit), I'll let Steve
> explain why that's not the right choice :-).

So, I've not got strong opinions on the name, it's more the workflow:

1. User triggers a stack create/update
2. Heat walks the graph, hits a breakpoint and stops.
3. Heat explicitly triggers continuation of the create/update

My argument is that (3) is always a stack update, either a PUT or PATCH
update, e.g. we _are_ completely stopping stack creation, then a user can
choose to re-start it (either with the same or a different definition).

So, it _is_ really an end state, as a user might never choose to update
from the stopped state, in which case *_STOPPED makes more sense.

Paused implies the same action as the PATCH update, only we trigger
continuation of the operation from the point we reached via some sort of
user signal.

If we actually pause an in-progress action via the scheduler, we'd have to
start worrying about stuff like token expiry, hitting timeouts, resilience
to engine restarts, etc, etc.  So forcing an explicit update seems simpler
to me.
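As an illustration of that distinction, the create/stop/update flow can be modeled as a toy state machine. This is a sketch only: the state names and the resource walk are assumptions for the example, not Heat's actual scheduler.

```python
# Illustration only: a toy model of the workflow above, where hitting a
# breakpoint leaves the stack in an end state (*_STOPPED) and
# continuation is an ordinary new update rather than a resume of the
# old action. State names and the resource walk are assumptions for
# the sketch, not Heat's actual scheduler.

class Stack:
    def __init__(self, resources, breakpoints=()):
        self.resources = resources          # ordered resource names
        self.breakpoints = set(breakpoints)
        self.created = []
        self.state = "INIT"

    def _walk(self, action):
        self.state = action + "_IN_PROGRESS"
        for name in self.resources:
            if name in self.breakpoints and name not in self.created:
                # An end state: nothing is held in-flight, so token
                # expiry and engine restarts are not a concern.
                self.state = action + "_STOPPED"
                return
            if name not in self.created:
                self.created.append(name)
        self.state = action + "_COMPLETE"

    def create(self):
        self._walk("CREATE")

    def update(self, breakpoints=()):
        # A PUT/PATCH update restarts the walk with new flow-control data.
        self.breakpoints = set(breakpoints)
        self._walk("UPDATE")

stack = Stack(["net", "server", "floating_ip"], breakpoints=["server"])
stack.create()
print(stack.state)  # CREATE_STOPPED
stack.update()      # the user explicitly continues; breakpoint cleared
print(stack.state)  # UPDATE_COMPLETE
```

The point of the model is that nothing is "paused": the first walk simply ends, and the second walk is a fresh update that happens to skip already-created resources.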

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] noVNC disabled by default?

2015-01-12 Thread Ben Nemec
On 01/09/2015 05:24 PM, Sean Dague wrote:
> On 01/09/2015 06:12 PM, Solly Ross wrote:
>> Hi,
>>
>> I just noticed that noVNC was disabled by default in devstack (the
>> relevant change was https://review.openstack.org/#/c/140860/).
>>
>> Now, if I understand correctly (based on the short commit message), the
>> rationale is that we don't want devstack to rely on non-OpenStack Git
>> repos, so that devstack doesn't fail when some external Git hosting
>> service (e.g. GitHub) goes down.
> 
> Realistically the policy is more about the fact that we should be using
> released (and commonly available) versions of dependent software.
> Ideally from packages, but definitely not from git trees. We don't want
> to be testing everyone else's bleeding edge, there are lots of edges and
> pointy parts in OpenStack as it is.
> 
>> This is all fine and dandy (and a decent idea, IMO), but this leaves
>> devstack installing a "broken" installation of Horizon by default --
>> Horizon still attempts to show the noVNC console when you go to the
>> "console" tab for an instance, which is a bit confusing, initially.
>> Now, it wasn't particularly hard to track down *why* this happened
>> (hmm... my stackrc seems to be missing "n-novnc" in ENABLED_SERVICES.
>> Go-go-gadget `git blame`), but it strikes me as a bit inconsistent and
>> inconvenient.
>>
>> Personally, I would like to see noVNC back as a default service, since
>> it can be useful when trying to see what your VM is actually doing
>> during boot, or if you're having network issues. Is there anything I
>> can do as a noVNC maintainer to help?
>>
>> We (the noVNC team) do publish releases, and I've been trying to make
>> sure that they happen in a more timely fashion. In the past, it was
>> necessary to use Git master to ensure that you got the latest version
>> (there was a 2-year gap between 0.4 and 0.5!), but I'm trying to change
>> that. Currently, it would appear that most of the distros are still
>> using the old version (0.4), but versions 0.5 and 0.5.1 are up on
>> GitHub as release tarballs (0.5 being 3 months old and 0.5.1 having
>> been tagged a couple weeks ago). I will attempt to work with distro
>> maintainers to get the packages updated. However, in the mean time, is
>> there a place that would be acceptable to place the releases so that
>> devstack can install them?
> 
> If you rewrite the noVNC installation in devstack to work from a release
> URL that includes the released version on it, I think that would be
> sufficient to turn it back on. Again, ideally this should be in distros,

FWIW, I looked into installing novnc from distro packages quite a while
ago and ran into problems because the dependencies were wonky.  Like,
novnc would pull in Nova which then overwrote a bunch of the devstack
Nova stuff.  I don't know if that's still an issue, but that's the main
reason I never pushed ahead with removing the git install of novnc (that
was during the release drought, so those weren't an option at the time
either).

> but I think we could work on doing release installs until then,
> especially if the install process is crisp.
> 
> I am looking at the upstream release tarball right now though, and don't
> see any INSTALL instructions in it. So let's see what the devstack patch
> would look like to do the install.
> 
>   -Sean
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Policy][Group-based-policy] ODL Policy Driver Specs

2015-01-12 Thread Sumit Naiksatam
Hi, The 'group' CLI commands are an alias to the 'policy-target-group'
commands and are provided for convenience of use. Per your
observation, both are equally functional and supported.

Thanks,
~Sumit.

On Mon, Jan 12, 2015 at 4:02 AM, Sachi Gupta  wrote:
> Hi,
>
> Can anyone explain the difference between gbp group-create and gbp
> policy-target-group-create??
>
> I think both these are working same.
>
> Thanks & Regards
> Sachi Gupta
>
>
>
>
> From:Sumit Naiksatam 
> To:"OpenStack Development Mailing List (not for usage questions)"
> 
> Date:11/26/2014 01:35 PM
> Subject:Re: [openstack-dev] [Policy][Group-based-policy] ODL Policy
> DriverSpecs
> 
>
>
>
> Hi, This GBP spec is currently being worked on:
> https://review.openstack.org/#/c/134285/
>
> It will be helpful if you can add "[Policy][Group-based-policy]" in
> the subject of your emails, so that the email gets characterized
> correctly.
>
> Thanks,
> ~Sumit.
>
> On Tue, Nov 25, 2014 at 4:27 AM, Sachi Gupta  wrote:
>> Hey All,
>>
>> I need to understand the interaction between the Openstack GBP and the
>> Opendaylight GBP project which will be done by ODL Policy driver.
>>
>> Can someone provide me with specs of ODL Policy driver for making my
>> understanding on call flow.
>>
>>
>> Thanks & Regards
>> Sachi Gupta
>>
>> =-=-=
>> Notice: The information contained in this e-mail
>> message and/or attachments to it may contain
>> confidential or privileged information. If you are
>> not the intended recipient, any dissemination, use,
>> review, distribution, printing or copying of the
>> information contained in this e-mail message
>> and/or attachments to it are strictly prohibited. If
>> you have received this communication in error,
>> please notify us by reply e-mail or telephone and
>> immediately and permanently delete the message
>> and any attachments. Thank you
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Group Based Policy - Announcing Release

2015-01-12 Thread Sumit Naiksatam
The first release of Group Based Policy (GBP) [1] is now available! It
is designed to work with OpenStack stable Juno, and comprises of four
components:
Service, Client, Heat Automation, and Horizon UI

This release includes GBP network policy drivers for connectivity
rendering using Neutron, configured with any core plugin (including
ML2)[2], or OpenDaylight Controller [3]. In addition, vendor-specific
policy drivers are available for Cisco ACI [4], Nuage Virtual
Services’ Platform (VSP) [5], and One Convergence Network
Virtualization and Service Delivery (NVSD) Controller [6].

This release introduces the foundation of the Group Based Policy model
that provides for the following features:

* Intent-driven declarative abstractions
* Separation of application, network, and security concerns
* Late binding to facilitate non-linear workflows
* Ability to introduce operator modulation in the form of admin constraints
* Policy-driven Network Services’ composition and chaining
* Policy-driven external connectivity abstractions

Fedora [7] and Ubuntu [8] packages are available for installation
(RHEL and CentOS packages are being tested). An all-in-one single-step
devstack installation is also available with usage instructions [2].
Please give it a try; we would greatly appreciate your feedback.
Please also join the weekly IRC meetings [9] or on #openstack-gbp.

Thanks,
Team GBP.

[1] https://wiki.openstack.org/wiki/GroupBasedPolicy
[2] https://wiki.openstack.org/wiki/GroupBasedPolicy/InstallDevstack
[3] 
https://wiki.openstack.org/wiki/GroupBasedPolicy/InstallODLIntegrationDevstack
[4] https://wiki.openstack.org/wiki/GroupBasedPolicy/InstallCiscoACI
[5] https://wiki.openstack.org/wiki/GroupBasedPolicy/InstallNuage
[6] https://wiki.openstack.org/wiki/GroupBasedPolicy/InstallOneConvergence
[7] https://openstack.redhat.com/Neutron_GBP
[8] https://launchpad.net/~group-based-policy-drivers/+archive/ubuntu/ppa
[9] https://wiki.openstack.org/wiki/Meetings/Neutron_Group_Policy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Dropping Python-2.6 support

2015-01-12 Thread Evgeniy L
Hi Dmitry,

>> Client uses REST API to interact with Fuel, how is Python version a
factor?

The Fuel client is written in Python, which means it won't work on the master
node with Python 2.6 if you drop compatibility with it.

>> What exactly is the use cases that requires a new client deployed on an
old Fuel master node (or vice versa)?

Fuel master node upgrade, we install newer client during the upgrade.

>> It's not that hard ...

It doesn't look that hard, but it should be well tested before it's merged,
and it's risky because the Fuel client is installed on the host system, not
in the container, so if something goes wrong we cannot roll back
automatically.

Thanks,

On Mon, Jan 12, 2015 at 8:24 PM, Dmitry Borodaenko  wrote:

> On Mon, Jan 12, 2015 at 9:10 AM, Evgeniy L  wrote:
> > Agree with Igor, I think we cannot just drop compatibility for fuel
> client
> > with 2.6 python,
>
> Hm, didn't Igor say in his email that "we have to drop python 2.6 support"?
>
> > the reason is we have old master nodes which have
> > 2.6 python, and the newer fuel client should work fine on these
> > environments.
>
> Client uses REST API to interact with Fuel, how is Python version a factor?
>
> What exactly is the use cases that requires a new client deployed on
> an old Fuel master node (or vice versa)?
>
> > Or we can try to install python 2.7 on the master during the upgrade.
>
> Lets do this. It's not that hard, see the link in an email from Ihar
> Hrachyshka on this thread.
>
> > As for Nailgun I don't see any problems to use 2.7.
> >
> > Thanks,
> >
> > On Mon, Jan 12, 2015 at 7:32 PM, Igor Kalnitsky  >
> > wrote:
> >>
> >> Hi, Roman,
> >>
> >> Indeed, we have to go forward and drop python 2.6 support. That's how
> >> it supposed to be, but, unfortunately, it may not be as easy as it
> >> seems at first glance.
> >>
> >> Fuel Master is flying on top of Cent OS 6.5 which doesn't have python
> >> 2.7 at all. So we must either run master node on Cent OS 7 or build
> >> python2.7 for Cent OS 6.5. The first case, obviously, requires a lot
> >> of work, while the second one is not. But I may wrong, since I have no
> >> idea what dependencies python 2.7 requires and what we have in our
> >> repos.
> >>
> >> - Igor
> >>
> >> On Mon, Jan 12, 2015 at 4:55 PM, Roman Prykhodchenko 
> >> wrote:
> >> > Folks,
> >> >
> >> > as it was planned and then announced at the OpenStack summit OpenStack
> >> > services deprecated Python-2.6 support. At the moment several
> services and
> >> > libraries are already only compatible with Python>=2.7. And there is
> no
> >> > common sense in trying to get back compatibility with Py2.6 because
> >> > OpenStack infra does not run tests for that version of Python.
> >> >
> >> > The point of this email is that some components of Fuel, say, Nailgun
> >> > and Fuel Client are still only tested with Python-2.6. Fuel Client in
> it’s
> >> > turn is about to use OpenStack CI’s python-jobs for running unit
> tests. That
> >> > means that in order to make it compatible with Py2.6 there is a need
> to run
> >> > a separate python job in FuelCI.
> >> >
> >> > However, I believe that forcing the things being compatible with 2.6
> >> > when the rest of ecosystem decided not to go with it and when Py2.7 is
> >> > already available in the main CentOS repo sounds like a battle with
> the
> >> > common sense. So my proposal is to drop 2.6 support in Fuel-6.1.
> >> >
> >> >
> >> > - romcheg
> >> >
> >> >
> >> >
> __
> >> > OpenStack Development Mailing List (not for usage questions)
> >> > Unsubscribe:
> >> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Dmitry Borodaenko
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Weekly subteam status report

2015-01-12 Thread Jim Rollenhagen
Hi all,

Following is the subteam report for Ironic. As usual, this is pulled
directly from the Ironic whiteboard[0] and formatted.

Bugs (dtantsur)
  (As of Mon, 12 Jan, 12:00 UTC)
Open: 133 (+15). 14 new (+5), 32 in progress (+3), 1 critical (+1),
 13 high (+2) and 5 incomplete (+2)
  help is needed triaging some bugs: http://ironic-bugs.divius.net/
  kind requests to the core team to assign correct status and priority to
their bugs themselves

Drivers:
  IPA (jroll/JayF/JoshNang)
Python 2.6 tests removed from gate and tox.ini for IPA.
  https://review.openstack.org/#/c/145631/
  https://review.openstack.org/#/c/145621/
Standalone mode merged, which allows testing the IPA rest API by allowing
  IPA to run without having to perform a lookup in Ironic or heartbeat.
  https://review.openstack.org/#/c/141957/
*API BREAKING CHANGE* - Support added for multiple, simultaneous
  HardwareManagers.
  https://review.openstack.org/#/c/143193/
Agent client improvements/refactoring
  going to be necessary for cleaning/zapping/new features
  https://review.openstack.org/#/c/18/

  iLO (wanyen)
Tried to setup 2-node CI test environment
  one for CC and one for bare-metal
  ran into errors while setting up Jenkins master
  Problem investigation is in progress.
Nova spec https://review.openstack.org/#/c/136104/
  "Pass on the capabilities in the flavor to the ironic" has been merged.
  This spec enables Ironic drivers to support multiple capabilities.
  iLO drivers can use this feature to support a node with multiple
capabilities such as secure boot, UEFI boot mode and BIOS boot mode.
  Code changes to the Nova Ironic virt driver to support this spec have been
submitted to Nova for review
  https://review.openstack.org/#/c/141012
Secure boot
  one of the key features that iLO driver plans to support in Kilo
  hopefully it can be merged in Kilo2.
  Need review for secure boot specs
https://review.openstack.org/#/c/135845/
https://review.openstack.org/#/c/135228/
  These two specs have not received reviews from core reviewers for more
than a month. Reviews of these two specs would be highly appreciated.

// jim

[0] https://etherpad.openstack.org/p/IronicWhiteBoard

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Dropping Python-2.6 support

2015-01-12 Thread Dmitry Borodaenko
On Mon, Jan 12, 2015 at 9:10 AM, Evgeniy L  wrote:
> Agree with Igor, I think we cannot just drop compatibility for fuel client
> with 2.6 python,

Hm, didn't Igor say in his email that "we have to drop python 2.6 support"?

> the reason is we have old master nodes which have
> 2.6 python, and the newer fuel client should work fine on these
> environments.

Client uses REST API to interact with Fuel, how is Python version a factor?

What exactly is the use case that requires a new client deployed on
an old Fuel master node (or vice versa)?

> Or we can try to install python 2.7 on the master during the upgrade.

Let's do this. It's not that hard; see the link in the email from Ihar
Hrachyshka on this thread.

> As for Nailgun I don't see any problems to use 2.7.
>
> Thanks,
>
> On Mon, Jan 12, 2015 at 7:32 PM, Igor Kalnitsky 
> wrote:
>>
>> Hi, Roman,
>>
>> Indeed, we have to go forward and drop python 2.6 support. That's how
>> it supposed to be, but, unfortunately, it may not be as easy as it
>> seems at first glance.
>>
>> Fuel Master is flying on top of Cent OS 6.5 which doesn't have python
>> 2.7 at all. So we must either run master node on Cent OS 7 or build
>> python2.7 for Cent OS 6.5. The first case, obviously, requires a lot
>> of work, while the second one is not. But I may wrong, since I have no
>> idea what dependencies python 2.7 requires and what we have in our
>> repos.
>>
>> - Igor
>>
>> On Mon, Jan 12, 2015 at 4:55 PM, Roman Prykhodchenko 
>> wrote:
>> > Folks,
>> >
>> > as it was planned and then announced at the OpenStack summit OpenStack
>> > services deprecated Python-2.6 support. At the moment several services and
>> > libraries are already only compatible with Python>=2.7. And there is no
>> > common sense in trying to get back compatibility with Py2.6 because
>> > OpenStack infra does not run tests for that version of Python.
>> >
>> > The point of this email is that some components of Fuel, say, Nailgun
>> > and Fuel Client are still only tested with Python-2.6. Fuel Client in it’s
>> > turn is about to use OpenStack CI’s python-jobs for running unit tests. 
>> > That
>> > means that in order to make it compatible with Py2.6 there is a need to run
>> > a separate python job in FuelCI.
>> >
>> > However, I believe that forcing the things being compatible with 2.6
>> > when the rest of ecosystem decided not to go with it and when Py2.7 is
>> > already available in the main CentOS repo sounds like a battle with the
>> > common sense. So my proposal is to drop 2.6 support in Fuel-6.1.
>> >
>> >
>> > - romcheg
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Dmitry Borodaenko

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Dropping Python-2.6 support

2015-01-12 Thread Evgeniy L
Hi,

I agree with Igor; I think we cannot just drop Python 2.6 compatibility for
the Fuel client. The reason is that we have old master nodes that run
Python 2.6, and the newer Fuel client should work fine in those
environments.

Or we can try to install python 2.7 on the master during the upgrade.

As for Nailgun I don't see any problems to use 2.7.
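For context on what keeping 2.6 compatibility actually costs, here are a few standard constructs that require Python >= 2.7 and fail on 2.6; the snippet itself runs on any modern Python:

```python
# Illustration only: a few standard constructs that need Python >= 2.7
# and fail on 2.6, i.e. the day-to-day cost of keeping 2.6 support.

from collections import OrderedDict  # added in 2.7; 2.6 needs a backport

# Dict and set comprehensions: SyntaxError on 2.6.
squares = {n: n * n for n in range(4)}
evens = {n for n in range(10) if n % 2 == 0}

# Auto-numbered format fields: ValueError on 2.6 ("{0} {1}" is required).
msg = "{} supports {}".format("Python 2.7", "implicit field numbering")

# OrderedDict itself (argparse and subprocess.check_output are other
# 2.7-only additions that 2.6-compatible code cannot rely on).
od = OrderedDict([("b", 1), ("a", 2)])

print(squares[3])     # 9
print(sorted(evens))  # [0, 2, 4, 6, 8]
print(list(od))       # ['b', 'a']
```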

Thanks,

On Mon, Jan 12, 2015 at 7:32 PM, Igor Kalnitsky 
wrote:

> Hi, Roman,
>
> Indeed, we have to go forward and drop python 2.6 support. That's how
> it supposed to be, but, unfortunately, it may not be as easy as it
> seems at first glance.
>
> Fuel Master is flying on top of Cent OS 6.5 which doesn't have python
> 2.7 at all. So we must either run master node on Cent OS 7 or build
> python2.7 for Cent OS 6.5. The first case, obviously, requires a lot
> of work, while the second one is not. But I may wrong, since I have no
> idea what dependencies python 2.7 requires and what we have in our
> repos.
>
> - Igor
>
> On Mon, Jan 12, 2015 at 4:55 PM, Roman Prykhodchenko 
> wrote:
> > Folks,
> >
> > as it was planned and then announced at the OpenStack summit OpenStack
> services deprecated Python-2.6 support. At the moment several services and
> libraries are already only compatible with Python>=2.7. And there is no
> common sense in trying to get back compatibility with Py2.6 because
> OpenStack infra does not run tests for that version of Python.
> >
> > The point of this email is that some components of Fuel, say, Nailgun
> and Fuel Client are still only tested with Python-2.6. Fuel Client in it’s
> turn is about to use OpenStack CI’s python-jobs for running unit tests.
> That means that in order to make it compatible with Py2.6 there is a need
> to run a separate python job in FuelCI.
> >
> > However, I believe that forcing the things being compatible with 2.6
> when the rest of ecosystem decided not to go with it and when Py2.7 is
> already available in the main CentOS repo sounds like a battle with the
> common sense. So my proposal is to drop 2.6 support in Fuel-6.1.
> >
> >
> > - romcheg
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Kilo devstack issue

2015-01-12 Thread Nikesh Kumar Mahalka
Hi,
We deployed a Kilo devstack on an Ubuntu 14.04 server.
We successfully launched an instance from the dashboard, but we are unable to
open the console from the dashboard for the instance. Also, the instance is
unable to get an IP.

Below is link for local.conf
http://paste.openstack.org/show/156497/



Regards
Nikesh
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Dropping Python-2.6 support

2015-01-12 Thread Ihar Hrachyshka
You can get Python 2.7 via SCL: 
https://www.softwarecollections.org/en/scls/rhscl/python27/


On 01/12/2015 05:32 PM, Igor Kalnitsky wrote:

Hi, Roman,

Indeed, we have to go forward and drop python 2.6 support. That's how
it supposed to be, but, unfortunately, it may not be as easy as it
seems at first glance.

Fuel Master is flying on top of Cent OS 6.5 which doesn't have python
2.7 at all. So we must either run master node on Cent OS 7 or build
python2.7 for Cent OS 6.5. The first case, obviously, requires a lot
of work, while the second one is not. But I may wrong, since I have no
idea what dependencies python 2.7 requires and what we have in our
repos.

- Igor

On Mon, Jan 12, 2015 at 4:55 PM, Roman Prykhodchenko  wrote:

Folks,

as it was planned and then announced at the OpenStack summit OpenStack services 
deprecated Python-2.6 support. At the moment several services and libraries are 
already only compatible with Python>=2.7. And there is no common sense in 
trying to get back compatibility with Py2.6 because OpenStack infra does not run 
tests for that version of Python.

The point of this email is that some components of Fuel, say, Nailgun and Fuel 
Client are still only tested with Python-2.6. Fuel Client in it’s turn is about 
to use OpenStack CI’s python-jobs for running unit tests. That means that in 
order to make it compatible with Py2.6 there is a need to run a separate python 
job in FuelCI.

However, I believe that forcing the things being compatible with 2.6 when the 
rest of ecosystem decided not to go with it and when Py2.7 is already available 
in the main CentOS repo sounds like a battle with the common sense. So my 
proposal is to drop 2.6 support in Fuel-6.1.


- romcheg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] IRC logging

2015-01-12 Thread Jeremy Stanley
There's really no way to _force_ official logging on all
project-related channels. People who are opposed to the idea simply
move their conversations to new channels. They'll straddle the line
between "somewhat official looking" and "official enough to require
logging."
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Dropping Python-2.6 support

2015-01-12 Thread Igor Kalnitsky
Hi, Roman,

Indeed, we have to go forward and drop Python 2.6 support. That's how
it's supposed to be, but, unfortunately, it may not be as easy as it
seems at first glance.

Fuel Master is flying on top of CentOS 6.5, which doesn't have Python
2.7 at all. So we must either run the master node on CentOS 7 or build
Python 2.7 for CentOS 6.5. The first option obviously requires a lot
of work, while the second one does not. But I may be wrong, since I have no
idea what dependencies Python 2.7 requires and what we have in our
repos.
- Igor

On Mon, Jan 12, 2015 at 4:55 PM, Roman Prykhodchenko  wrote:
> Folks,
>
> as it was planned and then announced at the OpenStack summit OpenStack 
> services deprecated Python-2.6 support. At the moment several services and 
> libraries are already only compatible with Python>=2.7. And there is no 
> common sense in trying to get back compatibility with Py2.6 because OpenStack 
> infra does not run tests for that version of Python.
>
> The point of this email is that some components of Fuel, say, Nailgun and 
> Fuel Client are still only tested with Python-2.6. Fuel Client in it’s turn 
> is about to use OpenStack CI’s python-jobs for running unit tests. That means 
> that in order to make it compatible with Py2.6 there is a need to run a 
> separate python job in FuelCI.
>
> However, I believe that forcing the things being compatible with 2.6 when the 
> rest of ecosystem decided not to go with it and when Py2.7 is already 
> available in the main CentOS repo sounds like a battle with the common sense. 
> So my proposal is to drop 2.6 support in Fuel-6.1.
>
>
> - romcheg
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

2015-01-12 Thread Asselin, Ramy
You are correct to run nodepoold as nodepool user.
I didn’t see any issues…
Could you double check the public keys listed in .ssh/authorized_keys in the 
template for Ubuntu and Jenkins users match $NODEPOOL_SSH_KEY?
Ramy
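One quick way to do that check is to compare MD5 fingerprints of the base64 key bodies, which is the same hex digest paramiko prints in its "Trying key ..." log lines. The key material below is a made-up placeholder, not a real key:

```python
# Compare two OpenSSH public keys by MD5 fingerprint, the same hex
# digest paramiko logs ("Trying key <md5> from ..."). The base64 body
# used here is a made-up placeholder, not real key material.

import base64
import hashlib

def fingerprint(pubkey_line):
    """MD5 fingerprint of an OpenSSH public key line or bare base64 body."""
    parts = pubkey_line.strip().split()
    body = parts[1] if len(parts) > 1 else parts[0]  # skip "ssh-rsa" prefix
    return hashlib.md5(base64.b64decode(body)).hexdigest()

fake_body = base64.b64encode(b"not a real key, just a placeholder").decode()
authorized = "ssh-rsa %s jenkins@jenkins-cinderci" % fake_body
exported = fake_body  # e.g. the value of $NODEPOOL_SSH_KEY

print(fingerprint(authorized) == fingerprint(exported))  # True
```

If the fingerprint of the key in the template's authorized_keys doesn't match the one for $NODEPOOL_SSH_KEY, the wrong key was baked into the image.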

From: Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
Sent: Monday, January 12, 2015 5:30 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting 
up CI

Hi,
Regarding the last issue, I fixed it by logging in and manually running "pip
install docutils". The image was created successfully.

Now the problem is that nodepool is not able to log in to instances created
from that image.
I have NODEPOOL_SSH_KEY exported in the screen where nodepool is running, and
I am also able to log in to the instance as user nodepool, but nodepoold
gives an error:
2015-01-12 14:19:03,095 DEBUG paramiko.transport: Switch to new keys ...
2015-01-12 14:19:03,109 DEBUG paramiko.transport: Trying key 
c03fbf64440cd0c2ecbc07ce4ed59804 from /home/nodepool/.ssh/id_rsa
2015-01-12 14:19:03,135 DEBUG paramiko.transport: userauth is OK
2015-01-12 14:19:03,162 INFO paramiko.transport: Authentication (publickey) 
failed.
2015-01-12 14:19:03,185 DEBUG paramiko.transport: Trying discovered key 
c03fbf64440cd0c2ecbc07ce4ed59804 in /home/nodepool/.ssh/id_rsa
2015-01-12 14:19:03,187 DEBUG paramiko.transport: userauth is OK
^C2015-01-12 14:19:03,210 INFO paramiko.transport: Authentication (publickey) 
failed.
2015-01-12 14:19:03,253 DEBUG paramiko.transport: EOF in transport thread
2015-01-12 14:19:03,254 INFO nodepool.utils: Password auth exception. Try 
number 4...


echo $NODEPOOL_SSH_KEY
B3NzaC1yc2EDAQABAAABAQC9gP6qui1fmHrj02p6OGvnz7kMTJ2rOC3SBYP/Ij/6yz+SU8rL5rqL6jqT30xzy9t1q0zsdJCNB2jExD4xb+NFbaoGlvjF85m12eFqP4CQenxUOdYAepf5sjV2l8WAO3ylspQ78ipLKec98NeKQwLrHB+xon6QfAHXr6ZJ9NRZbmWw/OdpOgAG9Cab+ELTmkfEYgQz01cZE22xEAMvPXz57KlWPvxtE7YwYWy180Yib97EftylsNkrchbSXCwiqgKUf04qWhTgNrVuRJ9mytil6S82VNDxHzTzeCCxY412CV6dDJNLzJItpf/CXQelj/6wJs1GgFl5GWJnqortMR2v

 cat /home/nodepool/.ssh/id_rsa.pub
ssh-rsa 
B3NzaC1yc2EDAQABAAABAQC9gP6qui1fmHrj02p6OGvnz7kMTJ2rOC3SBYP/Ij/6yz+SU8rL5rqL6jqT30xzy9t1q0zsdJCNB2jExD4xb+NFbaoGlvjF85m12eFqP4CQenxUOdYAepf5sjV2l8WAO3ylspQ78ipLKec98NeKQwLrHB+xon6QfAHXr6ZJ9NRZbmWw/OdpOgAG9Cab+ELTmkfEYgQz01cZE22xEAMvPXz57KlWPvxtE7YwYWy180Yib97EftylsNkrchbSXCwiqgKUf04qWhTgNrVuRJ9mytil6S82VNDxHzTzeCCxY412CV6dDJNLzJItpf/CXQelj/6wJs1GgFl5GWJnqortMR2v
 jenkins@jenkins-cinderci

ssh ubuntu@10.100.128.136 -v
OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /home/nodepool/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to 10.100.128.136 [10.100.128.136] port 22.
debug1: Connection established.

debug1: Offering RSA public key: /home/nodepool/.ssh/id_rsa
debug1: Server accepts key: pkalg ssh-rsa blen 279
debug1: key_parse_private2: missing begin marker
debug1: read PEM private key done: type RSA
debug1: Authentication succeeded (publickey).
Authenticated to 10.100.128.136 ([10.100.128.136]:22).
...

I was able to log in to the "template" instance and am also able to log in to 
the "slave" instances.
nodepoold was also able to log in to the "template" instance, but now it fails 
to log in to the "slave".

I tried running it as either the nodepool or the jenkins user, with the same result.
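One quick way to narrow this down: paramiko's "Trying key c03f..." lines print the MD5 digest of the key blob it offers, so that digest can be computed directly from the id_rsa.pub blob and compared against the log. A small sketch (the sample blob below is made up; substitute the real base64 blob from id_rsa.pub):

```python
import base64
import hashlib

def ssh_key_md5_fingerprint(pubkey_b64):
    # MD5 over the decoded key blob -- the same hex value paramiko logs as
    # "Trying key <hex> from ...". pubkey_b64 is the base64 part of an
    # id_rsa.pub / authorized_keys line (between "ssh-rsa" and the comment).
    return hashlib.md5(base64.b64decode(pubkey_b64)).hexdigest()

# Hypothetical blob for illustration only.
sample_blob = base64.b64encode(b"ssh-rsa sample key material").decode()
print(ssh_key_md5_fingerprint(sample_blob))
```

If the digest printed for the real blob differs from the one in the nodepoold log, the daemon is offering a different key than the one baked into the slave image.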

Thanks,
Eduard

On Mon, Jan 12, 2015 at 2:09 PM, Eduard Matei 
<eduard.ma...@cloudfounders.com> wrote:
Hi,
Back with another error during image creation with nodepool:
2015-01-12 13:05:17,775 INFO nodepool.image.build.local_01.d-p-c:   Downloading 
python-daemon-2.0.1.tar.gz (62kB)
2015-01-12 13:05:18,022 INFO nodepool.image.build.local_01.d-p-c: Traceback 
(most recent call last):
2015-01-12 13:05:18,023 INFO nodepool.image.build.local_01.d-p-c:   File 
"<string>", line 20, in <module>
2015-01-12 13:05:18,024 INFO nodepool.image.build.local_01.d-p-c:   File 
"/tmp/pip-build-r6RJKq/python-daemon/setup.py", line 27, in <module>
2015-01-12 13:05:18,024 INFO nodepool.image.build.local_01.d-p-c: 
import version
2015-01-12 13:05:18,024 INFO nodepool.image.build.local_01.d-p-c:   File 
"version.py", line 51, in <module>
2015-01-12 13:05:18,024 INFO nodepool.image.build.local_01.d-p-c: 
import docutils.core
2015-01-12 13:05:18,024 INFO nodepool.image.build.local_01.d-p-c: 
ImportError: No module named docutils.core
2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c: Complete 
output from command python setup.py egg_info:
2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c: Traceback 
(most recent call last):
2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c:
2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c:   File 
"<string>", line 20, in <module>
2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c:
2015-01-12 13:05:18,025 INFO nodepool.i
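The failure above happens because python-daemon's setup.py imports docutils.core while pip is still building it, so docutils must already be importable before the build starts (hence the manual "pip install docutils" workaround in the follow-up). A tiny pre-flight check along those lines, purely as an illustration:

```python
import importlib.util

def can_build_python_daemon():
    # python-daemon 2.0.1's setup.py runs "import docutils.core" at
    # egg_info time, so the build only succeeds when docutils is already
    # installed in the environment pip is building in.
    return importlib.util.find_spec("docutils") is not None

print("docutils importable:", can_build_python_daemon())
```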

Re: [openstack-dev] [Fuel] Image based provisioning

2015-01-12 Thread Alexander Gordeev
On Tue, Dec 16, 2014 at 11:09 PM, Andrey Danin  wrote:
> Adding Mellanox team explicitly.
>
> Gil, Nurit, Aviram, can you confirm that you tested that feature? It can be
> enabled on every fresh ISO. You just need to enable the Experimental mode
> (please, see the documentation for instructions).

At least, there's a bug filed against Mellanox for the image-based feature:
https://bugs.launchpad.net/fuel/+bug/1405661
So it won't work at all.

>
> On Tuesday, December 16, 2014, Dmitry Pyzhov  wrote:
>>
>> Guys,
>>
>> we are about to enable image based provisioning in our master by default.
>> I'm trying to figure out requirement for this change. As far as I know, it
>> was not tested on scale lab. Is it true? Have we ever run full system tests
>> cycle with this option?
>>
>> Do we have any other pre-requirements?
>
>
>
> --
> Andrey Danin
> ada...@mirantis.com
> skype: gcon.monolake
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Where to keep data about stack breakpoints?

2015-01-12 Thread Ryan Brown
On 01/12/2015 10:29 AM, Tomas Sedovic wrote:
> Hey folks,
> 
> I did a quick proof of concept for a part of the Stack Breakpoint
> spec[1] and I put the "does this resource have a breakpoint" flag into
> the metadata of the resource:
> 
> https://review.openstack.org/#/c/146123/
> 
> I'm not sure where this info really belongs, though. It does sound like
> metadata to me (plus we don't have to change the database schema that
> way), but can we use it for breakpoints etc., too? Or is metadata
> strictly for Heat users and not for engine-specific stuff?

I'd rather not store it in metadata so we don't mix user metadata with
implementation-specific-and-also-subject-to-change runtime metadata. I
think this is a big enough feature to warrant a schema update (and I
can't think of another place I'd want to put the breakpoint info).

> I also had a chat with Steve Hardy and he suggested adding a STOPPED
> state to the stack (this isn't in the spec). While not strictly
> necessary to implement the spec, this would help people figure out that
> the stack has reached a breakpoint instead of just waiting on a resource
> that takes a long time to finish (the heat-engine log and event-list
> still show that a breakpoint was reached but I'd like to have it in
> stack-list and resource-list, too).
> 
> It makes more sense to me to call it PAUSED (we're not completely
> stopping the stack creation after all, just pausing it for a bit), I'll
> let Steve explain why that's not the right choice :-).

+1 to PAUSED. To me, STOPPED implies an end state (which a breakpoint is
not).
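As a rough illustration of the distinction, here is a minimal sketch (hypothetical classes, not Heat's actual code) of how a PAUSED stack state would surface a reached breakpoint instead of leaving the stack looking merely slow:

```python
# Minimal sketch -- hypothetical classes, not Heat's implementation --
# of how a PAUSED stack state could surface a reached breakpoint.
IN_PROGRESS, PAUSED, COMPLETE = "IN_PROGRESS", "PAUSED", "COMPLETE"

class Stack:
    def __init__(self, resources, breakpoints=()):
        self.resources = list(resources)
        self.breakpoints = set(breakpoints)
        self.state = IN_PROGRESS

    def create(self):
        for name in self.resources:
            if name in self.breakpoints:
                self.state = PAUSED  # visible in stack-list, unlike a log line
                return name          # the resource we stopped at
            # ... create the resource here ...
        self.state = COMPLETE
        return None

stack = Stack(["network", "server"], breakpoints={"server"})
print(stack.create(), stack.state)  # -> server PAUSED
```

A user running stack-list would then see PAUSED (not an end state) rather than a long-running IN_PROGRESS.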

For sublime end user confusion, we could use BROKEN. ;)

> Tomas
> 
> [1]:
> http://specs.openstack.org/openstack/heat-specs/specs/juno/stack-breakpoint.html
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova]Why nova mounts FS for LXC container instead of libvirt?

2015-01-12 Thread Daniel P. Berrange
On Mon, Jan 12, 2015 at 06:28:53PM +0300, Dmitry Guryanov wrote:
> On 01/05/2015 02:30 PM, Daniel P. Berrange wrote:
> >On Tue, Dec 30, 2014 at 05:18:19PM +0300, Dmitry Guryanov wrote:
> >>Hello,
> >>
> >>Libvirt can create loop or nbd device for LXC container and mount it by
> >>itself, for instance, you can add something like this to xml config:
> >>
> >>[libvirt <filesystem> XML element stripped by the mail archiver]
> >>
> >>But nova mounts filesystem for container by itself. Is this because rhel-6
> >>doesn't support filesystems with type='file' or there are some other 
> >>reasons?
> >The support for mounting using NBD in OpenStack pre-dated the support
> >for doing this in Libvirt. In fact, the reason I added this feature to
> >libvirt was precisely because OpenStack was doing this.
> >
> >We haven't switched Nova over to use this new syntax yet though, because
> >that would imply a change to the min required libvirt version for LXC.
> >That said we should probably make such a change, because honestly no
> >one should be using LXC without using user namespaces, otherwise their
> >cloud is horribly insecure. This would imply making the min libvirt for
> >LXC much much newer than it is today.
> >
> 
> It's not very hard to replace mounting in nova with generating the proper XML
> config. Can we do it before the Kilo release? Are there any people who use
> OpenStack with LXC in production?

Looking at libvirt history, it would mean we mandate 1.0.6 as the min
libvirt for use with the LXC driver.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Policy][Group-based-policy] ODL Policy Driver Specs

2015-01-12 Thread Yapeng Wu
Hello, Sachi,

They both work. "End point group" has been renamed to "policy target group", 
so it is recommended to use "gbp policy-target-group-create".

Yapeng


From: Sachi Gupta [mailto:sachi.gu...@tcs.com]
Sent: Monday, January 12, 2015 7:03 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Policy][Group-based-policy] ODL Policy Driver 
Specs

Hi,

Can anyone explain the difference between gbp group-create and gbp 
policy-target-group-create?

I think they both work the same.

Thanks & Regards
Sachi Gupta




From:Sumit Naiksatam 
To:"OpenStack Development Mailing List (not for usage questions)" 

Date:11/26/2014 01:35 PM
Subject:Re: [openstack-dev] [Policy][Group-based-policy] ODL Policy 
DriverSpecs




Hi, This GBP spec is currently being worked on:
https://review.openstack.org/#/c/134285/

It will be helpful if you can add "[Policy][Group-based-policy]" in
the subject of your emails, so that the email gets characterized
correctly.

Thanks,
~Sumit.

On Tue, Nov 25, 2014 at 4:27 AM, Sachi Gupta  wrote:
> Hey All,
>
> I need to understand the interaction between the Openstack GBP and the
> Opendaylight GBP project which will be done by ODL Policy driver.
>
> Can someone provide me with specs of ODL Policy driver for making my
> understanding on call flow.
>
>
> Thanks & Regards
> Sachi Gupta
>
> =-=-=
> Notice: The information contained in this e-mail
> message and/or attachments to it may contain
> confidential or privileged information. If you are
> not the intended recipient, any dissemination, use,
> review, distribution, printing or copying of the
> information contained in this e-mail message
> and/or attachments to it are strictly prohibited. If
> you have received this communication in error,
> please notify us by reply e-mail or telephone and
> immediately and permanently delete the message
> and any attachments. Thank you
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Where to keep data about stack breakpoints?

2015-01-12 Thread Tomas Sedovic

Hey folks,

I did a quick proof of concept for a part of the Stack Breakpoint 
spec[1] and I put the "does this resource have a breakpoint" flag into 
the metadata of the resource:


https://review.openstack.org/#/c/146123/

I'm not sure where this info really belongs, though. It does sound like 
metadata to me (plus we don't have to change the database schema that 
way), but can we use it for breakpoints etc., too? Or is metadata 
strictly for Heat users and not for engine-specific stuff?


I also had a chat with Steve Hardy and he suggested adding a STOPPED 
state to the stack (this isn't in the spec). While not strictly 
necessary to implement the spec, this would help people figure out that 
the stack has reached a breakpoint instead of just waiting on a resource 
that takes a long time to finish (the heat-engine log and event-list 
still show that a breakpoint was reached but I'd like to have it in 
stack-list and resource-list, too).


It makes more sense to me to call it PAUSED (we're not completely 
stopping the stack creation after all, just pausing it for a bit), I'll 
let Steve explain why that's not the right choice :-).


Tomas

[1]: 
http://specs.openstack.org/openstack/heat-specs/specs/juno/stack-breakpoint.html


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova]Why nova mounts FS for LXC container instead of libvirt?

2015-01-12 Thread Dmitry Guryanov

On 01/05/2015 02:30 PM, Daniel P. Berrange wrote:

On Tue, Dec 30, 2014 at 05:18:19PM +0300, Dmitry Guryanov wrote:

Hello,

Libvirt can create loop or nbd device for LXC container and mount it by
itself, for instance, you can add something like this to xml config:

[libvirt <filesystem> XML element stripped by the mail archiver]
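The XML snippet was stripped by the list archiver; based on libvirt's documented LXC syntax for a type='file' filesystem, it was presumably along these lines (a reconstruction with a made-up source path, not the original text):

```xml
<filesystem type='file'>
  <driver type='nbd' format='qcow2'/>
  <source file='/var/lib/nova/instances/instance-1/disk'/>
  <target dir='/'/>
</filesystem>
```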

But nova mounts filesystem for container by itself. Is this because rhel-6
doesn't support filesystems with type='file' or there are some other reasons?

The support for mounting using NBD in OpenStack pre-dated the support
for doing this in Libvirt. In fact, the reason I added this feature to

We haven't switched Nova over to use this new syntax yet though, because
that would imply a change to the min required libvirt version for LXC.
That said we should probably make such a change, because honestly no
one should be using LXC without using user namespaces, otherwise their
cloud is horribly insecure. This would imply making the min libvirt for
LXC much much newer than it is today.

Regards,
Daniel


It's not very hard to replace mounting in nova with generating the proper 
XML config. Can we do it before the Kilo release? Are there any people who 
use OpenStack with LXC in production?


--
Dmitry Guryanov


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] IRC logging

2015-01-12 Thread Nikhil Komawar
+1 to Flavio's proposal.

Thanks,
-Nikhil


From: Flavio Percoco [fla...@redhat.com]
Sent: Monday, January 12, 2015 3:16 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] IRC logging

On 09/01/15 12:11 -0800, Joshua Harlow wrote:
>So the only comment I'll put in is one that I know not everyone agrees
>with but might as well throw it out there.
>
>http://freenode.net/channel_guidelines.shtml (this page has a bunch of
>useful advice IMHO).
>
>From that; something useful to look/think over at least...
>
>"""
>If you're considering publishing channel logs, think it through. The
>freenode network is an interactive environment. Even on public
>channels, most users don't weigh their comments with the idea that
>they'll be enshrined in perpetuity. For that reason, few participants
>publish logs.
>
>If you're publishing logs on an ongoing basis, your channel topic
>should reflect that fact. Be sure to provide a way for users to make
>comments without logging, and get permission from the channel owners
>before you start. If you're thinking of "anonymizing" your logs
>(removing information that identifies the specific users), be aware
>that it's difficult to do it well—replies and general context often
>provide identifying information which is hard to filter.
>
>If you just want to publish a single conversation, be careful to get
>permission from each participant. Provide as much context as you can.
>Avoid the temptation to publish or distribute logs without permission
>in order to portray someone in a bad light. The reputation you save
>will most likely be your own.
>"""

(Joshua, the below is not about what you posted, I really appreciate
you bringing the above into the discussion)

FWIW, I kind of feel that channel logging should become an OpenStack
thing and not a per-project thing. Log all openstack "official"
channels, make it clear in the
wiki/homepage/HowToContribute/WhateverYouWannaCallIt and move on.

Nothing, absolutely nothing, prevents the above from happening right
now. I have local logs (on my ZNC server) of the last year and a half, and I
could just make them public.

Really, IRC is basically(?) public by default and I - I know this is
probably personal opinion - don't think there's a difference between a
logged channel and a not logged one. If we wanted to make the channel
private, we should password protect it and invite few people, make
them sign a contract where they swear they won't publish logs and
whatnot.

Anyway, I think a good way to avoid these discussions for future
projects is to simply enable logging on all openstack- channels.

Cheers,
Flavio


>
>Brian Rosmaita wrote:
>>The response on the review is overwhelmingly positive (or, strictly
>>speaking, unanimously non-negative).
>>
>>If anyone has an objection, could you please register it before 12:00
>>UTC on Monday, January 12?
>>
>>https://review.openstack.org/#/c/145025/
>>
>>thanks,
>>brian
>>
>>*From:* David Stanek [dsta...@dstanek.com]
>>*Sent:* Wednesday, January 07, 2015 4:43 PM
>>*To:* OpenStack Development Mailing List (not for usage questions)
>>*Subject:* Re: [openstack-dev] [Glance] IRC logging
>>
>>It's also important to remember that IRC channels are typically not
>>private and are likely already logged by dozens of people anyway.
>>
>>On Tue, Jan 6, 2015 at 1:22 PM, Christopher Aedo >> wrote:
>>
>>On Tue, Jan 6, 2015 at 2:49 AM, Flavio Percoco >> wrote:
>>>  Fully agree... I don't see how enable logging should be a limitation
>>>  for freedom of thought. We've used it in Zaqar since day 0 and it's
>>>  been of great help for all of us.
>>>
>>>  The logging does not remove the need of meetings where decisions and
>>>  more relevant/important topics are discussed.
>>
>>Wanted to second this as well. I'm strongly in favor of logging -
>>looking through backlogs of chats on other channels has been very
>>helpful to me in the past, and it sure to help others in the future.
>>I don't think there is danger of anyone pointing to a logged IRC
>>conversation in this context as some statement of record.
>>
>>-Christopher
>>
>>___
>>OpenStack-dev mailing list
>>OpenStack-dev@lists.openstack.org
>>
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>>--
>>David
>>blog: http://www.traceback.org
>>twitter: http://twitter.com/dstanek
>>www: http://dstanek.com
>>
>>___
>>OpenStack-dev mailing list
>>OpenStack-dev@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.open

[openstack-dev] [Fuel] Dropping Python-2.6 support

2015-01-12 Thread Roman Prykhodchenko
Folks,

as it was planned and then announced at the OpenStack summit, OpenStack services 
deprecated Python 2.6 support. At the moment several services and libraries are 
already compatible only with Python >= 2.7, and there is no sense in trying to 
restore compatibility with Python 2.6 because OpenStack infra does not run tests 
for that version of Python.

The point of this email is that some components of Fuel, say, Nailgun and Fuel 
Client, are still tested only with Python 2.6. Fuel Client, in its turn, is about 
to use OpenStack CI's python-jobs for running unit tests. That means that keeping 
it compatible with Python 2.6 would require running a separate Python job on 
Fuel CI.

However, I believe that forcing things to stay compatible with 2.6 when the rest 
of the ecosystem has decided to move on, and when Python 2.7 is already available 
in the main CentOS repo, is a battle against common sense. So my proposal is to 
drop 2.6 support in Fuel 6.1.
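For illustration, a few common idioms show the practical cost of the constraint: each line below works on Python >= 2.7 but breaks on 2.6 (the first two are syntax errors there, the third fails at runtime), which is why keeping 2.6 alive restricts the whole codebase:

```python
# Idioms available on Python >= 2.7 that break on Python 2.6:
squares = {n: n * n for n in range(4)}              # dict comprehension
evens = {0, 2, 4}                                   # set literal
msg = "{} -> {}".format(squares[3], sorted(evens))  # auto-numbered fields
print(msg)  # -> 9 -> [0, 2, 4]
```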


- romcheg


signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][plugins] Fuel 6.0 plugin for Pacemaker STONITH (HA fencing)

2015-01-12 Thread Bogdan Dobrelya
> 
> --
> 
> Message: 16
> Date: Wed, 31 Dec 2014 17:41:10 -0800
> From: Andrew Woodward 
> To: "OpenStack Development Mailing List (not for usage questions)"
>   
> Subject: Re: [openstack-dev] [Fuel][plugins] Fuel 6.0 plugin for
>   Pacemaker STONITH (HA fencing)
> Message-ID:
>   
> Content-Type: text/plain; charset="utf-8"
> 
> Bogdan,
> 
> Do you think that the existing post deployment hook is sufficient to
> implement this or does additional plugins development need to be done to
> support this
> On Dec 30, 2014 3:39 AM, "Bogdan Dobrelya"  wrote:
> 

Hello.
Post-deployment hooks are hardcoded and a bad place to contribute this code, 
I believe. Plugins are a framework and should be used instead in further 
development.

If someone wants to use this plugin to configure any custom power
management device type, he or she should:
* make sure a corresponding fence agent script exists among the ones
shipped with the standard fence-agents package;
* provide the required parameters and values for this agent in a
pcs_fencing YAML file and apply the plugin's puppet manifest on the nodes
(see the plugin dev docs), and that's it.
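For illustration, such a pcs_fencing YAML file could look roughly like this (a hypothetical sketch; the authoritative schema is in the plugin's README.md, and the agent name must match a script shipped with the fence-agents package):

```yaml
fence_primitives:
  node-1:
    agent: fence_ipmilan          # must exist in the fence-agents package
    parameters:
      ipaddr: 10.20.0.101         # power management interface of the node
      login: admin
      passwd: secret
      lanplus: true
```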

>> Hello.
>> There is a long living blueprint [0] about HA fencing of failed nodes
>> in Corosync and Pacemaker cluster. Happily, in 6.0 release we have a
>> pluggable architecture supported in Fuel.
>>
>> I propose the following implementation [1] (WIP repo [2]) for this
>> feature as a plugin for puppet. It addresses the related blueprint for
>> HA Fencing in puppet manifests of Fuel library [3].
>>
>> For initial version,  all the data definitions for power management
>> devices should be done manually in YAML files (see the plugin's
>> README.md file). Later it could be done in a more user friendly way, as
>> a part of Fuel UI perhaps.
>>
>> Note that the similar approach - YAML data structures which should be
>> filled in by the cloud admin and passed to Fuel Orchestrator
>> automatically at PXE provision stage - could be used as well for Power
>> management blueprint, see the related ML thread [4].
>>
>> Please also note, there is a dev docs for Fuel plugins merged recently
>> [5] where you can find how to build and install this plugin.
>>
>> [0] https://blueprints.launchpad.net/fuel/+spec/ha-fencing
>> [1] https://review.openstack.org/#/c/144425/
>> [2]
>>
>> https://github.com/bogdando/fuel-plugins/tree/fencing_puppet_newprovider/ha_fencing
>> [3]
>> https://blueprints.launchpad.net/fuel/+spec/fencing-in-puppet-manifests
>> [4]
>>
>> http://lists.openstack.org/pipermail/openstack-dev/2014-November/049794.html
>> [5]
>>
>> http://docs.mirantis.com/fuel/fuel-6.0/plugin-dev.html#what-is-pluggable-architecture
>>
>> --
>> Best regards,
>> Bogdan Dobrelya,
>> Skype #bogdando_at_yahoo.com
>> Irc #bogdando
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>


-- 
Best regards,
Bogdan Dobrelya,
Skype #bogdando_at_yahoo.com
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Future-based api for openstack clients calls, that starts background tasks

2015-01-12 Thread Konstantin Danilov
On Mon, Jan 12, 2015 at 3:33 PM, Angus Salkeld 
wrote:

>
> On Mon, Jan 12, 2015 at 10:17 PM, Konstantin Danilov <
> kdani...@mirantis.com> wrote:
>
>> Boris,
>>
>> Moving from sync HTTP to something like websockets requires a lot of work
>> and is not directly connected
>> to the API issue. When OpenStack API servers begin to support
>> websockets, it will be easy to change the implementation of the monitoring
>> thread without breaking compatibility.
>> At the moment, periodic polling from an additional thread looks reasonable
>> to me,
>> and it creates the same amount of HTTP requests as the current
>> implementation.
>>
>> BP is not about improving performance,
>> but about providing convenient and common API to handle background tasks.
>>
>> > So we won't need to retrieve 100500 times information about object.
>> As I said before, this API creates the same amount of load as any code
>> we currently use to check background tasks.
>> It can even decrease the load due to request aggregation in some cases (but
>> there are points to discuss).
>>
>> > As well this pattern doesn't look great.
>> > I would prefer to see something like:
>> > vm = novaclient.servers.create(, sync=True)
>>
>> This is a completely different pattern. It is a blocking call, which doesn't
>> allow you to start two (or more) background tasks
>> from the same thread and do some calculations while they are running in
>> the background.
>>
>
> Except if you use threads (eventlet or other) - I am still struggling to
> enjoy futures/yield-based flow control; a lost battle, I guess :(.
>

>>
>>
>>
>>
>> On Mon, Jan 12, 2015 at 1:42 PM, Boris Pavlovic 
>> wrote:
>>
>>> Konstantin,
>>>
>>>
>>> I believe it's better to work on the server side and use some modern
>>> approach like websockets for async operations, so we won't need to
>>> retrieve information about an object 100500 times. And then use this feature
>>> in the clients.
>>>
>>>  create_future = novaclient.servers.create_async()
 .
 vm = create_future.result()
>>>
>>>
>>> As well this pattern doesn't look great.
>>>
>>> I would prefer to see something like:
>>>
>>>   vm = novaclient.servers.create(, sync=True)
>>>
>>>
>>> Best regards,
>>> Boris Pavlovic
>>>
>>>
>>>
>>>
>>>
>>> On Mon, Jan 12, 2015 at 2:30 PM, Konstantin Danilov <
>>> kdani...@mirantis.com> wrote:
>>>
 Hi all.

 There is a set of OpenStack API functions which start background actions
 and return preliminary results - like 'novaclient.create'. Those
 functions
 require periodically checking results and handling timeouts/errors
 (and often cleanup + restart helps to fix an error).

 Check/retry/cleanup code is duplicated across a lot of core projects.
 Examples are heat, tempest, rally, etc., and definitely many
 third-party scripts.

>>>
> We have some very similar code at the moment, but we are keen to move away
> from it to
> something like making use of rpc ".{start,end}" notifications to reduce
> the load we put on keystone and
> friends.
>

This is a nice approach for core projects, yet novaclient users typically
can't use such an approach.

But the nice thing about futures is that we can have different engines
(websockets, sync HTTP, RPC with callbacks, etc.)
behind the same API, and hide all implementation details behind it. It's even
possible to use them simultaneously -
a different engine would be used to handle different calls.


>
>
>>
 I propose to provide common higth-level API for such functions, which
 uses
 'futures' (http://en.wikipedia.org/wiki/Futures_and_promises) as a way
 to
 present background task.

 Idea is to add to each background-task-starter function a complimentary
 call,
 that returns 'future' object. E.g.

 create_future = novaclient.servers.create_async()
 .
 vm = create_future.result()

>>>
> Is that going to return on any state change or do you pass in a list of
> acceptable states?
>

In general it should return the result if the background task completes
successfully, or raise an exception
if it fails. For servers I currently use 'active' as the success marker and
'error' or a timeout for the exception
( https://github.com/koder-ua/os_api/blob/master/os_api/nova.py#L74 ), and I
hope that the expected states
can be calculated from the API call/parameters automatically, but I'm not 100%
sure. So - yes, an additional
parameter might be required.
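The non-blocking pattern under discussion can be sketched with the stdlib futures API (a toy create_server() stands in for the real novaclient call plus status polling; the proposed create_async() would return an equivalent future object):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def create_server(name):
    # Stands in for "start the VM build, then poll until 'active'/'error'".
    time.sleep(0.01)
    return {"name": name, "status": "ACTIVE"}

with ThreadPoolExecutor(max_workers=2) as pool:
    f1 = pool.submit(create_server, "vm-1")  # like servers.create_async()
    f2 = pool.submit(create_server, "vm-2")  # both builds run concurrently
    # ... the caller is free to do other work here ...
    vms = [f1.result(), f2.result()]  # block only when results are needed

print([vm["status"] for vm in vms])  # -> ['ACTIVE', 'ACTIVE']
```

The key point is that result() raises the task's exception (or a timeout) in the caller, which is where the shared check/retry/cleanup logic would live.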


>
> -Angus
>
>
>>
 This allows to unify(and optimize) monitoring cycles, retries, etc.
 Please found complete BP at
 https://github.com/koder-ua/os_api/blob/master/README.md

 Thanks
 --
 Kostiantyn Danilov aka koder 
 Principal software engineer, Mirantis

 skype:koder.ua
 http://koder-ua.blogspot.com/
 http://mirantis.com


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@

[openstack-dev] [neutron] Meeting tomorrow: Going over Critical and High priority Kilo-2 specs

2015-01-12 Thread Kyle Mestery
Folks:

During tomorrow's Neutron meeting, I'd like to spend a little time going
over the approved specs marked as Critical and High priority [1]. We're
about 3 weeks out from Kilo-2, so I'd like to get a feel for how these are
coming along. If you are assigned one of these specs and can't make the
meeting, feel free to find me on IRC, reply to the thread, or update your
BP in LP with any remarks there.

Thanks!
Kyle

[1] https://wiki.openstack.org/wiki/Network/Meetings#On_Demand_Agenda
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Vancouver Design Summit format changes

2015-01-12 Thread Kyle Mestery
On Fri, Jan 9, 2015 at 8:50 AM, Thierry Carrez 
wrote:

> Hi everyone,
>
> The OpenStack Foundation staff is considering a number of changes to the
> Design Summit format for Vancouver, changes on which we'd very much like
> to hear your feedback.
>
> The problems we are trying to solve are the following:
> - Accommodate the needs of more "OpenStack projects"
> - Reduce separation and perceived differences between the Ops Summit and
> the Design/Dev Summit
> - Create calm and less-crowded spaces for teams to gather and get more
> work done
>
> While some sessions benefit from large exposure, loads of feedback and
> large rooms, some others are just workgroup-oriented work sessions that
> benefit from smaller rooms, less exposure and more whiteboards. Smaller
> rooms are also cheaper space-wise, so they allow us to scale more easily
> to a higher number of "OpenStack projects".
>
> My proposal is the following. Each project team would have a track at
> the Design Summit. Ops feedback is in my opinion part of the design of
> OpenStack, so the Ops Summit would become a track within the
> forward-looking "Design Summit". Tracks may use two separate types of
> sessions:
>
> * Fishbowl sessions
> Those sessions are for open discussions where a lot of participation and
> feedback is desirable. Those would happen in large rooms (100 to 300
> people, organized in fishbowl style with a projector). Those would have
> catchy titles and appear on the general Design Summit schedule. We would
> have space for 6 or 7 of those in parallel during the first 3 days of
> the Design Summit (we would not run them on Friday, to reproduce the
> successful Friday format we had in Paris).
>
> * Working sessions
> Those sessions are for a smaller group of contributors to get specific
> work done or prioritized. Those would happen in smaller rooms (20 to 40
> people, organized in boardroom style with loads of whiteboards). Those
> would have a blanket title (like "infra team working session") and
> redirect to an etherpad for more precise and current content, which
> should limit out-of-team participation. Those would replace "project
> pods". We would have space for 10 to 12 of those in parallel for the
> first 3 days, and 18 to 20 of those in parallel on the Friday (by
> reusing fishbowl rooms).
>
> Each project track would request some mix of sessions ("We'd like 4
> fishbowl sessions, 8 working sessions on Tue-Thu + half a day on
> Friday") and the TC would arbitrate how to allocate the limited
> resources. Agenda for the fishbowl sessions would need to be published
> in advance, but agenda for the working sessions could be decided
> dynamically from an etherpad agenda.
>
> By making larger use of smaller spaces, we expect that setup to let us
> accommodate the needs of more projects. By merging the two separate Ops
> Summit and Design Summit events, it should make the Ops feedback an
> integral part of the Design process rather than a second-class citizen.
> By creating separate working session rooms, we hope to evolve the "pod"
> concept into something where it's easier for teams to get work done
> (less noise, more whiteboards, clearer agenda).
>
> What do you think ? Could that work ? If not, do you have alternate
> suggestions ?
>
> This looks great, thanks for continuing to evolve the Summit format!

Kyle

--
> Thierry Carrez (ttx)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Puppet] Manifests for granular deploy steps and testing results against the host OS

2015-01-12 Thread Sergii Golovatiuk
Hi,

Puppet OpenStack community uses Beaker for acceptance testing. I would
consider it as option [2]

[2] https://github.com/puppetlabs/beaker

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Mon, Jan 12, 2015 at 2:53 PM, Bogdan Dobrelya 
wrote:

> Hello.
>
> We are working on the modularization of Openstack deployment by puppet
> manifests in Fuel library [0].
>
> Each deploy step should be post-verified with some testing framework as
> well.
>
> I believe the framework should:
> * be shipped as a part of Fuel library for puppet manifests instead of
> orchestration or Nailgun backend logic;
> * allow the deployer to verify results right in-place, at the node being
> deployed, for example, with a rake tool;
> * be compatible / easy to integrate with the existing orchestration in
> Fuel and Mistral as an option?
>
> It looks like the test resources provided by Serverspec [1] are a good
> option; what do you think?
>
> What plans does the Fuel Nailgun team have for testing the results of deploy
> steps (aka tasks)? The spec for the blueprint gives no clear answer.
>
> [0]
> https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization
> [1] http://serverspec.org/resource_types.html
>
> --
> Best regards,
> Bogdan Dobrelya,
> Skype #bogdando_at_yahoo.com
> Irc #bogdando
>


Re: [openstack-dev] [nova][NFV][qa][Telco] Testing NUMA, CPU pinning and large pages

2015-01-12 Thread Daniel P. Berrange
On Mon, Jan 12, 2015 at 02:47:19PM +0100, Marc Koderer wrote:
> Hi Vladik,
> 
> I added the [Telco] tag.
> see below.. 
> 
> Am 12.01.2015 um 03:02 schrieb Vladik Romanovsky 
> :
> 
> > Hi everyone,
> > 
> > Following Steve Gordon's email [1], regarding CI for NUMA, SR-IOV, and other
> > features, I'd like to start a discussion about the NUMA testing in 
> > particular.
> > 
> > Recently we have started work to test some of these features.
> > The current plan is to use the functional tests in the Nova tree to
> > exercise the code paths for NFV use cases. In general, these will contain
> > tests to cover various scenarios regarding NUMA, CPU pinning and large
> > pages, and validate correct placement/scheduling.
> 
> I think we need to determine where these patches belong.
> IMHO the Nova tree makes sense, but I am unsure if Tempest is the right place.
> I would say all tests with a general purpose can be located in Tempest,
> especially scenario tests.
> 
> Since we are already planning to have an external CI system, it would make
> sense to keep them somewhere outside and use the tempest lib (when ready).

NUMA, huge pages & CPU pinning are all general-purpose Nova features. While
NFV / telcos will be large users of them, they're not the only ones. As such,
these features should be tested in a general Nova test suite, as we would
for any other Nova functionality, not in a telco-specific test suite, as
that just reinforces the impression that these are niche features only
useful for a few use cases.
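
For readers following along, the behaviour under discussion is requested through flavor extra specs. A minimal sketch with python-novaclient — the flavor name/sizes and the `nova` client object are illustrative assumptions, and the spec keys are the Kilo-era ones, so double-check against current Nova docs:

```python
# Sketch: requesting NUMA, CPU pinning and large pages via flavor extra specs.
extra_specs = {
    "hw:numa_nodes": "2",          # spread the guest across two NUMA nodes
    "hw:cpu_policy": "dedicated",  # pin each vCPU to a dedicated host pCPU
    "hw:mem_page_size": "large",   # back guest RAM with large pages
}

# flavor = nova.flavors.create("numa.pinned", ram=4096, vcpus=4, disk=20)
# flavor.set_keys(extra_specs)  # scheduling/placement then honors the specs
```

A functional test can then assert that a guest booted with such a flavor gets the expected topology, or that an impossible topology fails to schedule.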

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



[openstack-dev] [nova] Request Spec Freeze Exception (DRBD for Nova)

2015-01-12 Thread Philipp Marek
Hello all,

in Paris (and later on, on IRC and the mailing list) I began to ask around 
about providing a DRBD storage driver for Nova.
This is an alternative to using iSCSI for block storage access, and would 
be especially helpful for backends already using DRBD for replicated 
storage.


The spec at

https://review.openstack.org/#/c/134153/

was not approved in December on the grounds that the DRBD Cinder driver 

https://review.openstack.org/#/c/140451/

should be merged first; because of (network) timeouts during the K-1 
milestone (and then merge conflicts, rebased dependencies, etc.) it wasn't
merged until recently (Jan 5th).

Now that the Cinder driver is already upstream, we'd like to ask for 
approval of the Nova driver - it would provide quite a performance boost 
over pushing all block storage data through iSCSI.


Thank you for your kind consideration!


Regards,

Phil

-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.



[openstack-dev] [Fuel] [Puppet] Manifests for granular deploy steps and testing results against the host OS

2015-01-12 Thread Bogdan Dobrelya
Hello.

We are working on the modularization of Openstack deployment by puppet
manifests in Fuel library [0].

Each deploy step should be post-verified with some testing framework as
well.

I believe the framework should:
* be shipped as a part of Fuel library for puppet manifests instead of
orchestration or Nailgun backend logic;
* allow the deployer to verify results right in-place, at the node being
deployed, for example, with a rake tool;
* be compatible / easy to integrate with the existing orchestration in
Fuel and Mistral as an option?

It looks like the test resources provided by Serverspec [1] are a good
option; what do you think?

What plans does the Fuel Nailgun team have for testing the results of deploy
steps (aka tasks)? The spec for the blueprint gives no clear answer.

[0] https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization
[1] http://serverspec.org/resource_types.html
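
Serverspec itself is Ruby, but the shape of such in-place, on-node post-deploy checks is easy to sketch. A hedged Python analogue — the service and port names in the usage comment are hypothetical:

```python
# Minimal in-place post-deploy checks, in the spirit of Serverspec's
# resource types, expressed as plain Python helpers.
import socket
import subprocess

def service_running(name):
    # sysvinit/upstart-era check: exit code 0 means the service is up
    return subprocess.call(["service", name, "status"]) == 0

def port_listening(port, host="127.0.0.1", timeout=1.0):
    # True if something accepts TCP connections on host:port
    with socket.socket() as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# e.g. after a hypothetical "rabbitmq" deploy step, on the node itself:
# assert service_running("rabbitmq-server")
# assert port_listening(5672)
```

Such assertions can run right on the node being deployed (e.g. from a rake or python task), which matches the "verify results in-place" requirement above.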

-- 
Best regards,
Bogdan Dobrelya,
Skype #bogdando_at_yahoo.com
Irc #bogdando



Re: [openstack-dev] [nova][NFV][qa][Telco] Testing NUMA, CPU pinning and large pages

2015-01-12 Thread Marc Koderer
Hi Vladik,

I added the [Telco] tag.
see below.. 

Am 12.01.2015 um 03:02 schrieb Vladik Romanovsky 
:

> Hi everyone,
> 
> Following Steve Gordon's email [1], regarding CI for NUMA, SR-IOV, and other
> features, I'd like to start a discussion about the NUMA testing in particular.
> 
> Recently we have started work to test some of these features.
> The current plan is to use the functional tests in the Nova tree to exercise
> the code paths for NFV use cases. In general, these will contain tests
> to cover various scenarios regarding NUMA, CPU pinning and large pages, and
> validate correct placement/scheduling.

I think we need to determine where these patches belong.
IMHO the Nova tree makes sense, but I am unsure if Tempest is the right place.
I would say all tests with a general purpose can be located in Tempest,
especially scenario tests.

Since we are already planning to have an external CI system, it would make
sense to keep them somewhere outside and use the tempest lib (when ready).

Regards
Marc

> In addition to the functional tests in Nova, we have also proposed two basic
> scenarios in Tempest [2][3]. One to make sure that an instance can boot with a
> minimal NUMA configuration (a topology that every host should have) and
> one that would request an "impossible" topology and fail with an expected
> exception.
> 
> This work doesn't eliminate the need for testing on real hardware; however,
> these tests should provide coverage for the features that are currently being
> submitted upstream and hopefully be a good starting point for future testing.
> 
> Thoughts?
> 
> Vladik
> 
> [1] 
> http://lists.openstack.org/pipermail/openstack-dev/2014-November/050306.html
> [2] https://review.openstack.org/143540
> [3] https://review.openstack.org/143541
> 
> 


[openstack-dev] [nova] Requesting exception for resource-object-models blueprint spec

2015-01-12 Thread Jay Pipes

https://review.openstack.org/#/c/127609/

This is a fundamental building block for the #3 priority (Scheduler) 
work in Kilo. It's been through 11 revisions so far and has support from 
at least one nova-driver and 4 non-drivers.


This work is a building block for the scheduler because it changes the 
way we publish and consume a set of resources managed by the resource 
tracker and scheduler subsystems. It also replaces the extensible 
resource tracker with a more robust method of adding new resource classes.


Thanks,
-jay



Re: [openstack-dev] Future-based api for openstack clients calls, that starts background tasks

2015-01-12 Thread Angus Salkeld
On Mon, Jan 12, 2015 at 10:17 PM, Konstantin Danilov 
wrote:

> Boris,
>
> Moving from sync HTTP to something like WebSockets requires a lot of work
> and is not directly connected with the API issue. When OpenStack API
> servers begin to support WebSockets, it will be easy to change the
> implementation of the monitoring thread without breaking compatibility.
> At the moment, periodic polling from an additional thread looks reasonable
> to me, and it creates the same amount of HTTP requests as the current
> implementation.
>
> The BP is not about improving performance, but about providing a
> convenient and common API to handle background tasks.
>
> > So we won't need to retrieve information about the object 100500 times.
> As I said before, this API creates the same amount of load as any code
> we currently use to check background tasks.
> It can even decrease load due to request aggregation in some cases (but
> there are points to discuss).
>
> > As well this pattern doesn't look great.
> > I would prefer to see something like:
> > vm = novaclient.servers.create(, sync=True)
>
> This is a completely different pattern. It is a blocking call, which
> doesn't allow you to start two (or more) background tasks from the same
> thread and do some calculations while they run in the background.
>

Except if you use threads (eventlet or other) - I am still struggling to
enjoy Futures/yield-based flow control; lost battle I guess :(.


>
>
>
>
>
> On Mon, Jan 12, 2015 at 1:42 PM, Boris Pavlovic 
> wrote:
>
>> Konstantin,
>>
>>
>> I believe it's better to work on the server side, and use some modern
>> approach like WebSockets for async operations. So we won't need to
>> retrieve information about the object 100500 times. And then use this
>> feature in clients.
>>
>>  create_future = novaclient.servers.create_async()
>>> .
>>> vm = create_future.result()
>>
>>
>> As well this pattern doesn't look great.
>>
>> I would prefer to see something like:
>>
>>   vm = novaclient.servers.create(, sync=True)
>>
>>
>> Best regards,
>> Boris Pavlovic
>>
>>
>>
>>
>>
>> On Mon, Jan 12, 2015 at 2:30 PM, Konstantin Danilov <
>> kdani...@mirantis.com> wrote:
>>
>>> Hi all.
>>>
>>> There is a set of OpenStack API functions which start background actions
>>> and return preliminary results - like 'novaclient.create'. Those
>>> functions require periodically checking results and handling
>>> timeouts/errors (and often cleanup + restart helps to fix an error).
>>>
>>> Check/retry/cleanup code is duplicated over a lot of core projects.
>>> As examples - heat, tempest, rally, etc., and definitely in many
>>> third-party scripts.
>>>
>>
We have some very similar code at the moment, but we are keen to move away
from it to
something like making use of rpc ".{start,end}" notifications to reduce the
load we put on keystone and
friends.


>
>>> I propose to provide a common high-level API for such functions, which
>>> uses 'futures' (http://en.wikipedia.org/wiki/Futures_and_promises) as a
>>> way to represent a background task.
>>>
>>> The idea is to add to each background-task-starter function a
>>> complementary call that returns a 'future' object. E.g.
>>>
>>> create_future = novaclient.servers.create_async()
>>> .
>>> vm = create_future.result()
>>>
>>
Is that going to return on any state change or do you pass in a list of
acceptable states?

-Angus


>
>>> This allows unifying (and optimizing) monitoring cycles, retries, etc.
>>> Please found complete BP at
>>> https://github.com/koder-ua/os_api/blob/master/README.md
>>>
>>> Thanks
>>> --
>>> Kostiantyn Danilov aka koder 
>>> Principal software engineer, Mirantis
>>>
>>> skype:koder.ua
>>> http://koder-ua.blogspot.com/
>>> http://mirantis.com
>>>
>>>
>>
>
>
> --
> Kostiantyn Danilov aka koder.ua
> Principal software engineer, Mirantis
>
> skype:koder.ua
> http://koder-ua.blogspot.com/
> http://mirantis.com
>

Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

2015-01-12 Thread Eduard Matei
Hi,
Regarding the last issue, I fixed it by logging in and manually running "pip
install docutils". The image was created successfully.

Now the problem is that nodepool is not able to log in to instances
created from that image.
I have NODEPOOL_SSH_KEY exported in the screen session where nodepool is
running, and I am also able to log in to the instance as user nodepool, but
nodepoold gives an error:
2015-01-12 14:19:03,095 DEBUG paramiko.transport: Switch to new keys ...
2015-01-12 14:19:03,109 DEBUG paramiko.transport: Trying key
c03fbf64440cd0c2ecbc07ce4ed59804 from /home/nodepool/.ssh/id_rsa
2015-01-12 14:19:03,135 DEBUG paramiko.transport: userauth is OK
2015-01-12 14:19:03,162 INFO paramiko.transport: Authentication (publickey)
failed.
2015-01-12 14:19:03,185 DEBUG paramiko.transport: Trying discovered key
c03fbf64440cd0c2ecbc07ce4ed59804 in /home/nodepool/.ssh/id_rsa
2015-01-12 14:19:03,187 DEBUG paramiko.transport: userauth is OK
^C2015-01-12 14:19:03,210 INFO paramiko.transport: Authentication
(publickey) failed.
2015-01-12 14:19:03,253 DEBUG paramiko.transport: EOF in transport thread
2015-01-12 14:19:03,254 INFO nodepool.utils: Password auth exception. Try
number 4...


echo $NODEPOOL_SSH_KEY
B3NzaC1yc2EDAQABAAABAQC9gP6qui1fmHrj02p6OGvnz7kMTJ2rOC3SBYP/Ij/6yz+SU8rL5rqL6jqT30xzy9t1q0zsdJCNB2jExD4xb+NFbaoGlvjF85m12eFqP4CQenxUOdYAepf5sjV2l8WAO3ylspQ78ipLKec98NeKQwLrHB+xon6QfAHXr6ZJ9NRZbmWw/OdpOgAG9Cab+ELTmkfEYgQz01cZE22xEAMvPXz57KlWPvxtE7YwYWy180Yib97EftylsNkrchbSXCwiqgKUf04qWhTgNrVuRJ9mytil6S82VNDxHzTzeCCxY412CV6dDJNLzJItpf/CXQelj/6wJs1GgFl5GWJnqortMR2v

 cat /home/nodepool/.ssh/id_rsa.pub
ssh-rsa
B3NzaC1yc2EDAQABAAABAQC9gP6qui1fmHrj02p6OGvnz7kMTJ2rOC3SBYP/Ij/6yz+SU8rL5rqL6jqT30xzy9t1q0zsdJCNB2jExD4xb+NFbaoGlvjF85m12eFqP4CQenxUOdYAepf5sjV2l8WAO3ylspQ78ipLKec98NeKQwLrHB+xon6QfAHXr6ZJ9NRZbmWw/OdpOgAG9Cab+ELTmkfEYgQz01cZE22xEAMvPXz57KlWPvxtE7YwYWy180Yib97EftylsNkrchbSXCwiqgKUf04qWhTgNrVuRJ9mytil6S82VNDxHzTzeCCxY412CV6dDJNLzJItpf/CXQelj/6wJs1GgFl5GWJnqortMR2v
jenkins@jenkins-cinderci

ssh ubuntu@10.100.128.136 -v
OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /home/nodepool/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to 10.100.128.136 [10.100.128.136] port 22.
debug1: Connection established.

debug1: Offering RSA public key: /home/nodepool/.ssh/id_rsa
debug1: Server accepts key: pkalg ssh-rsa blen 279
debug1: key_parse_private2: missing begin marker
debug1: read PEM private key done: type RSA
debug1: Authentication succeeded (publickey).
Authenticated to 10.100.128.136 ([10.100.128.136]:22).
...

I was able to log in to the "template" instance and am also able to log in
to the "slave" instances.
nodepoold was also able to log in to the "template" instance, but now it
fails logging in to the "slave".

I tried running it as either the nodepool or the jenkins user; same result.
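
One thing worth noting in the logs above: paramiko prints the MD5 fingerprint of each key it offers ("Trying key c03fbf64... from /home/nodepool/.ssh/id_rsa"). Computing the same digest from the public key blob in the slave's authorized_keys shows whether the key being offered is the one the image actually trusts. A stdlib-only sketch (whether this matches paramiko's exact fingerprint format is an assumption worth verifying):

```python
# MD5 fingerprint of an OpenSSH public key blob, in the hex form that
# paramiko logs when it tries a key.
import base64
import hashlib

def key_md5_fingerprint(pubkey_b64):
    """Hex MD5 digest of the decoded base64 key blob."""
    return hashlib.md5(base64.b64decode(pubkey_b64)).hexdigest()

# e.g. with the base64 field (second column) of ~/.ssh/id_rsa.pub:
# print(key_md5_fingerprint("AAAAB3NzaC1yc2E..."))
```

If the fingerprints differ, nodepool is offering a different key than the one baked into the image.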

Thanks,
Eduard

On Mon, Jan 12, 2015 at 2:09 PM, Eduard Matei <
eduard.ma...@cloudfounders.com> wrote:

> Hi,
> Back with another error during image creation with nodepool:
> 2015-01-12 13:05:17,775 INFO nodepool.image.build.local_01.d-p-c:
> Downloading python-daemon-2.0.1.tar.gz (62kB)
> 2015-01-12 13:05:18,022 INFO nodepool.image.build.local_01.d-p-c:
> Traceback (most recent call last):
> 2015-01-12 13:05:18,023 INFO nodepool.image.build.local_01.d-p-c:
> File "", line 20, in 
> 2015-01-12 13:05:18,023 INFO nodepool.image.build.local_01.d-p-c:
> File "/tmp/pip-build-r6RJKq/python-daemon/setup.py", line 27, in 
> 2015-01-12 13:05:18,024 INFO nodepool.image.build.local_01.d-p-c:
> import version
> 2015-01-12 13:05:18,024 INFO nodepool.image.build.local_01.d-p-c:
> File "version.py", line 51, in 
> 2015-01-12 13:05:18,024 INFO nodepool.image.build.local_01.d-p-c:
> import docutils.core
> 2015-01-12 13:05:18,024 INFO nodepool.image.build.local_01.d-p-c:
> ImportError: No module named docutils.core
> 2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c:
> Complete output from command python setup.py egg_info:
> 2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c:
> Traceback (most recent call last):
> 2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c:
> 2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c:
> File "", line 20, in 
> 2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c:
> 2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c:
> File "/tmp/pip-build-r6RJKq/python-daemon/setup.py", line 27, in 
> 2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c:
> 2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c:
> import version
> 2015-01-12 13:05:18,026 INFO nodepool.image.build.local_01.d-p-c:
> 2015-01-12 13:05:18,026 INFO nodepool.image.build.local_01.d-p-c:
> File "version.py", line 51, in 
> 2015-01-12 13:05:18,026 INFO nodepool.image.build.local_01.d-p-c:
> 2015-01-12 13:05:18,026 INFO nodepool.image.build.

[openstack-dev] [mistral] Cancelling today's team meeting

2015-01-12 Thread Renat Akhmerov
Hi,

We decided to cancel today’s team meeting because some key members of the team 
won’t be present.

The next one will be held on Jan 19.

Renat Akhmerov
@ Mirantis Inc.






Re: [openstack-dev] [Fuel] Image based provisioning

2015-01-12 Thread Alexander Gordeev
Hello,

Andrew, thank you for pointing out all the issues. I left more
detailed comments inline.

On Tue, Jan 6, 2015 at 3:51 AM, Andrew Woodward  wrote:
> Here is a list of the issues I ran into using IBP before the 23rd. 5
> appears to not be merged yet and must be resolved prior to making IBP
> the default as you can't restart a provisioned node.
>
> 1. a full cobbler template is generated for the IBP node, if you
> wanted to re-prov the node, you would have to erase the cobbler
> profile, bootstrap and call the node provision api. If you forced it
> back to netboot (which can be done with installer methods) it loads
> the installer instead of the bootstrap image
>

Sounds like we have a bug here.

> 2. We need to be careful when considering removing cobbler from fuel,
> its still being used in IBP to manage dnsmasq (dhcp lease for
> fuelweb_admin iface) and bootp/PXE loading profiles
>

Yes, indeed. We're going to implement our own dnsmasq-driven service
to manage everything cobbler previously did for us.

> 3. After a time all DNS names for nodes expire (ssh node-1 -> Could
> not resolve hostname) even though they are still in cobbler (cobbler
> system list)
>

Definitely a bug.

> 4. fuel-agent log is not in logs UI
>

Again, it's a bug.

> 5. image based nodes won't set up network after first boot
> https://bugs.launchpad.net/fuel/+bug/1398207
>

The fix is available on the review board. I hope it'll be merged soon

> 6 image based nodes are basically impossible to read network settings
> on unless you know everything about cloud-init
>

Sorry, I didn't get you. What did you mean?

For image-based provisioning, only the interface facing the "admin network"
will be set up by cloud-init. All other network configuration will be done
later, without cloud-init.

>
> On Wed, Dec 17, 2014 at 3:08 AM, Vladimir Kozhukalov
>  wrote:
>> In the case of image-based provisioning we need either to update the image
>> or run "yum update"/"apt-get upgrade" right after first boot (the second
>> option partly devalues the advantages of the image-based scheme). Besides,
>> we are planning to re-implement the image build script so as to be able to
>> build images on the master node (but unfortunately 6.1 is not a realistic
>> estimate for that).
>>
>> Vladimir Kozhukalov
>>
>> On Wed, Dec 17, 2014 at 5:03 AM, Mike Scherbakov 
>> wrote:
>>>
>>> Dmitry,
>>> as part of 6.1 roadmap, we are going to work on patching feature.
>>> There are two types of workflow to consider:
>>> - patch existing environment (already deployed nodes, aka "target" nodes)
>>> - ensure that new nodes, added to the existing and already patched envs,
>>> will install updated packages too.
>>>
>>> In case of anakonda/preseed install, we can simply update repo on master
>>> node and run createrepo/etc. What do we do in case of image? Will we need a
>>> separate repo alongside with main one, "updates" repo - and do
>>> post-provisioning "yum update" to fetch all patched packages?
>>>
>>> On Tue, Dec 16, 2014 at 11:09 PM, Andrey Danin 
>>> wrote:

 Adding Mellanox team explicitly.

 Gil, Nurit, Aviram, can you confirm that you tested that feature? It can
 be enabled on every fresh ISO. You just need to enable the Experimental 
 mode
 (please, see the documentation for instructions).

 On Tuesday, December 16, 2014, Dmitry Pyzhov 
 wrote:
>
> Guys,
>
> we are about to enable image based provisioning in our master by
> default. I'm trying to figure out requirement for this change. As far as I
> know, it was not tested on scale lab. Is it true? Have we ever run full
> system tests cycle with this option?
>
> Do we have any other pre-requirements?



 --
 Andrey Danin
 ada...@mirantis.com
 skype: gcon.monolake



>>>
>>>
>>> --
>>> Mike Scherbakov
>>> #mihgen
>>>
>>>
>>>
>>
>>
>
>
>
> --
> Andrew
> Mirantis
> Ceph community
>


Re: [openstack-dev] [devstack]Openstack installation issue.

2015-01-12 Thread Sean Dague
Is your root filesystem full? The log clearly shows the chown stack
/etc/keystone passing right before the copy is attempted.
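
A quick way to check that hypothesis from Python (stdlib, Python 3.3+; `df -h /` from the shell works just as well):

```python
# Report free space on the root filesystem; a nearly-zero value would
# explain the failed copy into /etc/keystone.
import shutil

def free_gib(path="/"):
    usage = shutil.disk_usage(path)  # named tuple: total, used, free (bytes)
    return usage.free / 2**30

# print("free on /: %.1f GiB" % free_gib())
```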

-Sean

On 01/12/2015 06:59 AM, Abhishek Shrivastava wrote:
> Same as before.
> 
> On Mon, Jan 12, 2015 at 5:04 PM, Samta Rangare  > wrote:
> 
> Whats the log now?
> 
> On Mon, Jan 12, 2015 at 4:53 PM, Abhishek Shrivastava
> mailto:abhis...@cloudbyte.com>> wrote:
> 
> Hi Samta,
> 
> Thanks for the suggestion but still problem remains the same.
> 
> On Mon, Jan 12, 2015 at 4:29 PM, Samta Rangare
> mailto:samtarang...@gmail.com>> wrote:
> 
> Hey abhishek,
> 
> As a quick fix to this problem, edit the file
> devstack/lib/keystone (around line 170),
> in the function
> function configure_keystone {
> and add sudo to this line:
> sudo cp -p $KEYSTONE_DIR/etc/keystone.conf.sample $KEYSTONE_CONF
> 
> Regards
> Samta
> 
> 
> 
> 
> On Mon, Jan 12, 2015 at 4:01 PM, Abhishek Shrivastava
> mailto:abhis...@cloudbyte.com>> wrote:
> 
> Hi all,
> 
> I am writing this again because I am getting the same
> error from past week while I'm installing Openstack
> through devstack on Ubuntu 13.10.
> 
> I am attaching the new log file, please go through it
> and if anyone can provide me the solution please do reply.
> 
> -- 
> *Thanks & Regards,
> *
> *Abhishek*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> -- 
> *Thanks & Regards,
> *
> *Abhishek*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> -- 
> *Thanks & Regards,
> *
> *Abhishek*
> 
> 
> 


-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [oslo] dropping namespace packages

2015-01-12 Thread Ihar Hrachyshka

You rock, man. Thanks, I'll steal those. :)
/Ihar

On 01/11/2015 09:39 PM, Davanum Srinivas wrote:

Jay,

I have a hacking rule in nova already [1] and am updating the rule in
the 3 reviews i have for oslo_utils, oslo_middleware and oslo_config
[2] in Nova

thanks,
dims

[1] https://github.com/openstack/nova/blob/master/nova/hacking/checks.py#L452
[2] 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:remove-oslo-namespace,n,z
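
For reference, such a hacking check looks roughly like this — a sketch modeled loosely on the nova rule linked in [1]; the exact regex, covered packages, and error code in nova may differ:

```python
# Sketch of a hacking check that flags imports from the deprecated
# oslo.* namespace packages in favor of the flat oslo_* packages.
import re

OSLO_NAMESPACE_RE = re.compile(
    r"(from|import)\s+oslo\.(config|utils|middleware|serialization|concurrency)\b")

def check_oslo_namespace_imports(logical_line):
    # hacking checks yield (offset, message) tuples for each violation
    if OSLO_NAMESPACE_RE.match(logical_line):
        yield (0, "N333: use the oslo_* package, not the oslo.* namespace")
```

Registered as a local check, this keeps old-style imports from sneaking back in after the migration.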

On Sat, Jan 10, 2015 at 9:26 PM, Jay S. Bryant
 wrote:

Ihar,

I agree that we should do something to enforce using the appropriate
namespace so that we don't have the wrong usage sneak in.

I haven't gotten any rules written yet.  I have had to attend to a family
commitment the last few days.  I hope that I can tackle the namespace changes
next week.

Jay

On 01/08/2015 12:24 PM, Ihar Hrachyshka wrote:

On 01/08/2015 07:03 PM, Doug Hellmann wrote:

I’m not sure that’s something we need to enforce. Liaisons should be
updating projects now as we release libraries, and then we’ll consider
whether we can drop the namespace packages when we plan the next cycle.


Without a hacking rule, there is a chance old namespace usage will sneak
in, and then we'll need to get back to updating imports. I would rather
avoid that and get the migration committed with enforcement.

/Ihar












Re: [openstack-dev] Future-based api for openstack clients calls, that starts background tasks

2015-01-12 Thread Konstantin Danilov
Boris,

Moving from sync HTTP to something like WebSockets requires a lot of work
and is not directly connected with the API issue. When OpenStack API servers
begin to support WebSockets, it will be easy to change the implementation of
the monitoring thread without breaking compatibility.
At the moment, periodic polling from an additional thread looks reasonable
to me, and it creates the same amount of HTTP requests as the current
implementation.

The BP is not about improving performance, but about providing a convenient
and common API to handle background tasks.

> So we won't need to retrieve information about the object 100500 times.
As I said before, this API creates the same amount of load as any code we
currently use to check background tasks.
It can even decrease load due to request aggregation in some cases (but
there are points to discuss).

> As well this pattern doesn't look great.
> I would prefer to see something like:
> vm = novaclient.servers.create(, sync=True)

This is a completely different pattern. It is a blocking call, which doesn't
allow you to start two (or more) background tasks from the same thread and
do some calculations while they run in the background.





On Mon, Jan 12, 2015 at 1:42 PM, Boris Pavlovic 
wrote:

> Konstantin,
>
>
> I believe it's better to work on the server side, and use some modern
> approach like WebSockets for async operations. So we won't need to retrieve
> information about the object 100500 times. And then use this feature in
> clients.
>
>  create_future = novaclient.servers.create_async()
>> .
>> vm = create_future.result()
>
>
> As well this pattern doesn't look great.
>
> I would prefer to see something like:
>
>   vm = novaclient.servers.create(, sync=True)
>
>
> Best regards,
> Boris Pavlovic
>
>
>
>
>
> On Mon, Jan 12, 2015 at 2:30 PM, Konstantin Danilov  > wrote:
>
>> Hi all.
>>
>> There a set of openstack api functions which starts background actions
>> and return preliminary results - like 'novaclient.create'. Those functions
>> requires periodically check results and handle timeouts/errors
>> (and often cleanup + restart help to fix an error).
>>
>> Check/retry/cleanup code duplicated over a lot of core projects.
>> As examples - heat, tempest, rally, etc and definitely in many
>> third-party scripts.
>>
>> I propose to provide common higth-level API for such functions, which uses
>> 'futures' (http://en.wikipedia.org/wiki/Futures_and_promises) as a way to
>> present background task.
>>
>> Idea is to add to each background-task-starter function a complimentary
>> call,
>> that returns 'future' object. E.g.
>>
>> create_future = novaclient.servers.create_async()
>> .
>> vm = create_future.result()
>>
>> This allows to unify(and optimize) monitoring cycles, retries, etc.
>> Please found complete BP at
>> https://github.com/koder-ua/os_api/blob/master/README.md
>>
>> Thanks
>> --
>> Kostiantyn Danilov aka koder 
>> Principal software engineer, Mirantis
>>
>> skype:koder.ua
>> http://koder-ua.blogspot.com/
>> http://mirantis.com
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kostiantyn Danilov aka koder.ua
Principal software engineer, Mirantis

skype:koder.ua
http://koder-ua.blogspot.com/
http://mirantis.com


Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

2015-01-12 Thread Eduard Matei
Hi,
Back with another error during image creation with nodepool:
2015-01-12 13:05:17,775 INFO nodepool.image.build.local_01.d-p-c:
Downloading python-daemon-2.0.1.tar.gz (62kB)
2015-01-12 13:05:18,022 INFO nodepool.image.build.local_01.d-p-c:
Traceback (most recent call last):
2015-01-12 13:05:18,023 INFO nodepool.image.build.local_01.d-p-c:
File "", line 20, in 
2015-01-12 13:05:18,023 INFO nodepool.image.build.local_01.d-p-c:
File "/tmp/pip-build-r6RJKq/python-daemon/setup.py", line 27, in 
2015-01-12 13:05:18,024 INFO nodepool.image.build.local_01.d-p-c:
import version
2015-01-12 13:05:18,024 INFO nodepool.image.build.local_01.d-p-c:
File "version.py", line 51, in 
2015-01-12 13:05:18,024 INFO nodepool.image.build.local_01.d-p-c:
import docutils.core
2015-01-12 13:05:18,024 INFO nodepool.image.build.local_01.d-p-c:
ImportError: No module named docutils.core
2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c:
Complete output from command python setup.py egg_info:
2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c:
Traceback (most recent call last):
2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c:
2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c:
File "", line 20, in 
2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c:
2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c:
File "/tmp/pip-build-r6RJKq/python-daemon/setup.py", line 27, in 
2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c:
2015-01-12 13:05:18,025 INFO nodepool.image.build.local_01.d-p-c:
import version
2015-01-12 13:05:18,026 INFO nodepool.image.build.local_01.d-p-c:
2015-01-12 13:05:18,026 INFO nodepool.image.build.local_01.d-p-c:
File "version.py", line 51, in 
2015-01-12 13:05:18,026 INFO nodepool.image.build.local_01.d-p-c:
2015-01-12 13:05:18,026 INFO nodepool.image.build.local_01.d-p-c:
import docutils.core
2015-01-12 13:05:18,026 INFO nodepool.image.build.local_01.d-p-c:
2015-01-12 13:05:18,026 INFO nodepool.image.build.local_01.d-p-c:
ImportError: No module named docutils.core
2015-01-12 13:05:18,026 INFO nodepool.image.build.local_01.d-p-c:
2015-01-12 13:05:18,026 INFO nodepool.image.build.local_01.d-p-c:

2015-01-12 13:05:18,054 INFO nodepool.image.build.local_01.d-p-c:
Command "python setup.py egg_info" failed with error code 1 in
/tmp/pip-build-r6RJKq/python-daemon

The python-daemon pip package fails to install due to an ImportError.

Any ideas how to fix this?
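(For what it's worth, the Fuel client thread later in this digest tracks what
looks like the same python-daemon 2.0 setup.py failure; the fix proposed
there was pinning python-daemon below 2.0. A workaround sketch, assuming pip
drives the install:)

```shell
# Either make docutils available before python-daemon's setup.py runs...
pip install docutils
# ...or pin python-daemon to a version whose setup.py does not import it.
pip install 'python-daemon<2.0'
```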

Thanks,
Eduard

On Fri, Jan 9, 2015 at 10:00 PM, Patrick East 
wrote:

> Thanks for the links!
>
> After digging around in my configs I figured out the issue, I had a typo
> in my JENKINS_SSH_PUBLIC_KEY_NO_WHITESPACE (copy pasta cut off a
> character...). But I managed to put the right one in the key for nova to
> use so it was able to log in to set up the instance, but didn't end up with
> the right thing in the NODEPOOL_SSH_KEY variable.
>
> -Patrick
>
> On Fri, Jan 9, 2015 at 9:25 AM, Asselin, Ramy  wrote:
>
>>
>>
>> Regarding SSH Keys and logging into nodes, you need to set the
>> NODEPOOL_SSH_KEY variable
>>
>> 1.   I documented my notes here
>> https://github.com/rasselin/os-ext-testing-data/blob/master/etc/nodepool/nodepool.yaml.erb.sample#L48
>>
>> 2.   This is also documented ‘officially’ here:
>> https://github.com/openstack-infra/nodepool/blob/master/README.rst
>>
>> 3.   Also, I had an issue getting puppet to do the right thing with
>> keys, so it gets forced here:
>> https://github.com/rasselin/os-ext-testing/blob/master/puppet/install_master.sh#L197
>>
>>
>>
>> Ramy
>>
>>
>>
>> *From:* Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
>> *Sent:* Friday, January 09, 2015 8:58 AM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need
>> help setting up CI
>>
>>
>>
>> Thanks Patrick,
>>
>> Indeed it seems the cloud provider was setting up vms on a bridge whose
>> eth was DOWN so the vms could not connect to the outside world so the
>> prepare script was failing.
>>
>> Looking into that.
>>
>>
>>
>> Thanks,
>>
>>
>>
>> Eduard
>>
>>
>>
>> On Fri, Jan 9, 2015 at 6:44 PM, Patrick East <
>> patrick.e...@purestorage.com> wrote:
>>
>>  Ah yea, sorry, should have specified; I am having it run
>> the prepare_node_devstack.sh from the infra repo. I see it adding the same
>> public key to the user specified in my nodepool.yaml. The strange part (and
>> I need to double check.. feel like it can't be right) is that on my master
>> node the nodepool users id_rsa changed at some point in the process.
>>
>>
>>
>>
>>   -Patrick
>>
>>
>>
>> On Fri, Jan 9, 2015 at 8:38 AM, Jeremy Stanley  wrote:
>>
>> On 2015-01-09 08:28:39 -0800 (-0800), Patrick East wrote:
>> [...]
>> > On a related note, I am having issues with the ssh keys. Nodepool
>> > is able to log in to the node to set up the template and create an
>> > image from it, but then fails to l

Re: [openstack-dev] [Policy][Group-based-policy] ODL Policy Driver Specs

2015-01-12 Thread Sachi Gupta
Hi,

Can anyone explain the difference between gbp group-create and gbp 
policy-target-group-create?

I think both of these work the same way.

Thanks & Regards
Sachi Gupta




From:   Sumit Naiksatam 
To: "OpenStack Development Mailing List (not for usage questions)" 

Date:   11/26/2014 01:35 PM
Subject:Re: [openstack-dev] [Policy][Group-based-policy] ODL 
Policy Driver   Specs



Hi, This GBP spec is currently being worked on:
https://review.openstack.org/#/c/134285/

It will be helpful if you can add "[Policy][Group-based-policy]" in
the subject of your emails, so that the email gets characterized
correctly.

Thanks,
~Sumit.

On Tue, Nov 25, 2014 at 4:27 AM, Sachi Gupta  wrote:
> Hey All,
>
> I need to understand the interaction between the Openstack GBP and the
> Opendaylight GBP project which will be done by ODL Policy driver.
>
> Can someone provide me with specs of ODL Policy driver for making my
> understanding on call flow.
>
>
> Thanks & Regards
> Sachi Gupta
>
> =-=-=
> Notice: The information contained in this e-mail
> message and/or attachments to it may contain
> confidential or privileged information. If you are
> not the intended recipient, any dissemination, use,
> review, distribution, printing or copying of the
> information contained in this e-mail message
> and/or attachments to it are strictly prohibited. If
> you have received this communication in error,
> please notify us by reply e-mail or telephone and
> immediately and permanently delete the message
> and any attachments. Thank you
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [devstack]Openstack installation issue.

2015-01-12 Thread Abhishek Shrivastava
Same as before.

On Mon, Jan 12, 2015 at 5:04 PM, Samta Rangare 
wrote:

> Whats the log now?
>
> On Mon, Jan 12, 2015 at 4:53 PM, Abhishek Shrivastava <
> abhis...@cloudbyte.com> wrote:
>
>> Hi Samta,
>>
>> Thanks for the suggestion but still problem remains the same.
>>
>> On Mon, Jan 12, 2015 at 4:29 PM, Samta Rangare 
>> wrote:
>>
>>> Hey abhishek,
>>>
>>> As a quick fix to this problem, edit this file devstack/lib/keystone +170
>>> in this function
>>> function configure_keystone {
>>> edit this line with adding sudo
>>> sudo cp -p $KEYSTONE_DIR/etc/keystone.conf.sample $KEYSTONE_CONF
>>>
>>> Regards
>>> Samta
>>>
>>>
>>>
>>>
>>> On Mon, Jan 12, 2015 at 4:01 PM, Abhishek Shrivastava <
>>> abhis...@cloudbyte.com> wrote:
>>>
 Hi all,

 I am writing this again because I am getting the same error from past
 week while I'm installing Openstack through devstack on Ubuntu 13.10.

 I am attaching the new log file, please go through it and if anyone can
 provide me the solution please do reply.

 --

 *Thanks & Regards,*
 *Abhishek*


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>>
>> *Thanks & Regards,*
>> *Abhishek*
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

*Thanks & Regards,*
*Abhishek*


Re: [openstack-dev] [Fuel][client][IMPORTANT] Making Fuel Client a separate project

2015-01-12 Thread Roman Prykhodchenko
Hi folks!

This is a status update.
Right now the patch for creating a new project on Stackforge is blocked by a 
bug in Zuul [1], which is actually a bug in python-daemon; the patch for this 
is already published [2] and is waiting to be approved.
After that patch is merged and all projects and groups are created, I will file 
a request to perform the initial setup of core groups. Once those are created, 
it will be possible to land new patches.
Meanwhile the OSCI team is working [3] on adjusting the build system to use 
python-fuelclient from PyPI [4].
Stay tuned for further updates.
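For anyone who wants to try the package ahead of the build-system switch, a
quick sketch (assuming a recent pip; the package name comes from the PyPI
reference below):

```shell
pip install python-fuelclient
```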

References:
Zuul’s tests fail with dependencies error 
https://storyboard.openstack.org/#!/story/2000107 

Pin python-daemon<2.0 https://review.openstack.org/#/c/146350/ 
 
Create repositories for python-fuelclient package 
https://bugs.launchpad.net/fuel/+bug/1409673 

python-fuelclient on PyPi https://pypi.python.org/pypi/python-fuelclient 



> On 9 Jan 2015, at 15:14, Roman Prykhodchenko  > wrote:
> 
> Hi folks,
> 
> according to the Fuel client refactoring plan [1] it’s necessary to move it 
> out to a separate repository on Stackforge.
> 
> The process of doing that consists of three major steps:
> - Landing a patch [2] to project-config for creating a new Stackforge project
> - Creating an initial core group for python-fuelclient
> - Moving all un-merged patches from fuel-web to the python-fuelclient gerrit repo
> 
> The first step of this process has already been started, so I kindly ask all 
> fuelers NOT TO MERGE any new patches to fuel-web IF THEY touch the 
> fuelclient folder.
> After the project is set up I will let everyone know about that and will tell 
> what to do after that so I encourage all interested people to check this 
> thread once in a while.
> 
> 
> # References:
> 
> 1. Re-thinking Fuel Client https://review.openstack.org/#/c/145843 
> 
> 2. Add python-fuelclient to Stackforge 
> https://review.openstack.org/#/c/145843 
> 
> 
> 
> - romcheg



Re: [openstack-dev] Future-based api for openstack clients calls, that starts background tasks

2015-01-12 Thread Boris Pavlovic
Konstantin,


I believe it's better to work on the server side and use some modern approach
like WebSockets for async operations, so we won't need to retrieve information
about an object 100500 times, and then use this feature in the clients.

 create_future = novaclient.servers.create_async()
> .
> vm = create_future.result()


As well this pattern doesn't look great.

I would prefer to see something like:

  vm = novaclient.servers.create(, sync=True)


Best regards,
Boris Pavlovic





On Mon, Jan 12, 2015 at 2:30 PM, Konstantin Danilov 
wrote:

> Hi all.
>
> There a set of openstack api functions which starts background actions
> and return preliminary results - like 'novaclient.create'. Those functions
> requires periodically check results and handle timeouts/errors
> (and often cleanup + restart help to fix an error).
>
> Check/retry/cleanup code duplicated over a lot of core projects.
> As examples - heat, tempest, rally, etc and definitely in many third-party
> scripts.
>
> I propose to provide common higth-level API for such functions, which uses
> 'futures' (http://en.wikipedia.org/wiki/Futures_and_promises) as a way to
> present background task.
>
> Idea is to add to each background-task-starter function a complimentary
> call,
> that returns 'future' object. E.g.
>
> create_future = novaclient.servers.create_async()
> .
> vm = create_future.result()
>
> This allows to unify(and optimize) monitoring cycles, retries, etc.
> Please found complete BP at
> https://github.com/koder-ua/os_api/blob/master/README.md
>
> Thanks
> --
> Kostiantyn Danilov aka koder 
> Principal software engineer, Mirantis
>
> skype:koder.ua
> http://koder-ua.blogspot.com/
> http://mirantis.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [devstack]Openstack installation issue.

2015-01-12 Thread Samta Rangare
What's the log now?

On Mon, Jan 12, 2015 at 4:53 PM, Abhishek Shrivastava <
abhis...@cloudbyte.com> wrote:

> Hi Samta,
>
> Thanks for the suggestion but still problem remains the same.
>
> On Mon, Jan 12, 2015 at 4:29 PM, Samta Rangare 
> wrote:
>
>> Hey abhishek,
>>
>> As a quick fix to this problem, edit this file devstack/lib/keystone +170
>> in this function
>> function configure_keystone {
>> edit this line with adding sudo
>> sudo cp -p $KEYSTONE_DIR/etc/keystone.conf.sample $KEYSTONE_CONF
>>
>> Regards
>> Samta
>>
>>
>>
>>
>> On Mon, Jan 12, 2015 at 4:01 PM, Abhishek Shrivastava <
>> abhis...@cloudbyte.com> wrote:
>>
>>> Hi all,
>>>
>>> I am writing this again because I am getting the same error from past
>>> week while I'm installing Openstack through devstack on Ubuntu 13.10.
>>>
>>> I am attaching the new log file, please go through it and if anyone can
>>> provide me the solution please do reply.
>>>
>>> --
>>>
>>> *Thanks & Regards,*
>>> *Abhishek*
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
>
> *Thanks & Regards,*
> *Abhishek*
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


[openstack-dev] Future-based api for openstack clients calls, that starts background tasks

2015-01-12 Thread Konstantin Danilov
Hi all.

There is a set of OpenStack API functions that start background actions
and return preliminary results - 'novaclient.create', for example. Those functions
require periodically checking results and handling timeouts/errors
(and often a cleanup + restart helps to fix an error).

Check/retry/cleanup code is duplicated across a lot of core projects.
Examples are heat, tempest, rally, etc., and definitely many third-party
scripts.

I propose to provide a common high-level API for such functions, which uses
'futures' (http://en.wikipedia.org/wiki/Futures_and_promises) as a way to
represent a background task.

The idea is to add to each background-task-starter function a complementary
call
that returns a 'future' object. E.g.

create_future = novaclient.servers.create_async()
.
vm = create_future.result()

This allows unifying (and optimizing) monitoring cycles, retries, etc.
Please find the complete BP at
https://github.com/koder-ua/os_api/blob/master/README.md
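A minimal sketch of how such a complementary *_async call could look, using
Python's standard concurrent.futures and a poll loop. FakeServersApi and all
of its methods are hypothetical stand-ins for illustration, not novaclient's
real API.

```python
import time
from concurrent.futures import ThreadPoolExecutor

class FakeServersApi:
    """Hypothetical client whose create() starts a background task."""

    def create(self, name):
        # Pretend the server build completes a short moment from now.
        return {"name": name, "status": "BUILD", "done_at": time.time() + 0.1}

    def get_status(self, server):
        return "ACTIVE" if time.time() >= server["done_at"] else "BUILD"

    def create_async(self, name, poll_interval=0.02):
        # Complementary starter call: returns a future that resolves
        # once the background task reaches ACTIVE.
        def wait_active():
            server = self.create(name)
            while self.get_status(server) != "ACTIVE":
                time.sleep(poll_interval)
            server["status"] = "ACTIVE"
            return server
        return ThreadPoolExecutor(max_workers=1).submit(wait_active)

create_future = FakeServersApi().create_async("vm-1")
# ... other work can happen here while the task runs ...
vm = create_future.result()  # blocks only when the result is needed
print(vm["status"])          # ACTIVE
```

The point of the pattern is that the check/retry loop lives in one place
(inside the async call) instead of being re-implemented by every consumer.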

Thanks
-- 
Kostiantyn Danilov aka koder 
Principal software engineer, Mirantis

skype:koder.ua
http://koder-ua.blogspot.com/
http://mirantis.com


Re: [openstack-dev] [devstack]Openstack installation issue.

2015-01-12 Thread Abhishek Shrivastava
Hi Samta,

Thanks for the suggestion, but the problem still remains the same.

On Mon, Jan 12, 2015 at 4:29 PM, Samta Rangare 
wrote:

> Hey abhishek,
>
> As a quick fix to this problem, edit this file devstack/lib/keystone +170
> in this function
> function configure_keystone {
> edit this line with adding sudo
> sudo cp -p $KEYSTONE_DIR/etc/keystone.conf.sample $KEYSTONE_CONF
>
> Regards
> Samta
>
>
>
>
> On Mon, Jan 12, 2015 at 4:01 PM, Abhishek Shrivastava <
> abhis...@cloudbyte.com> wrote:
>
>> Hi all,
>>
>> I am writing this again because I am getting the same error from past
>> week while I'm installing Openstack through devstack on Ubuntu 13.10.
>>
>> I am attaching the new log file, please go through it and if anyone can
>> provide me the solution please do reply.
>>
>> --
>>
>> *Thanks & Regards,*
>> *Abhishek*
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

*Thanks & Regards,*
*Abhishek*


Re: [openstack-dev] [Cinder] Cutoff deadlines for cinder drivers

2015-01-12 Thread Erlon Cruz
Hi guys,

Thanks for answering my questions. I have two points:

1 - This (removing drivers without CI) is a high-impact change to be
implemented without exhaustive notification and discussion on the mailing
list. I myself was in the meeting, but this decision wasn't crystal clear.
There must be other driver maintainers completely unaware of it.

2 - Building a CI infrastructure and having people maintain the CI for a
new driver in a five-week frame is a lot to ask. Not all companies have the
knowledge and resources necessary to do this in such a short period. We should
consider a grace period of one release, i.e. drivers entering in K have until
L to implement their CIs.

On Mon, Jan 12, 2015 at 4:07 AM, Asselin, Ramy  wrote:

> Feel free to join any of the 3rd party 'mentoring' meetings on IRC
> Freenode #openstack-meeting to help get started, work through issues, etc.
>
> "Third Party meeting for all aspects of Third Party needs: Mondays at 1500
> UTC and Tuesdays at 0800 UTC. Everyone interested in any aspect Third Party
> process is encouraged to attend." [1]
>
> [1] https://wiki.openstack.org/wiki/Meetings/ThirdParty
>
> Ramy
>
> -Original Message-
> From: Mike Perez [mailto:thin...@gmail.com]
> Sent: Sunday, January 11, 2015 6:53 PM
> To: jsbry...@electronicjungle.net; OpenStack Development Mailing List
> (not for usage questions)
> Subject: Re: [openstack-dev] [Cinder] Cutoff deadlines for cinder drivers
>
> On 21:00 Sat 10 Jan , Jay S. Bryant wrote:
> > I think what we discussed was that existing drivers were supposed to
> > have something working by the end of k-2, or at least have something
> > close to working.
> >
> > For new drivers they had to have 3rd party CI working by the end of Kilo.
> >
> > Duncan, correct me if I am wrong.
> >
> > Jay
> > On 01/10/2015 04:52 PM, Mike Perez wrote:
> > >On 14:42 Fri 09 Jan , Ivan Kolodyazhny wrote:
> > >>Hi Erlon,
> > >>
> > >>We've got a thread mailing-list [1] for it and some details in wiki
> [2].
> > >>Anyway, need to get confirmation from our core devs and/or Mike.
> > >>
> > >>[1]
> > >>http://lists.openstack.org/pipermail/openstack-dev/2014-October/0495
> > >>12.html
> > >>[2]
> > >>https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Testi
> > >>ng_requirements_for_Kilo_release_and_beyond
> > >>
> > >>Regards,
> > >>Ivan Kolodyazhny
> > >>
> > >>On Fri, Jan 9, 2015 at 2:26 PM, Erlon Cruz 
> wrote:
> > >>
> > >>>Hi all, hi cinder core devs,
> > >>>
> > >>>I have read on IRC discussions about a deadline for drivers vendors
> > >>>to have their CI running and voting until kilo-2, but I didn't find
> > >>>any post on this list to confirm this. Can anyone confirm this?
> > >>>
> > >>>Thanks,
> > >>>Erlon
> > >We did discuss and agree in the Cinder meeting that the deadline
> > >would be k-2, but I don't think anyone reached out to the driver
> > >maintainers about the deadline. Duncan had this action item [1],
> perhaps he can speak more about it.
> > >
> > >[1] -
> > >http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-11-19
> > >-16.00.html
> > >
>
> That is correct [1]. However, I don't think there was any warning given to
> existing drivers [2]. If Duncan can confirm this is the case, I would
> recommend fair warning go out for the end of Kilo for existing drivers as
> well.
>
> [1] -
> https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Deadlines
> [2] -
> http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-11-19-16.00.html
>
> --
> Mike Perez
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [devstack]Openstack installation issue.

2015-01-12 Thread Samta Rangare
Hey abhishek,

As a quick fix to this problem, edit the file devstack/lib/keystone (around
line 170), inside the function
function configure_keystone {
and add sudo to this line:
sudo cp -p $KEYSTONE_DIR/etc/keystone.conf.sample $KEYSTONE_CONF

Regards
Samta




On Mon, Jan 12, 2015 at 4:01 PM, Abhishek Shrivastava <
abhis...@cloudbyte.com> wrote:

> Hi all,
>
> I am writing this again because I am getting the same error from past week
> while I'm installing Openstack through devstack on Ubuntu 13.10.
>
> I am attaching the new log file, please go through it and if anyone can
> provide me the solution please do reply.
>
> --
>
> *Thanks & Regards,*
> *Abhishek*
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

