Re: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo

2018-10-17 Thread Joshua Harlow

Dmitry Tantsur wrote:

On 10/10/18 7:41 PM, Greg Hill wrote:

I've been out of the openstack loop for a few years, so I hope this
reaches the right folks.

Josh Harlow (original author of taskflow and related libraries) and I
have been discussing the option of moving taskflow out of the
openstack umbrella recently. This move would likely also include the
futurist and automaton libraries that are primarily used by taskflow.


Just for completeness: futurist and automaton are also heavily relied on
by ironic without using taskflow.


When did futurist get used??? nice :)

(I knew automaton was, but maybe I knew futurist was too and I forgot, lol).




The idea would be to just host them on GitHub and use the regular
GitHub features for Issues, PRs, wiki, etc., in the hope that this
would spur more development. Taskflow hasn't had any substantial
contributions in several years and it doesn't really seem that the
current openstack devs have a vested interest in moving it forward. I
would like to move it forward, but I don't have an interest in being
bound by the openstack workflow (this is why the project stagnated:
core reviewers were pulled onto other projects and couldn't keep up
with the review backlog, so contributions ground to a halt).

I guess I'm putting it forward to the larger community. Does anyone
have any objections to us doing this? Are there any non-obvious
technicalities that might make such a transition difficult? Who would
need to be made aware so they could adjust their own workflows?

Or would it be preferable to just fork and rename the project so
openstack can continue to use the current taskflow version without
worry of us breaking features?

Greg











Re: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo

2018-10-15 Thread Joshua Harlow
I'm ok with trying this move out and seeing how it goes (maybe it will 
be for the better, idk).


-Josh


On 10/10/2018 10:41 AM, Greg Hill wrote:
I've been out of the openstack loop for a few years, so I hope this 
reaches the right folks.


Josh Harlow (original author of taskflow and related libraries) and I 
have been discussing the option of moving taskflow out of the 
openstack umbrella recently. This move would likely also include the 
futurist and automaton libraries that are primarily used by taskflow. 
The idea would be to just host them on GitHub and use the regular
GitHub features for Issues, PRs, wiki, etc., in the hope that this
would spur more development. Taskflow hasn't had any substantial
contributions in several years and it doesn't really seem that the
current openstack devs have a vested interest in moving it forward. I
would like to move it forward, but I don't have an interest in being
bound by the openstack workflow (this is why the project stagnated:
core reviewers were pulled onto other projects and couldn't keep up
with the review backlog, so contributions ground to a halt).


I guess I'm putting it forward to the larger community. Does anyone 
have any objections to us doing this? Are there any non-obvious 
technicalities that might make such a transition difficult? Who would 
need to be made aware so they could adjust their own workflows?


Or would it be preferable to just fork and rename the project so 
openstack can continue to use the current taskflow version without 
worry of us breaking features?


Greg







Re: [openstack-dev] [nova] How to debug no valid host failures with placement

2018-08-02 Thread Joshua Harlow

Storage space is a concern; really?

If it really is, then keep X of them for some definition of X (days, 
number, hours, other)? Offload the snapshot asynchronously if 
snapshotting during requests is a problem.


We have the power! :)

Chris Friesen wrote:

On 08/01/2018 11:34 PM, Joshua Harlow wrote:


And I would be able to, say, request the explanation for a given request id
(historical even) so that analysis could be done post-change and pre-change
(say I update the algorithm for selection) and the effects of alterations
to said decisions could be determined.


This would require storing a snapshot of all resources prior to
processing every request...seems like that could add overhead and
increase storage consumption.

Chris





Re: [openstack-dev] [nova] How to debug no valid host failures with placement

2018-08-01 Thread Joshua Harlow
If I could, I would have something *like* the EXPLAIN syntax for looking
at a SQL query, but instead of telling me the query plan for a SQL query,
it would tell me the decisions (placement plan?) that resulted in a given
resource being placed at a certain location.


And I would be able to, say, request the explanation for a given request
id (historical even) so that analysis could be done post-change and
pre-change (say I update the algorithm for selection) and the effects of
alterations to said decisions could be determined.


If it could also have a front-end like what is at http://sorting.at/
(press the play button) that'd be super sweet also (not for sorting, but
instead for placement, which if you squint at that webpage could have
something similar built).


My 3 cents, ha

-Josh

Chris Friesen wrote:

On 08/01/2018 11:32 AM, melanie witt wrote:


I think it's definitely a significant issue that troubleshooting "No
allocation candidates returned" from placement is so difficult. However,
it's not straightforward to log detail in placement when the request for
allocation candidates is essentially "SELECT * FROM nodes WHERE cpu usage
< needed and disk usage < needed and memory usage < needed" and the
result is returned from the API.


I think the only way to get useful info on a failure would be to break
down the huge SQL statement into subclauses and store the results of the
intermediate queries. Then, if it failed, placement could log something
like:

hosts with enough CPU: 
hosts that also have enough disk: 
hosts that also have enough memory: 
hosts that also meet extra spec host aggregate keys: 
hosts that also meet image properties host aggregate keys: 
hosts that also have requested PCI devices: 

And maybe we could optimize the above by only emitting logs where the
list has a length less than X (to avoid flooding the logs with hostnames
in large clusters).

This would let you zero in on the things that finally caused the list to
be whittled down to nothing.
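
A minimal sketch (hypothetical helper, not placement's actual code) of that
"log the shrinking candidate list after each filter" idea:

import logging

LOG = logging.getLogger(__name__)

def filter_hosts(hosts, filters):
    # 'filters' is an ordered list of (description, predicate) pairs, e.g.
    # ("have enough CPU", lambda h: h.free_vcpus >= needed_vcpus).
    for description, predicate in filters:
        hosts = [h for h in hosts if predicate(h)]
        # Only emit hostnames for small lists to avoid flooding the logs.
        if len(hosts) < 10:
            LOG.info("hosts that %s: %s", description, hosts)
        else:
            LOG.info("%d hosts %s", len(hosts), description)
    return hosts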

Chris






Re: [openstack-dev] [all] [designate] [heat] [python3] deadlock with eventlet and ThreadPoolExecutor in py3.7

2018-07-25 Thread Joshua Harlow
So the only diff is that GreenThreadPoolExecutor was customized to work
for eventlet (with a similar/same API as ThreadPoolExecutor). As for
performance, I would expect (under eventlet) that GreenThreadPoolExecutor
would have better performance because it can use the native eventlet
green objects (which it does try to use) instead of having to go through
the layers that ThreadPoolExecutor would have to use to achieve the same
(and in this case, as you found out, it looks like those layers are not
patched correctly in the newest ThreadPoolExecutor).


Otherwise yes, under eventlet IMHO swap in the green thread pool executor
(assuming you can do this), and under threading swap in the thread pool
executor (ideally, if done correctly, the same stuff should 'just work').
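
A minimal sketch (an assumption about the surrounding code, not designate's
actual setup) of doing that swap with futurist:

import futurist

try:
    from eventlet.patcher import is_monkey_patched
    under_eventlet = is_monkey_patched('thread')
except ImportError:
    under_eventlet = False

if under_eventlet:
    # Uses eventlet greenthreads/green primitives natively.
    executor = futurist.GreenThreadPoolExecutor(max_workers=10)
else:
    # Plain OS threads via the stdlib-compatible futurist wrapper.
    executor = futurist.ThreadPoolExecutor(max_workers=10)

future = executor.submit(sum, [1, 2, 3])
print(future.result())
executor.shutdown()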


Corey Bryant wrote:

Josh,

Thanks for the input. GreenThreadPoolExecutor does not have the deadlock
issue, so that is promising (at least with futurist 1.6.0).

Does ThreadPoolExecutor have better performance than
GreenThreadPoolExecutor? Curious if we could just swap out
ThreadPoolExecutor for GreenThreadPoolExecutor.

Thanks,
Corey

On Wed, Jul 25, 2018 at 12:54 PM, Joshua Harlow <harlo...@fastmail.com> wrote:

Have you tried the following instead of ThreadPoolExecutor (which
honestly should work as well, even under eventlet + eventlet patching)?


https://docs.openstack.org/futurist/latest/reference/index.html#futurist.GreenThreadPoolExecutor


If you have the ability to specify which executor your code is
using, and you are running under eventlet I'd give preference to the
green thread pool executor under that situation (and if not running
under eventlet then prefer the threadpool executor variant).

As for @tomoto's question: honestly openstack was created before
asyncio was a thing, so that was a reason, and assuming eventlet
patching is actually working then all the existing stdlib stuff
should keep on working under eventlet (including
concurrent.futures); otherwise eventlet.monkey_patch isn't working
and that's breaking the eventlet API. If their contract is that only
certain things work when monkey patched, that's fair, but that needs
to be documented somewhere (honestly it's time IMHO to get the hell
off eventlet everywhere, but that likely requires rewrites of a lot
of things, oops...).

-Josh

Corey Bryant wrote:

Hi All,

I'm trying to add Py3 packaging support for Ubuntu Rocky and while there
are a lot of issues involved with supporting Py3.7, this is one of the
big ones that I could use a hand with.

With py3.7, there's a deadlock when eventlet monkeypatch of stdlib
thread modules is combined with use of ThreadPoolExecutor. I know this
affects at least designate. The same or similar also affects heat
(though I've not dug into the code, the traceback after canceling tests
matches that seen with designate). And it may affect other projects that
I haven't touched yet.

How to recreate [1]:
* designate: Add a tox.ini py37 target and run
designate.tests.test_workers.test_processing.TestProcessingExecutor.test_execute_multiple_tasks
* heat: Add a tox.ini py37 target and run tests
* general: Run bpo34173-recreate.py
<https://bugs.python.org/file47709/bpo34173-recreate.py> from issue
34173 (see below).
[1] ubuntu cosmic has py3.7

In issue 508 (see below) @tomoto asks "Eventlet and asyncio solve same
problem. Why would you want concurrent.futures and eventlet in same
application?"

I told @tomoto that I'd seek input to that question from upstream. I
know there've been efforts to move away from eventlet but I just don't
have the knowledge to provide a good answer to him.

Here are the bugs/issues I currently have open for this:
https://github.com/eventlet/eventlet/issues/508
https://bugs.launchpad.net/designate/+bug/1782647
https://bugs.python.org/issue34173

Any help with this would be greatly appreciated!

Thanks,
Corey



Re: [openstack-dev] [all] [designate] [heat] [python3] deadlock with eventlet and ThreadPoolExecutor in py3.7

2018-07-25 Thread Joshua Harlow
Have you tried the following instead of ThreadPoolExecutor (which
honestly should work as well, even under eventlet + eventlet patching)?


https://docs.openstack.org/futurist/latest/reference/index.html#futurist.GreenThreadPoolExecutor

If you have the ability to specify which executor your code is using, 
and you are running under eventlet I'd give preference to the green 
thread pool executor under that situation (and if not running under 
eventlet then prefer the threadpool executor variant).


As for @tomoto's question: honestly openstack was created before asyncio
was a thing, so that was a reason, and assuming eventlet patching is
actually working then all the existing stdlib stuff should keep on
working under eventlet (including concurrent.futures); otherwise
eventlet.monkey_patch isn't working and that's breaking the eventlet
API. If their contract is that only certain things work when monkey
patched, that's fair, but that needs to be documented somewhere
(honestly it's time IMHO to get the hell off eventlet everywhere, but
that likely requires rewrites of a lot of things, oops...).
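
For illustration, a minimal sketch (an assumption, not the actual
bpo34173-recreate.py) of the combination under discussion; on py3.7 this
pattern was reported to hang, while the futurist GreenThreadPoolExecutor
reportedly avoids it:

import eventlet
eventlet.monkey_patch()

from concurrent.futures import ThreadPoolExecutor

def work(x):
    return x * 2

# Under a correctly patched stdlib this should just print the results;
# the py3.7 reports are that it deadlocks instead.
with ThreadPoolExecutor(max_workers=2) as executor:
    futures = [executor.submit(work, i) for i in range(4)]
    print([f.result() for f in futures])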


-Josh

Corey Bryant wrote:

Hi All,

I'm trying to add Py3 packaging support for Ubuntu Rocky and while there
are a lot of issues involved with supporting Py3.7, this is one of the
big ones that I could use a hand with.

With py3.7, there's a deadlock when eventlet monkeypatch of stdlib
thread modules is combined with use of ThreadPoolExecutor. I know this
affects at least designate. The same or similar also affects heat
(though I've not dug into the code, the traceback after canceling tests
matches that seen with designate). And it may affect other projects that
I haven't touched yet.

How to recreate [1]:
* designate: Add a tox.ini py37 target and run
designate.tests.test_workers.test_processing.TestProcessingExecutor.test_execute_multiple_tasks
* heat: Add a tox.ini py37 target and run tests
* general: Run bpo34173-recreate.py from issue 34173 (see below).
[1] ubuntu cosmic has py3.7

In issue 508 (see below) @tomoto asks "Eventlet and asyncio solve same
problem. Why would you want concurrent.futures and eventlet in same
application?"

I told @tomoto that I'd seek input to that question from upstream. I
know there've been efforts to move away from eventlet but I just don't
have the knowledge to  provide a good answer to him.

Here are the bugs/issues I currently have open for this:
https://github.com/eventlet/eventlet/issues/508

https://bugs.launchpad.net/designate/+bug/1782647

https://bugs.python.org/issue34173 

Any help with this would be greatly appreciated!

Thanks,
Corey





Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

2018-06-01 Thread Joshua Harlow

Slightly off topic but,

Have you by any chance looked at what kata has forked for qemu:

https://github.com/kata-containers/qemu/tree/qemu-lite-2.11.0

I'd be interested in an audit of that code for similar reasons to this
libvirt fork (hard to know from my viewpoint if there are new issues in
that code like the ones you are finding in libvirt).


Kashyap Chamarthy wrote:

On Tue, May 22, 2018 at 01:54:59PM -0500, Dean Troyer wrote:

StarlingX (aka STX) was announced this week at the summit, there is a
PR to create project repos in Gerrit at [0]. STX is basically Wind


 From a cursory look at the libvirt fork, there are some questionable
choices.  E.g. the config code (libvirt/src/qemu/qemu.conf) is modified
such that QEMU is launched as 'root'.  That means a bug in QEMU ==
instant host compromise.

All Linux distributions (that matter) configure libvirt to launch QEMU
as a regular user ('qemu').  E.g. from Fedora's libvirt RPM spec file:

 libvirt.spec:%define qemu_user  qemu
 libvirt.spec:   --with-qemu-user=%{qemu_user} \

 * * *

There are multiple other such issues in the forked libvirt code.

[...]





Re: [openstack-dev] [tc][all] A culture change (nitpicking)

2018-05-29 Thread Joshua Harlow

Jonathan D. Proulx wrote:

On Tue, May 29, 2018 at 10:52:04AM -0400, Mohammed Naser wrote:

:On Tue, May 29, 2018 at 10:43 AM, Artom Lifshitz  wrote:
:>   One idea would be that, once the meat of the patch
:>  has passed multiple rounds of reviews and looks good, and what remains
:>  is only nits, the reviewer themselves take on the responsibility of
:>  pushing a new patch that fixes the nits that they found.

Doesn't the above suggestion sufficiently address the concern below?

:I'd just like to point out that what you perceive as a 'finished
:product that looks unprofessional' might be already hard enough for a
:contributor to achieve.  We have a lot of new contributors coming from
:all over the world and it is very discouraging for them to have their
:technical knowledge and work be categorized as 'unprofessional'
:because of the language barrier.
:
:git-nit and a few minutes of your time will go a long way, IMHO.

As a very intermittent contributor and native English speaker with
relatively poor spelling and typing, I'd be much happier with a
reviewer pushing a patch that fixes nits rather than having a ton of
inline comments that point them out.

maybe we're all saying the same thing here?


https://sep.yimg.com/ay/computergear/i-write-code-computer-t-shirt-13.gif

I am the same ;)



-JOn





Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

2018-05-22 Thread Joshua Harlow

Also I am concerned that the repo just seems to have mega-commits like:

https://github.com/starlingx-staging/stx-glance/commit/1ec64167057e3368f27a1a81aca294b771e79c5e

https://github.com/starlingx-staging/stx-nova/commit/71acfeae0d1c59fdc77704527d763bd85a276f9a 
(not so mega)



I am very confused now as well; it feels a lot like a code dump (which I
get, and it's nice to see companies' patches, but it seems odd that this
would ever be put anywhere official with the expectation/hope that people
will dissect and extract code that starlingx obviously couldn't put the
manpower behind to do the same).


Brian Haley wrote:

On 05/22/2018 04:57 PM, Jay Pipes wrote:

Warning: strong opinions ahead.

On 05/22/2018 02:54 PM, Dean Troyer wrote:

Developers will need to re-create a repo locally in
order to work or test the code and create reviews (there are more git
challenges here). It would be challenging to do functional testing on
the rest of STX in CI without access to all of the code.


Please don't take this the wrong way, Dean, but you aren't seriously
suggesting that anyone outside of Windriver/Intel would ever
contribute to these repos are you?

What motivation would anyone outside of Windriver/Intel -- who must
make money on this effort otherwise I have no idea why they are doing
it -- have to commit any code at all to StarlingX?


I read this the other way - the goal is to get all the forked code from
StarlingX into upstream repos. That seems backwards from how this should
have been done (i.e. upstream first), and I don't see how a project
would prioritize that over other work.


I'm truly wondering: why was this even open-sourced to begin with? I'm
as big a supporter of open source as anyone, but I'm really struggling
to comprehend the business, technical, or marketing decisions behind
this action. Please help me understand. What am I missing?


I'm just as confused.

-Brian



My personal opinion is that I don't think that any products,
derivatives or distributions should be hosted on openstack.org
infrastructure.

Are any of the distributions of OpenStack listed at
https://www.openstack.org/marketplace/distros/ hosted on openstack.org
infrastructure? No. And I think that is completely appropriate.

Best,
-jay








Re: [openstack-dev] [oslo] proposing Ken Giusti for oslo-core

2018-03-26 Thread Joshua Harlow

+1

Ben Nemec wrote:

+1!

On 03/26/2018 10:52 AM, Doug Hellmann wrote:

Ken has been managing oslo.messaging for ages now but his participation
in the team has gone far beyond that single library. He regularly
attends meetings, including the PTG, and has provided input into several
of our team decisions recently.

I think it's time we make him a full member of the oslo-core group.

Please respond here with a +1 or -1 to indicate your opinion.

Thanks,
Doug








Re: [openstack-dev] Zuul project evolution

2018-03-16 Thread Joshua Harlow

Awesome!

Might IMHO be useful to also start doing this with other projects.

James E. Blair wrote:

Hi,

To date, Zuul has (perhaps rightly) often been seen as an
OpenStack-specific tool.  That's only natural since we created it
explicitly to solve problems we were having in scaling the testing of
OpenStack.  Nevertheless, it is useful far beyond OpenStack, and even
before v3, it has found adopters elsewhere.  Though as we talk to more
people about adopting it, it is becoming clear that the less experience
they have with OpenStack, the more likely they are to perceive that Zuul
isn't made for them.

At the same time, the OpenStack Foundation has identified a number of
strategic focus areas related to open infrastructure in which to invest.
CI/CD is one of these.  The OpenStack project infrastructure team, the
Zuul team, and the Foundation staff recently discussed these issues and
we feel that establishing Zuul as its own top-level project with the
support of the Foundation would benefit everyone.

It's too early in the process for me to say what all the implications
are, but here are some things I feel confident about:

* The folks supporting the Zuul running for OpenStack will continue to
   do so.  We love OpenStack and it's just way too fun running the
   world's most amazing public CI system to do anything else.

* Zuul will be independently promoted as a CI/CD tool.  We are
   establishing our own website and mailing lists to facilitate
   interacting with folks who aren't otherwise interested in OpenStack.
   You can expect to hear more about this over the coming months.

* We will remain just as open as we have been -- the "four opens" are
   intrinsic to what we do.

As a first step in this process, I have proposed a change[1] to remove
Zuul from the list of official OpenStack projects.  If you have any
questions, please don't hesitate to discuss them here, or privately
contact me or the Foundation staff.

-Jim

[1] https://review.openstack.org/552637





Re: [openstack-dev] [keystone] [oslo] new unified limit library

2018-03-12 Thread Joshua Harlow

The following may give u some more insight into delimiter,

https://review.openstack.org/#/c/284454/

-Josh

Lance Bragstad wrote:

I missed the document describing the process for this sort of thing [0].
So I'm back tracking a bit to go through a more formal process.

[0]
http://specs.openstack.org/openstack/oslo-specs/specs/policy/new-libraries.html

# Proposed new library oslo.limit

This is a proposal to create a new library dedicated to enabling more
consistent quota and limit enforcement across OpenStack.

## Proposed library mission

Enforcing quotas and limits across OpenStack has traditionally been a
tough problem to solve. Determining enforcement requires quota knowledge
from the service along with information about the project owning the
resource. Up until the Queens release, quota calculation and enforcement
has been left to the services to implement, forcing them to understand
complexities of keystone project structure. During the Pike and Queens
PTG, there were several productive discussions towards redesigning the
current approach to quota enforcement. Because keystone is the authority
of project structure, it makes sense to allow keystone to hold the
association between a resource limit and a project. This means services
still need to calculate quota and usage, but the problem should be
easier for services to implement since developers shouldn't need to
re-implement possible hierarchies of projects and their associated
limits. Instead, we can offload some of that work to a common library
for services to consume that handles enforcing quota calculation based
on limits associated to projects in keystone. This proposal is to have a
new library called oslo.limit that fills that need.

## Consuming projects

The services consuming this work will be any service that currently
implements a quota system, or plans to implement one. Since keystone
already supports unified limits and association of limits to projects,
the implementation for consuming projects is easier: instead of having
to re-write that implementation, developers need to ensure the quota
calculation is passed to the oslo.limit library somewhere in the API's
validation layer. The pattern described here is very similar to the
pattern currently used by services that leverage oslo.policy for
authorization decisions.
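
A hypothetical sketch of what that pattern could look like from a service's
API validation layer (the names here, e.g. limit.Enforcer and the usage
callback signature, are assumptions for illustration, not a settled
oslo.limit interface):

from oslo_limit import limit  # assumed module/name

def count_servers(project_id, resource_names):
    # Service-specific usage calculation, e.g. a DB count of instances
    # owned by this project, returned per resource name.
    return {name: 4 for name in resource_names}

enforcer = limit.Enforcer(count_servers)

def create_server(project_id):
    # Raise before doing any work if one more server would exceed the
    # limit registered for this project in keystone's unified limits API.
    enforcer.enforce(project_id, {'servers': 1})
    # ... proceed with the actual create here ...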

## Alternative libraries

It looks like there was an existing library that attempted to solve some
of these problems, called delimiter [1]. It looks like delimiter could
be used to talk to keystone about quota enforcement, whereas the
existing approach with oslo.limit would be to use keystone directly.
Someone more familiar with the library (harlowja?) can probably shed
more light on its intended uses (I couldn't find much documentation),
but the presentation linked in a previous note was helpful.

[1] https://github.com/openstack/delimiter

## Proposed adoption model/plan

The unified limit API [2] in keystone is currently marked as
experimental, but the keystone team is actively collecting and
addressing feedback that will result in stabilizing the API.
Stabilization changes that affect the oslo.limit library will also be
addressed before version 1.0.0 is released. From there, we can look to
incorporate the library into various services that either have an
existing quota implementation, or services that have a quota requirement
but no implementation.

This should help us refine the interfaces between services and
oslo.limit, while providing a facade to handle complexities of project
hierarchies. This should enable adoption by simplifying the process and
making it easier for quota to be implemented in a consistent way across
services.

[2]
https://docs.openstack.org/keystone/latest/admin/identity-unified-limits.html

## Reviewer activity

At first thought, it makes sense to model the reviewer structure after
the oslo.policy library, where the core team consists of people not only
interested in limits and quota, but also people familiar with the
keystone implementation of the unified limits API.

## Implementation

### Primary Authors:

   Lance Bragstad (lbrags...@gmail.com) lbragstad
   You?

### Other contributors:

   You?

## Work Items

* Create a new library called oslo.limit
* Create a core group for the project
* Define the minimum we need to enforce quota calculations in oslo.limit
* Propose an implementation that allows services to test out quota
enforcement via unified limits

## References

Rocky PTG Etherpad for unified limits:
https://etherpad.openstack.org/p/unified-limits-rocky-ptg

## Revision History

Introduced in Rocky


On 03/07/2018 11:55 PM, Joshua Harlow wrote:

So the following was a prior effort:

https://github.com/openstack/delimiter

Maybe just continue down the path of that and/or take that whole repo
over and iterate (or adjust the prior code, or ...)? Or if not, that's
ok too, y'all get to decide.

https://www.slideshare.net/vilobh/delimiter-openstack-cross-project-quota-library-proposal

Re: [openstack-dev] [oslo] Oslo PTG Summary

2018-03-08 Thread Joshua Harlow


Can we get some of those doc links opened?

I am getting 'You need permission to access this published document.'
for a few of them :(


Ben Nemec wrote:

Hi,

Here's my summary of the discussions we had in the Oslo room at the PTG.
Please feel free to reply with any additions if I missed something or
correct anything I've misrepresented.

oslo.config drivers for secret management
-----------------------------------------

The oslo.config implementation is in progress, while the Castellan
driver still needs to be written. We want to land this early in Rocky as
it is a significant change in architecture for oslo.config and we want
it to be well-exercised before release.

There are discussions with the TripleO team around adding support for
this feature to its deployment tooling and there will be a functional
test job for the Castellan driver with Custodia.

There is a weekly meeting in #openstack-meeting-3 on Tuesdays at 1600
UTC for discussion of this feature.

oslo.config driver implementation: https://review.openstack.org/#/c/513844
spec:
https://specs.openstack.org/openstack/oslo-specs/specs/queens/oslo-config-drivers.html

Custodia key management support for Castellan:
https://review.openstack.org/#/c/515190/

"stable" libraries
------------------

Some of the Oslo libraries are in a mature state where there are very
few, if any, meaningful changes to them. With the removal of the
requirements sync process in Rocky, we may need to change the release
process for these libraries. My understanding was that there were no
immediate action items for this, but it was something we need to be
aware of.

dropping support for mox3
-

There was some concern that no one from the Oslo team is actually in a
position to support mox3 if something were to break (such as happened in
some libraries with Python 3.6). Since there is a community goal to
remove mox from all OpenStack projects in Rocky this will hopefully not
be a long-term problem, but there was some discussion that if projects
needed to keep mox for some reason that they would be asked to provide a
maintainer for mox3. This topic is kind of on hold pending the outcome
of the community goal this cycle.

automatic configuration migration on upgrade


There is a desire for oslo.config to provide a mechanism to
automatically migrate deprecated options to their new location on
version upgrades. This is a fairly complex topic that I can't cover
adequately in a summary email, but there is a spec proposed at
https://review.openstack.org/#/c/520043/ and POC changes at
https://review.openstack.org/#/c/526314/ and
https://review.openstack.org/#/c/526261/

One outcome of the discussion was that in the initial version we would
not try to handle complex migrations, such as the one that happened when
we combined all of the separate rabbit connection opts into a single
connection string. To start with we will just raise a warning to the
user that they need to handle those manually, but a templated or
hook-based method of automating those migrations could be added as a
follow-up if there is sufficient demand.

oslo.messaging plans


There was quite a bit discussed under this topic. I'm going to break it
down into sub-topics for clarity.

oslo.messaging heartbeats
=========================

Everyone seemed to be in favor of this feature, so we anticipate
development moving forward in Rocky. There is an initial patch proposed
at https://review.openstack.org/546763

We felt that it should be possible to opt in and out of the feature, and
that the configuration should be done at the application level. This
should _not_ be an operator decision as they do not have the knowledge
to make it sanely.

There was also a desire to have a TTL for messages.

bug cleanup
===========

There are quite a few launchpad bugs open against oslo.messaging that
were reported against old, now unsupported versions. Since we have the
launchpad bug expirer enabled in Oslo the action item proposed for such
bugs was to mark them incomplete and ask the reporter to confirm that
they still occur against a supported version. This way bugs that don't
reproduce or where the reporter has lost interest will eventually be
closed automatically, but bugs that do still exist can be updated with
more current information.

deprecations


The Pika driver will be deprecated in Rocky. To our knowledge, no one
has ever used it and there are no known benefits over the existing
Rabbit driver.

Once again, the ZeroMQ driver was proposed for deprecation as well. The
CI jobs for ZMQ have been broken for a while, and there doesn't seem to
be much interest in maintaining them. Furthermore, the breakage seems to
be a fundamental problem with the driver that would require non-trivial
work to fix.

Given that ZMQ has been a consistent pain point in oslo.messaging over
the past few years, it was proposed that if someone does step 

Re: [openstack-dev] [tc] [all] TC Report 18-10

2018-03-08 Thread Joshua Harlow

Thierry Carrez wrote:

Joshua Harlow wrote:

Thierry Carrez wrote:

The PTG has always been about taking the team discussions that happened
at the Ops Summit / Design Summit to have them in a more productive
environment.

I am just going to say it, but can we *please* stop distinguishing
between ops and devs (an ops summit, like why); the fact that these
emails even continue to use the words op or dev, or the pattern of 'ops
communicate with devs and then devs go do something that may work for
ops' (hint: this kind of feedback loop is wrong), pisses me off. The
world has moved beyond this kind of separation and openstack needs to as
well... IMHO projects that still rely on this kind of interaction are
dead in the water. If you aren't, as a developer, at least trying to
operate even a small openstack cloud (even a personal one) then you
really shouldn't be continuing as a developer in openstack...


I totally agree with you. Did you read my email until the end? See:


[...]
Oh, and in the above paragraphs, I'm not distinguishing "devs" from
"ops". This applies to all teams, to any contributor engaged in making
OpenStack a reality. Having the Public Cloud WG team meet at the PTG was
great, and we should definitely have ANY OpenStack team wanting to meet
and get things done at future PTGs.

My mention above of "Ops Summit" / "Design Summit" was pointing to the
old names of the events, which "Forum" and "PTG" were specifically
designed to replace, avoiding the unnecessary split.



Ya, my mention was more about the whole continued mention on this
mailing list of ops and devs and the separation and ... not really just
this email (or its thread); it's a common theme around here that IMHO
needs to die in a fire.




Re: [openstack-dev] [tc] [all] TC Report 18-10

2018-03-08 Thread Joshua Harlow

Thierry Carrez wrote:

Matt Riedemann wrote:

I don't get the inward/outward thing. First two days of the old design
summit (ops summit?) format was all cross-project stuff (docs, upgrades,
testing, ops feedback, etc). That's the same as what happens at the PTG
now too. The last three days of the old design summit (and now PTG) are
vertical project discussion for the most part, but Thursday has also
become a de-facto cross-project day for a lot of teams (nova/cinder,
nova/neutron, nova/ironic all happened on Thursday). I'm not sure what
is happening at the Forum events that is so wildly different, or more
productive, than what we can do at the PTG - and arguably do it better
at the PTG because of fewer distractions to be giving talks, talking to
customers, and having time-boxed 40 minute slots.


The PTG has always been about taking the team discussions that happened
at the Ops Summit / Design Summit to have them in a more productive
environment.


I am just going to say it, but can we *please* stop distinguishing
between ops and devs (an ops summit, like why); the fact that these
emails even continue to use the words op or dev, or the pattern of 'ops
communicate with devs and then devs go do something that may work for
ops' (hint: this kind of feedback loop is wrong), pisses me off. The
world has moved beyond this kind of separation and openstack needs to as
well... IMHO projects that still rely on this kind of interaction are
dead in the water. If you aren't, as a developer, at least trying to
operate even a small openstack cloud (even a personal one) then you
really shouldn't be continuing as a developer in openstack...




Beyond the suboptimal productivity (due to too many distractions / other
commitments), the problem with the old Design Summit was that it
prevented team members from making the best use of the Summit event. You
would travel to a place where all our community gets together, only to
isolate yourself with your teammates trying to get stuff done. That was
silly. You should use the time there to engage *outside* of your team.
And by that I don't mean inter-team work, or participating to other
groups like SIGs or horizontal teams. I mean giving talks, presenting
the work you do (and how you do it) to newcomers, watching talks,
engaging with happy users, learning about the state of our ecosystem,
and discussing cross-community issues with a larger section of our
community (at the Forum).

The context switch between this inward work (work with your team, or
work within any transversal work group you're interested in), and this
outward work (engaging with other groups you're not a part of, listening
to newcomers) is expensive. It's hard to take the time to *listen* when
you try to get your work for the next 6 months organized and done.

Oh, and in the above paragraphs, I'm not distinguishing "devs" from
"ops". This applies to all teams, to any contributor engaged in making
OpenStack a reality. Having the Public Cloud WG team meet at the PTG was
great, and we should definitely have ANY OpenStack team wanting to meet
and get things done at future PTGs.





Re: [openstack-dev] [keystone] [oslo] new unified limit library

2018-03-07 Thread Joshua Harlow

So the following was a prior effort:

https://github.com/openstack/delimiter

Maybe just continue down the path of that and/or take that whole repo
over and iterate (or adjust the prior code, or ...)? Or if not, that's
ok too, y'all get to decide.


https://www.slideshare.net/vilobh/delimiter-openstack-cross-project-quota-library-proposal

Lance Bragstad wrote:

Hi all,

Per the identity-integration track at the PTG [0], I proposed a new oslo
library for services to use for hierarchical quota enforcement [1]. Let
me know if you have any questions or concerns about the library. If the
oslo team would like, I can add an agenda item for next weeks oslo
meeting to discuss.

Thanks,

Lance

[0] https://etherpad.openstack.org/p/unified-limits-rocky-ptg
[1] https://review.openstack.org/#/c/550491/







Re: [openstack-dev] [automaton] How to extend automaton?

2018-02-13 Thread Joshua Harlow
As far as 1 goes, I'd recommend just using functools.partial or making an
object with all the extra stuff you want and having that object provide a
__call__ method.
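
A minimal sketch of option 1 (the 'fsm-1234' id and the machine wiring
below are just an assumed example):

import functools

from automaton import machines

def print_on_enter(fsm_id, new_state, triggered_event):
    print("[%s] entered '%s' due to '%s'" % (fsm_id, new_state,
                                              triggered_event))

# Bind the extra identifier up front; automaton still calls the callback
# with (new_state, triggered_event) as before.
on_enter = functools.partial(print_on_enter, 'fsm-1234')

m = machines.FiniteMachine()
m.add_state('stopped', on_enter=on_enter)
m.add_state('playing', on_enter=on_enter)
m.add_transition('stopped', 'playing', 'play')
m.default_start_state = 'stopped'
m.initialize()
m.process_event('play')  # prints "[fsm-1234] entered 'playing' due to 'play'"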


As far as 2, you might have to subclass the FSM base class and add those
into the internal data structure (same for 3 I think); i.e. this one @
https://github.com/openstack/automaton/blob/master/automaton/machines.py#L186-L191
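
And a rough sketch of that subclassing approach for 2/3 (an illustration,
not an existing automaton feature):

import time
import uuid

from automaton import machines

class TrackedMachine(machines.FiniteMachine):
    """FiniteMachine that carries a UUID and the last transition time."""

    def __init__(self, *args, **kwargs):
        super(TrackedMachine, self).__init__(*args, **kwargs)
        self.uuid = str(uuid.uuid4())
        self.last_transition_at = None

    def process_event(self, event):
        result = super(TrackedMachine, self).process_event(event)
        # Record when the most recent state change happened.
        self.last_transition_at = time.time()
        return result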


Of course feel free to do it differently and submit a patch that folks 
(myself and others) can review.


-Josh

Kwan, Louie wrote:

https://github.com/openstack/automaton

Friendly state machines for python.

A few questions about automaton.

1. I would like to know whether we can add additional parameters to
on_enter or on_exit callbacks. Right now, it seems it only allows state
and triggered_event.

   a. I have many FSMs running for different objects and it is much
   easier if I can pass some sort of ID back to the callbacks.

2. Can we, or how can we, store extra attributes like a last state change
timestamp?

3. Can we store additional identifying info for the FSM object? I would
like to add a UUID.

Thanks.

Louie

def print_on_enter(new_state, triggered_event):
    print("Entered '%s' due to '%s'" % (new_state, triggered_event))

def print_on_exit(old_state, triggered_event):
    print("Exiting '%s' due to '%s'" % (old_state, triggered_event))

# This will contain all the states and transitions that our machine will
# allow, the format is relatively simple and designed to be easy to use.
state_space = [
    {
        'name': 'stopped',
        'next_states': {
            # On event 'play' transition to the 'playing' state.
            'play': 'playing',
            'open_close': 'opened',
            'stop': 'stopped',
        },
        'on_enter': print_on_enter,
        'on_exit': print_on_exit,
    },





Re: [openstack-dev] [kolla] Policy regarding template customisation

2018-01-30 Thread Joshua Harlow
Yup, I am hoping to avoid all of these kinds of customizations if 
possible... But if we have to we'll probably have to make something like 
that.


Or we'll just have to render out files for each host and serve it from 
the same REST endpoint, ya da ya da...


-Josh

Michał Jastrzębski wrote:

However I think issue
here is about files in /etc/kolla/config - so config overrides.

I think one potential solution would be to have some sort of ansible
task that would translate ansible vars to ini format and lay down
files in /etc/kolla/config, but I think that's beyond scope of
Kolla-Ansible.




Re: [openstack-dev] [kolla] Policy regarding template customisation

2018-01-30 Thread Joshua Harlow

I'm ok with #2,

Though I would like to show an alternative that we have been 
experimenting with that avoids the whole needs for a globals.yml and 
such files in the first place (and feels more naturally inline with how 
ansible works IMHO).


So short explanation first; we have this yaml format that describes all
of our clouds and their settings and such (and which servers belong in
which cloud and so on and so forth). We have then set up a REST server
(a small gunicorn based one) that renders/serves this format into other
formats.


One of those other formats is one that is compatible with Ansible's
concept of dynamic inventory [1] and that is the one we are trying to
send into kolla-ansible to get it to configure all the things (via
typical mechanisms such as hostvars and groupvars).


An example of this rendering:

https://gist.github.com/harlowja/9d7b57571a2290c315fc9a4bf2957dac (this 
is dynamically generated from the other format, which is git version 
controlled...).


The goal here is that we can just render all the needed variables and
such for kolla-ansible (on a per-host basis if we have to) and avoid the
need for having a special globals.yml (per-cloud/environment) and
per-host special files in the first place.
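
A minimal sketch (the URL and response shape are assumptions about our
internal service) of the dynamic inventory shim that glues that REST
endpoint to ansible:

#!/usr/bin/env python
import json
import sys

import requests

# Assumed endpoint of the internal REST service described above.
INVENTORY_URL = 'http://cloud-config.example.com/clouds/cloud1/ansible-inventory'

def main():
    if '--list' in sys.argv:
        resp = requests.get(INVENTORY_URL)
        resp.raise_for_status()
        # Expected shape (what ansible wants from a dynamic inventory):
        # {"some_group": {"hosts": [...], "vars": {...}},
        #  "_meta": {"hostvars": {"host1": {...}, ...}}}
        print(json.dumps(resp.json()))
    else:
        # --host <name> is not needed when _meta/hostvars is returned above.
        print(json.dumps({}))

if __name__ == '__main__':
    main()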


Was this kind of approach ever thought of?

Perhaps I can go into more detail if it seems like an approach others
may want to follow.


[1]: http://docs.ansible.com/ansible/latest/intro_dynamic_inventory.html

Paul Bourke wrote:

Hi all,

I'd like to revisit our policy of not templating everything in
kolla-ansible's template files. This is a policy that was set in place
very early on in kolla-ansible's development, but I'm concerned we
haven't been very consistent with it. This leads to confusion for
contributors and operators - "should I template this and submit a patch,
or do I need to start using my own config files?".

The docs[0] are currently clear:

"The Kolla upstream community does not want to place key/value pairs in
the Ansible playbook configuration options that are not essential to
obtaining a functional deployment."

In practice though our templates contain many options that are not
necessary, and plenty of patches have merged that while very useful to
operators, are not necessary to an 'out of the box' deployment.

So I'd like us to revisit the questions:

1) Is kolla-ansible attempting to be a 'batteries included' tool, which
caters to operators via key/value config options?

2) Or, is it to be a solid reference implementation, where any degree of
customisation implies a clear 'bring your own configs' type policy.

If 1), then we should potentially:

* Update ours docs to remove the referenced paragraph
* Look at reorganising files like globals.yml into something more
maintainable.

If 2),

* We should make it clear to reviewers that patches templating options
that are non essential should not be accepted.
* Encourage patches to strip down existing config files to an absolute
minimum.
* Make this policy more clear in docs / templates to avoid frustration
on the part of operators.

Thoughts?

Thanks,
-Paul

[0]
https://docs.openstack.org/kolla-ansible/latest/admin/deployment-philosophy.html#why-not-template-customization






Re: [openstack-dev] [all] Switching to longer development cycles

2017-12-21 Thread Joshua Harlow
My 3 cents are that this isn't something that you get asked to put in by
operators; it's something that is built in from the start. Just look at
other workflow systems (which is really what nova/cinder/neutron...
are); they don't try to add this functionality in later (or they
shouldn't at least) but c'est la vie...


With that stated I would agree this is a community-wide goal (and a very 
very hard one since every single project 'forgot' to build this in from 
the start).


https://raw.githubusercontent.com/spotify/luigi/master/doc/visualiser_front_page.png 
(another example of another projects UI that does something similar).


Matt Riedemann wrote:

On 12/19/2017 2:29 PM, Joshua Harlow wrote:

* Clear workflow state (and transition) 'machine' that is followed in
code and can be used by operators/others such as UI developers to get
a view on what nova is or is not doing (may fit under the broad topic
of observability?)

Take for example
http://flower.readthedocs.io/en/latest/screenshots.html and ask
yourself why nova-compute (or nova-conductor or nova-scheduler...)
doesn't have an equivalent kind of 'viewer' (and no it doesn't need to
be flower, that's just an example...)


OK...first I've heard of this too. Is this something that the majority
of people deploying, operating and/or using Nova are asking for as a
priority?

Also, this doesn't just seem like a nova thing - this smells like a
community-wide goal.





Re: [openstack-dev] [requirements][mistral][vitrage][octavia][taskflow][watcher] Networkx version 2.0

2017-12-21 Thread Joshua Harlow
I don't expect it to be a big thing for taskflow; I'll mess around this
weekend. AFAIK most of the changes in networkx 2.0 were around making
all the things iterators.
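
For reference, a quick sketch (not taskflow's actual code) of the kind of
change that implies:

import networkx as nx

g = nx.DiGraph()
g.add_edge('a', 'b')
g.add_edge('b', 'c')

# networkx 1.x returned plain lists from most accessors, e.g.:
#   g.nodes()         -> ['a', 'b', 'c']
#   g.successors('a') -> ['b']
# networkx 2.0 returns views/iterators instead, so callers that need a
# real list have to wrap them:
nodes = list(g.nodes())
successors = list(g.successors('a'))
print(nodes, successors)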


Matthew Thode wrote:

On 17-12-20 15:51:17, Afek, Ifat (Nokia - IL/Kfar Sava) wrote:

Hi,

There is an open bug in launchpad about the new release of Networkx 2.0,
which is backward incompatible with versions 1.x [1].
Is there a plan to change the Networkx version in the global requirements
in Queens? We need to do some code refactoring in Vitrage, and I’m trying
to understand how urgent it is.

[1] https://bugs.launchpad.net/diskimage-builder/+bug/1718576



Mistral, Vitrage, Octavia, Taskflow, Watcher

Those are the projects using NetworkX that'd need to be updated.
http://codesearch.openstack.org/?q=networkx=nope=.*requirements.*=

I'm open to uncapping networkx if these projects have buyin.







Re: [openstack-dev] [all] Switching to longer development cycles

2017-12-20 Thread Joshua Harlow

Zane Bitter wrote:

On 19/12/17 22:15, Fei Long Wang wrote:



On 20/12/17 09:29, Joshua Harlow wrote:

Zane Bitter wrote:

Getting off-topic, but since you walked in to this one ;)

On 14/12/17 21:43, Matt Riedemann wrote:

What are the real problems that the Nova team is not working on and
apparently is a priority to everyone else in the OpenStack ecosystem
but we are somehow oblivious?


* Providing a secure channel to get credentials to guests so that
applications running on them can authenticate to OpenStack APIs.

* Providing reliable, user-space notifications of events that may
require application or application-operator intervention (e.g. VM
reboot, hypervisor died, ).


I'll add on.

* Clear workflow state (and transition) 'machine' that is followed in
code and can be used by operators/others such as UI developers to get
a view on what nova is or is not doing (may fit under the broad topic
of observability?)

Take for example
http://flower.readthedocs.io/en/latest/screenshots.html and ask
yourself why nova-compute (or nova-conductor or nova-scheduler...)
doesn't have an equivalent kind of 'viewer' (and no it doesn't need to
be flower, that's just an example...)



That's one of the changes we (at least me and Rob Cresswell) would like
to improve in Horizon. The idea is services (Nova, Cinder, Neutron, etc.)
putting notifications into a Zaqar queue so that Horizon can easily get
the resource status instead of doing stupid polling. And it could easily
do the thing shown at the above link.


What you're talking about is really an expansion of my second item
above. What Josh means is using the taskflow library to co-ordinate all
of the steps involved internally to Nova to boot an instance
(scheduling, attaching volumes & ports, actually creating the VM, ) -
it's not really a user-space thing.


I'm fine also with not using taskflow, just use something (vs nothing) ;)



To answer the question from your other mail about user-space notifications:


We (Zaqar team) have been asked many times about this area but without
support from more services, it's hard. I know Heat has some best
practices using Zaqar; is it possible to "copy" them to Nova/Cinder/Neutron?


Sure, it would be dead easy, but Nova cores have made it abundantly
clear that under no circumstances would they ever accept any code like
this in Nova. Hence it's on this list.

cheers,
Zane.






Re: [openstack-dev] [all] Switching to longer development cycles

2017-12-19 Thread Joshua Harlow

Zane Bitter wrote:

Getting off-topic, but since you walked in to this one ;)

On 14/12/17 21:43, Matt Riedemann wrote:

What are the real problems that the Nova team is not working on and
apparently is a priority to everyone else in the OpenStack ecosystem
but we are somehow oblivious?


* Providing a secure channel to get credentials to guests so that
applications running on them can authenticate to OpenStack APIs.

* Providing reliable, user-space notifications of events that may
require application or application-operator intervention (e.g. VM
reboot, hypervisor died, ).


I'll add on.

* Clear workflow state (and transition) 'machine' that is followed in 
code and can be used by operators/others such as UI developers to get a 
view on what nova is or is not doing (may fit under the broad topic of 
observability?)


Take for example http://flower.readthedocs.io/en/latest/screenshots.html 
and ask yourself why nova-compute (or nova-conductor or 
nova-scheduler...) doesn't have an equivalent kind of 'viewer' (and no 
it doesn't need to be flower, that's just an example...)




- ZB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [all] Community managed tech/dev blog: Call for opinions and ideas

2017-11-29 Thread Joshua Harlow

Thierry Carrez wrote:

Jimmy McArthur wrote:

Thierry Carrez wrote:

Historically blog.o.o used to be our only blog outlet, so almost
anything would go in:

"OpenStack Events Sponsorship Webinar"
"New Foundation Gold Members&  Corporate Sponsors"
"HP Announces Private Beta Program for OpenStack Cloud"
"2016 OpenStack T-Shirt Design Contest"

What Josh wants is a curated technical blog, so if we reused blog.o.o
for this (and I think it's a good idea), we'd likely want to have a few
more rules on what's appropriate.

Agreed. It's almost solely used for developer digest now and isn't
frequently updated. Most of the promotion of sponsors and news goes into
o.o/News, SuperUser, or one of our other marketing channels. It's a good
time for the community to repurpose it :)


Perfect, we have a plan.

Before we make it tech-only and boring, let us all take a minute to
remember the OpenStack jingle:

https://www.openstack.org/blog/2011/07/openstack-the-best-sounding-cloud/



Errr, ummm, woah, lol

We might need to work on our DJ skills, lol

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oaktree] Follow up to Multi-cloud Management in OpenStack Summit session

2017-11-28 Thread Joshua Harlow

Small side-question,

Why would this just be limited to openstack clouds?

Would it be?

Monty Taylor wrote:

Hey everybody!

https://etherpad.openstack.org/p/sydney-forum-multi-cloud-management

I've CC'd everyone who listed interest directly, just in case you're not
already on the openstack-dev list. If you aren't, and you are in fact
interested in this topic, please subscribe and make sure to watch for
[oaktree] subject headings.

We had a great session in Sydney about the needs of managing resources
across multiple clouds. During the session I pointed out the work that
had been started in the Oaktree project [0][1] and offered that if the
people who were interested in the topic thought we'd make progress best
by basing the work on oaktree, that we should bootstrap a new core team
and kick off some weekly meetings. This is, therefore, the kickoff email
to get that off the ground.

All of the below is thoughts from me and a description of where we're at
right now. It should all be considered up for debate, except for two
things:

- gRPC API
- backend implementation based on shade

As those are the two defining characteristics of the project. For those
who weren't in the room, justifications for those two characteristics are:

gRPC API


There are several reasons why gRPC.

* Make it clear this is not a competing REST API.

OpenStack has a REST API already. This is more like a 'federation' API
that knows how to talk to one or more clouds (similar to the kubernetes
federation API)

* Streaming and async built in

One of the most costly things in using the OpenStack API is polling.
gRPC is based on HTTP/2 and thus supports streaming and other exciting
things. This means an oaktree running in or on a cloud can do its
polling loops over the local network and the client can just either wait
on a streaming call until the resource is ready, or can fire an async
call and deal with it later on a notification channel.

* Network efficiency

Protobuf over HTTP/2 is a super-streamlined binary protocol, which
should actually be really nice for our friends in Telco land who are
using OpenStack for Edge-related tasks in 1000s of sites. All those
roundtrips add up at scale.

* Multi-language out of the box

gRPC allows us to directly generate consistent consumption libs for a
bunch of languages - or people can grab the proto files and integrate
those into their own build if they prefer.

* The cool kids are doing it

To be fair, Jay Pipes and I tried to push OpenStack to use Protobuf
instead of JSON for service-to-service communication back in 2010 - so
it's not ACTUALLY a new idea... but with Google pushing it and support
from the CNCF, gRPC is actually catching on broadly. If we're writing a
new thing, let's lean forward into it.

Backend implementation in shade
---

If the service is defined by gRPC protos, why not implement the service
itself in Go or C++?

* Business logic to deal with cloud differences

Adding a federation API isn't going to magically make all of those
clouds work the same. We've got that fairly well sorted out in shade and
would need to reimplement basically all of shade in any other language.

* shade is battle tested at scale

shade is what Infra's nodepool uses. In terms of high-scale API
consumption, we've learned a TON of lessons. Much of the design inside
of shade is the result of real-world scaling issues. It's Open Source,
so we could obviously copy all of that elsewhere - but why? It exists
and it works, and oaktree itself should be a scale-out shared-nothing
kind of service anyway.

The hard bits here aren't making API calls to 3 different clouds, the
hard bits are doing that against 3 *different* clouds and presenting the
results sanely and consistently to the original user.

Proposed Structure
==

PTL
---

As the originator of the project, I'll take on the initial PTL role.
When the next PTL elections roll around, we should do a real election.

Initial Core Team
-

oaktree is still small enough that I don't think we need to be super
protective - so I think if you're interested in working on it and you
think you'll have the bandwidth to pay attention, let me know and I'll
add you to the team.

General rules of thumb I try to follow on top of normal OpenStack
reviewing guidelines:

* Review should mostly be about suitability of design/approach. Style
issues should be handled by pep8/hacking (with one exception, see
below). Functional issues should be handled with tests. Let the machines
be machines and humans be humans.

* Use followup patches to fix minor things rather than causing an
existing patch to get re-spun and need to be re-reviewed.

The one style exception ... I'm a big believer in not using visual
indentation - but I can't seem to get pep8 or hacking to complain about
its use. This isn't just about style - visual indentation causes more
lines to be touched during a refactor than are necessary making the

Re: [openstack-dev] [oaktree] Follow up to Multi-cloud Management in OpenStack Summit session

2017-11-28 Thread Joshua Harlow

Monty Taylor wrote:

On 11/28/2017 06:05 PM, Joshua Harlow wrote:
 > So just curious.
 >
 > I didn't think shade had any federation logic in it; so I assume it will
 > start getting some?

It's possible that we're missing each other on the definition of the
word 'federation' ... but shade's entire purpose in life is to allow
sane use of multiple clouds from the same application.


Ya I think u got it, shade is like what I would call the rubber hits the 
road part of federation; so it will be interesting to see how such 
rubber can be used to build what I would call the higher level 
federation (without screwing it up, lol).




 > Has there been any prelim. design around what the APIs of this would be
 > and how they would work and how they would return data from X other
 > clouds in a uniform manner? (I'd really be interested in how a high
 > level project is going to combine various resources from other clouds in
 > a way that doesn't look like crap).

(tl;dr - yes)

Ah - I grok what you're saying now. Great question!

There are (at least) four sides to this.

* Creating a resource in a specific location (boot a VM in OVH BHS1)
* Fetching resources from a specific location (show me the image in
vexxhost)

* Creating a resource everywhere (upload an image to all cloud regions)
* Fetching resources from all locations (show me all my VMs)

The first two are fully handled, as you might imagine, although the
mechanism is slightly different in shade and oaktree (I'll get back to
that in a sec)

Creating everywhere isn't terribly complex - when I need to do that
today it's a simple loop:

for cloud in shade.openstack_clouds():
    cloud.create_image('my-image', filename='my-image.qcow2')


Ya, scatter/gather (with some kind of new grpc streaming response..)



But we can (and should and will) add some syntactic sugar to make that
easier. Like (*waving hands*)

all_clouds = shade.everywhere()
all_clouds.create_image('my-image', filename='my-image.qcow2')


Might as well just start to call it scatter/gather, lol



It's actually more complex than that, because Rackspace wants a VHD and
OVH wants a RAW but can take a qcow2 as well... but this is an email, so
for now let's assume that we can handle the general 'create everywhere'
with a smidge of meta programming, some explicit overrides for the
resources that need extra special things - and probably something like
concurrent.futures.ThreadPoolExecutor.
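
(A rough sketch of that scatter/gather shape, reusing the shade calls from
the loop above plus stdlib futures; error handling and the per-cloud image
format overrides are deliberately left out.)

import concurrent.futures

import shade

def upload(cloud):
    cloud.create_image('my-image', filename='my-image.qcow2')
    return cloud.name

clouds = shade.openstack_clouds()
with concurrent.futures.ThreadPoolExecutor(max_workers=len(clouds) or 1) as pool:
    for name in pool.map(upload, clouds):
        print('uploaded to %s' % name)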

The real fun, as you hint at, comes when we want to read from everywhere.

To prep for this (and inspired specifically be this use-case), shade now
adds a "location" field to every resource it returns. That location
field contains cloud, region, domain and project information - so that
in a list of server objects from across 14 regions of 6 clouds all the
info about who and what they are is right there in the object.

When we shift to the oaktree gRPC interface, we carry over the Location
concept:


http://git.openstack.org/cgit/openstack/oaktreemodel/tree/oaktreemodel/common.proto#n31


which we keep on all of the resources:


http://git.openstack.org/cgit/openstack/oaktreemodel/tree/oaktreemodel/image.proto#n49


So listing all the things should work the same way as the above
list-from-everywhere method.

The difference I mentioned earlier in how shade and oaktree present the
location interface is that in shade there is an OpenStackCloud object
per cloud-region, and as a user you select which cloud you operate on
via instantiating an OpenStackCloud pointed at the right thing. We need
to add the AllTheClouds meta object for the shade interface.

In oaktree, there is the one oaktree instance and it contains
information about all of your cloud-regions, so Locations and Filters
become a parameters on operations.

 > Will this thing also have its own database (or something like a DB)?

It's an open question. Certainly not at the moment or in the near future
- there's no need for one, as the constituent OpenStack clouds are the
actual source of truth; the thing we need is caching rather than data
that is canonical itself.


That's fine, it prob only becomes a problem if there is a need for some 
kind of cross cloud consistency requirements (which ideally this whole 
thing would strongly avoid)




This will almost certainly change as we work on the auth story, but the
specifics of that are ones that need to be sorted out collectively -
preferably with operators involved.

 > I can imagine if there is a `create_many_servers` call in oaktree that
 > it will need to have some sort of lock taken by the process doing this
 > set of XYZ calls (in the right order) so that some other
 > `create_many_servers` call doesn't come in and screw everything the
 > prior one up... Or maybe cross-cloud consistency issues aren't a
 > concern... What's the thoughts here?
That we have already, actually, and you've even landed code in it. :)
shade executes all of its remote oper

Re: [openstack-dev] [oaktree] Follow up to Multi-cloud Management in OpenStack Summit session

2017-11-28 Thread Joshua Harlow

So just curious.

I didn't think shade had any federation logic in it; so I assume it will 
start getting some?


Has there been any prelim. design around what the APIs of this would be 
and how they would work and how they would return data from X other 
clouds in a uniform manner? (I'd really be interested in how a high 
level project is going to combine various resources from other clouds in 
a way that doesn't look like crap).


Will this thing also have its own database (or something like a DB)?

I can imagine if there is a `create_many_servers` call in oaktree that 
it will need to have some sort of lock taken by the process doing this 
set of XYZ calls (in the right order) so that some other 
`create_many_servers` call doesn't come in and screw everything the 
prior one up... Or maybe cross-cloud consistency issues aren't a 
concern... What's the thoughts here?


What happens in the above if a third user Y is creating resources in one 
of those clouds outside the view of oaktree... ya da ya da... What 
happens if they are both targeting the same tenant...


Perhaps a decent idea to start some kind of etherpad to write these
questions down (and at least think about them a wee bit)?


Monty Taylor wrote:

Hey everybody!

https://etherpad.openstack.org/p/sydney-forum-multi-cloud-management

I've CC'd everyone who listed interest directly, just in case you're not
already on the openstack-dev list. If you aren't, and you are in fact
interested in this topic, please subscribe and make sure to watch for
[oaktree] subject headings.

We had a great session in Sydney about the needs of managing resources
across multiple clouds. During the session I pointed out the work that
had been started in the Oaktree project [0][1] and offered that if the
people who were interested in the topic thought we'd make progress best
by basing the work on oaktree, that we should bootstrap a new core team
and kick off some weekly meetings. This is, therefore, the kickoff email
to get that off the ground.

All of the below is thoughts from me and a description of where we're at
right now. It should all be considered up for debate, except for two
things:

- gRPC API
- backend implementation based on shade

As those are the two defining characteristics of the project. For those
who weren't in the room, justifications for those two characteristics are:

gRPC API


There are several reasons why gRPC.

* Make it clear this is not a competing REST API.

OpenStack has a REST API already. This is more like a 'federation' API
that knows how to talk to one or more clouds (similar to the kubernetes
federation API)

* Streaming and async built in

One of the most costly things in using the OpenStack API is polling.
gRPC is based on HTTP/2 and thus supports streaming and other exciting
things. This means an oaktree running in or on a cloud can do its
polling loops over the local network and the client can just either wait
on a streaming call until the resource is ready, or can fire an async
call and deal with it later on a notification channel.

* Network efficiency

Protobuf over HTTP/2 is a super-streamlined binary protocol, which
should actually be really nice for our friends in Telco land who are
using OpenStack for Edge-related tasks in 1000s of sites. All those
roundtrips add up at scale.

* Multi-language out of the box

gRPC allows us to directly generate consistent consumption libs for a
bunch of languages - or people can grab the proto files and integrate
those into their own build if they prefer.

* The cool kids are doing it

To be fair, Jay Pipes and I tried to push OpenStack to use Protobuf
instead of JSON for service-to-service communication back in 2010 - so
it's not ACTUALLY a new idea... but with Google pushing it and support
from the CNCF, gRPC is actually catching on broadly. If we're writing a
new thing, let's lean forward into it.

Backend implementation in shade
---

If the service is defined by gRPC protos, why not implement the service
itself in Go or C++?

* Business logic to deal with cloud differences

Adding a federation API isn't going to magically make all of those
clouds work the same. We've got that fairly well sorted out in shade and
would need to reimplement basically all of shade in any other language.

* shade is battle tested at scale

shade is what Infra's nodepool uses. In terms of high-scale API
consumption, we've learned a TON of lessons. Much of the design inside
of shade is the result of real-world scaling issues. It's Open Source,
so we could obviously copy all of that elsewhere - but why? It exists
and it works, and oaktree itself should be a scale-out shared-nothing
kind of service anyway.

The hard bits here aren't making API calls to 3 different clouds, the
hard bits are doing that against 3 *different* clouds and presenting the
results sanely and consistently to the original user.

Proposed Structure
==

PTL
---

As the originator 

Re: [openstack-dev] [all] Community managed tech/dev blog: Call for opinions and ideas

2017-11-27 Thread Joshua Harlow

How does one get a login or submit things to openstack.org/blog?

Is there any editor actively seeking out things (and reviewing) for 
folks to write (a part time job I would assume?).


Josh

Allison Price wrote:

I agree with Jimmy that openstack.org/blog
would be a great location for content like this, especially since the
main piece of content is the OpenStack-dev ML digest that Mike creates
weekly. Like Flavio mentioned, Superuser is another resource we can
leverage for tutorials or new features. Both the blog and Superuser are
Wordpress, enabling contributions from anyone with a login and content
to share.

Superuser and the OpenStack blog is already syndicated with Planet
OpenStack for folks who have already subscribed to the Planet OpenStack
feed.

Allison

Allison Price
OpenStack Foundation Marketing
alli...@openstack.org




On Nov 27, 2017, at 1:44 PM, Joshua Harlow <harlo...@fastmail.com> wrote:

Doug Hellmann wrote:

Excerpts from Joshua Harlow's message of 2017-11-27 10:54:02 -0800:

Flavio Percoco wrote:

Greetings,

Last Thursday[0], at the TC office hours, we brainstormed a bit around
the idea
of having a tech blog. This idea came first from Joshua Harlow and it
was then
briefly discussed at the summit too.

The idea, we have gathered, is to have a space where the community
could
write
technical posts about OpenStack. The idea is not to have an aggregator
(that's
what our planet[1] is for) but a place to write original and curated
content.
During the conversation, we argued about what kind of content would be
acceptable for this platform. Here are some ideas of things we could
have there:

- Posts that are dev-oriented (e.g: new functions on an oslo lib)
- Posts that facilitate upstream development (e.g: My awesome dev
setup)
- Deep dive into libvirt internals
- ideas?

As Chris Dent pointed out on that conversation, we should avoid
making this
place a replacement for things that would otherwise go on the mailing
list -
activity reports, for example. Having dev news in this platform, we
would
overlap with things that go already on the mailing list and, arguably,
we would
be defeating the purpose of the platform. But, there might be room for
both(?)

Ultimately, we should avoid topics promoting new features in
services as
that's what
superuser[2] is for.

So, what are your thoughts about this? What kind of content would you
rather
have posted here? Do you like the idea at all?

Yes, I like it :)

I want a place that is like http://blog.kubernetes.io/

With say an editor that solicits (and backlogs topics and stories and
such) various developers/architects at various companies and creates an
actually human-curated place for developers and technology and
architecture to be spotlighted.

To me personal blogs can be used for this, sure, but that sort of misses
the point of having a place that is targeted for this (and no I don't
really care about finding and subscribing to 100+ random joe blogs that
I will never look at more than once). Ideally that place would not
become `elitist` as some others have mentioned in this thread (ie, don't
pick an elitist editor? lol).

The big desire for me is to actually have an editor (a person or people)
involved that is keeping such a blog going and editing it and curating
it and ensuring it gets found in google searches and is *developer*
focused...


Are you volunteering? :-)


You don't want me to be an editor ;)



Doug



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Community managed tech/dev blog: Call for opinions and ideas

2017-11-27 Thread Joshua Harlow

Jay Pipes wrote:

On 11/27/2017 01:54 PM, Joshua Harlow wrote:

I want a place that is like http://blog.kubernetes.io/


You know that's not for developers (only) of k8s, right? That's for
users, marketers, product managers, etc.


We don't have to exactly copy it (that was not the intention).

But looking at the following it seems like its slightly more than that.

'This post is written by Kelsey Hightower, Staff Developer Advocate at 
Google, and Sandra Guo, Product Manager at Google.'


'Today's post is by Lantao Liu, Software Engineer at Google, and Mike 
Brown, Open Source Developer Advocate at IBM.'


'Editor's note: this post is part of a series of in-depth articles on 
what's new in Kubernetes 1.8. Today’s post comes from Ahmet Alp Balkan, 
Software Engineer, Google.'


'Editor's note: Today’s post by Frank Budinsky, Software Engineer, IBM, 
Andra Cismaru, Software Engineer, Google, and Israel Shalom, Product 
Manager, Google, is the second post in a three-part series on Istio. It 
offers a closer look at request routing and policy management.'


^ A lot of engineers in that (not just product folks, though there are 
some yes..).




So, when you say *for developers*, what exactly do you mean? Are you
referring to developers of OpenStack projects or are you referring to
*users* of cloud services -- i.e. application developers?


People like jay and josh and dims and ..., or future jay (jr) and josh 
(jr) folks...



Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Community managed tech/dev blog: Call for opinions and ideas

2017-11-27 Thread Joshua Harlow

Doug Hellmann wrote:

Excerpts from Joshua Harlow's message of 2017-11-27 10:54:02 -0800:

Flavio Percoco wrote:

Greetings,

Last Thursday[0], at the TC office hours, we brainstormed a bit around
the idea
of having a tech blog. This idea came first from Joshua Harlow and it
was then
briefly discussed at the summit too.

The idea, we have gathered, is to have a space where the community could
write
technical posts about OpenStack. The idea is not to have an aggregator
(that's
what our planet[1] is for) but a place to write original and curated
content.
During the conversation, we argued about what kind of content would be
acceptable for this platform. Here are some ideas of things we could
have there:

- Posts that are dev-oriented (e.g: new functions on an oslo lib)
- Posts that facilitate upstream development (e.g: My awesome dev setup)
- Deep dive into libvirt internals
- ideas?

As Chris Dent pointed out on that conversation, we should avoid making this
place a replacement for things that would otherwise go on the mailing
list -
activity reports, for example. Having dev news in this platform, we would
overlap with things that go already on the mailing list and, arguably,
we would
be defeating the purpose of the platform. But, there might be room for
both(?)

Ultimately, we should avoid topics promoting new features in services as
that's what
superuser[2] is for.

So, what are your thoughts about this? What kind of content would you
rather
have posted here? Do you like the idea at all?

Yes, I like it :)

I want a place that is like http://blog.kubernetes.io/

With say an editor that solicits (and backlogs topics and stories and
such) various developers/architects at various companies and creates an
actually human-curated place for developers and technology and
architecture to be spotlighted.

To me personal blogs can be used for this, sure, but that sort of misses
the point of having a place that is targeted for this (and no I don't
really care about finding and subscribing to 100+ random joe blogs that
I will never look at more than once). Ideally that place would not
become `elitist` as some others have mentioned in this thread (ie, don't
pick an elitist editor? lol).

The big desire for me is to actually have an editor (a person or people)
involved that is keeping such a blog going and editing it and curating
it and ensuring it gets found in google searches and is *developer*
focused...


Are you volunteering? :-)


You don't want me to be an editor ;)



Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [all] Community managed tech/dev blog: Call for opinions and ideas

2017-11-27 Thread Joshua Harlow

Flavio Percoco wrote:

Greetings,

Last Thursday[0], at the TC office hours, we brainstormed a bit around
the idea
of having a tech blog. This idea came first from Joshua Harlow and it
was then
briefly discussed at the summit too.

The idea, we have gathered, is to have a space where the community could
write
technical posts about OpenStack. The idea is not to have an aggregator
(that's
what our planet[1] is for) but a place to write original and curated
content.
During the conversation, we argued about what kind of content would be
acceptable for this platform. Here are some ideas of things we could
have there:

- Posts that are dev-oriented (e.g: new functions on an oslo lib)
- Posts that facilitate upstream development (e.g: My awesome dev setup)
- Deep dive into libvirt internals
- ideas?

As Chris Dent pointed out on that conversation, we should avoid making this
place a replacement for things that would otherwise go on the mailing
list -
activity reports, for example. Having dev news in this platform, we would
overlap with things that go already on the mailing list and, arguably,
we would
be defeating the purpose of the platform. But, there might be room for
both(?)

Ultimately, we should avoid topics promoting new features in services as
that's what
superuser[2] is for.

So, what are your thoughts about this? What kind of content would you
rather
have posted here? Do you like the idea at all?


Yes, I like it :)

I want a place that is like http://blog.kubernetes.io/

With say an editor that solicits (and backlogs topics and stories and 
such) various developers/architects at various companies and creates an
actually human-curated place for developers and technology and
architecture to be spotlighted.


To me personal blogs can be used for this, sure, but that sort of misses 
the point of having a place that is targeted for this (and no I don't 
really care about finding and subscribing to 100+ random joe blogs that 
I will never look at more than once). Ideally that place would not 
become `elitist` as some others have mentioned in this thread (ie, don't 
pick an elitist editor? lol).


The big desire for me is to actually have an editor (a person or people)
involved that is keeping such a blog going and editing it and curating 
it and ensuring it gets found in google searches and is *developer* 
focused...




[0]
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-11-23.log.html#t2017-11-23T15:01:25

[1] http://planet.openstack.org/
[2] http://superuser.openstack.org/

Flavio

--
@flaper87
Flavio Percoco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [tc] Keynote: Governance and Trust -- from Company Led to Community Led - Sarah Novotny - YouTube

2017-11-15 Thread Joshua Harlow

Doug Hellmann wrote:

This keynote talk about the evolution of governance in the kubernetes community 
and the CNCF more broadly discusses some interesting parallels with our own 
history.

https://m.youtube.com/watch?feature=youtu.be&v=Apw_fuTEhyA


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Neat! Thanks for sharing,

So out of curiosity, why aren't we (as a whole, or part of it?) just 
advocating (pushing for?) merging these two communities (ours and theirs).


Call it the MegaCNCF if that helps.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Upstream LTS Releases

2017-11-15 Thread Joshua Harlow
Just a thought, 'cause I have known/do know what Mathieu is talking about
and find the disconnect still oddly weird. Why aren't developer people
from other companies coming in to where Mathieu works (or where I
work) and seeing how it really works down on the ground here?


I mean, if we still have this weird disconnect, why aren't we starting
some kind of temp. assignment where developers that wish to learn go
into the actual companies that are struggling; call it a working
vacation or something if that helps your management buy into it.


After all if its a community, and we should be trying to break down 
walls as much as we can...


Just a thought...

-Josh

Mathieu Gagné wrote:

Some clarifications below.

On Wed, Nov 15, 2017 at 4:52 AM, Bogdan Dobrelya  wrote:

Thank you Mathieu for the insights!


To add details to what happened:
* Upgrade was never made a #1 priority. It was a one man show for far
too long. (myself)


I suppose that confirms that upgrades are very nice to have in production
deployments, eventually, maybe... (please read below to continue)


* I also happen to manage and work on other priorities.
* Lot of work made to prepare for multiple versions support in our
deployment tools. (we use Puppet)
* Lot of work in the packaging area to speedup packaging. (we are
still using deb packages but with virtualenv to stay Puppet
compatible)
* We need to forward-port private patches which upstream won't accept
and/or are private business logic.


... yet long time maintaining and landing fixes is the ops' *reality* and
pain #1. And upgrades are only pain #2. LTS can not directly help with #2,
but only indirectly, if the vendors' downstream teams could better cooperate
with #1 and have more time and resources to dedicate for #2, upgrades
stories for shipped products and distros.


We do not have a vendor. (anymore, if you consider Ubuntu
cloud-archive as a vendor)
We package and deploy ourselves.


Let's please not lower the real value of LTS branches and not substitute
#1 with #2. This topic is not about bureaucracy and policies, it is about
how could the community help vendors to cooperate over maintaining of
commodity things, with as less bureaucracy as possible, to ease the
operators pains in the end.


* Our developer teams didn't have enough free cycles to work right
away on the upgrade. (this means delays)
* We need to test compatibility with 3rd party systems which takes
some time. (and make them compatible)


This confirms perhaps why it is vital to only run 3rd party CI jobs for LTS
branches?


For us, 3rd party systems are internal systems outside our control or
realm of influence.
They are often in-house systems that the outside world would care very
little about.


* We need to update systems ever which we don't have full control.
This means serious delays when it comes to deployment.
* We need to test features/stability during some time in our dev
environment.
* We need to test features/stability during some time in our
staging/pre-prod environment.
* We need to announce and inform our users at least 2 weeks in advance
before performing an upgrade.
* We choose to upgrade one service at a time (in all regions) to avoid
a huge big bang upgrade. (this means more maintenance windows to plan
and you can't stack them too much)
* We need to swiftly respond to bug discovered by our users. This
means change of priorities and delay in other service upgrades.
* We will soon need to upgrade operating systems to support latest
OpenStack versions. (this means we have to stop OpenStack upgrades
until all nodes are upgraded)


It seems that the answer to the question sounded, "Why upgrades are so
painful and take so much time for ops?" is "as upgrades are not the
priority. Long Time Support and maintenance are".


The cost of performing an upgrading is both time and resources
consuming which are both limited.
And you need to sync the world around you to make it happen. It's not
a one man decision/task.

When you remove all the external factors, dependencies, politics,
etc., upgrading can take an afternoon from A to Z for some projects.
We do have an internal cloud for our developers that lives in a
vacuum. Let me tell you that it's not very easy to upgrade it. We are
talking about hours/days, not years.

So if I can only afford to upgrade once per year, what are my options?

--
Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [nova][neutron] How do you use the instance IP filter?

2017-10-28 Thread Joshua Harlow

Matt Riedemann wrote:

On 10/26/2017 10:56 PM, Joshua Harlow wrote:

Just the paranoid person in me, but is it safe to say that the filter
that you are showing here does not come from user text?

Ie these two lines don't come from a user input directly (without
going through some filter) do they?

https://github.com/openstack/nova/blob/16.0.0/nova/compute/api.py#L2458-L2459


From reading it seems like perhaps they do come at least partially
from a user, so I am hoping that its not possible for a user to
present a 'ip' that is really a complicated regex that takes a long
time to compile (and therefore can DOS the nova-api component); but I
don't know the surrounding code so I might be wrong...

Just wondering :-/

-Josh


We have schema validation on the ip filter but it's just checking that
it can actually compile it:

https://github.com/openstack/nova/blob/16.0.0/nova/api/validation/validators.py#L35


So yeah, probably a potential problem like you pointed out.



Ya, would seem so, especially if large user strings can get compiled.

Just a reference/useful tidbit, but in the `re.py` module there is a
cache of the last 512 patterns compiled (surprise! I don't think a lot of
people know about it, ha), so assuming that users can present arbitrary
(and/or pretty big) input to the REST API of nova, that cache could get
pretty large (depending on the allowable request max size) and/or could
also be thrashed pretty quickly (also note that regex compiling jumps
into C code afaik, so that probably locks up other greenthreads).


The cache layer fyi:

https://github.com/python/cpython/blob/3.6/Lib/re.py#L281-L312

Just a thought but it might just be a good idea to remove this validator 
and never again do user provided regex patterns/input and such in 
general (to avoid cache thrashing and various other ReDoS or ReDoS-like 
problems).
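
For the curious, here's a self-contained toy (nothing nova-specific) that
shows the failure mode; _MAXCACHE is a private CPython detail, so treat
that last bit as illustrative only:

import re
import timeit

# Nested quantifiers plus a final char that can never match => the engine
# backtracks through ~2**n ways to split the 'a' run before giving up.
evil = r'^(a+)+$'
payload = 'a' * 25 + '!'

took = timeit.timeit(lambda: re.match(evil, payload), number=1)
print('one re.match() call took %.1fs' % took)  # roughly doubles per extra 'a'

print(re._MAXCACHE)  # 512 on CPython 3.6 -- the cache mentioned above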


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova][neutron] How do you use the instance IP filter?

2017-10-26 Thread Joshua Harlow

Further things that someone may want to read/try (if the below is true),

https://en.wikipedia.org/wiki/ReDoS

Joshua Harlow wrote:

Just the paranoid person in me, but is it safe to say that the filter
that you are showing here does not come from user text?

Ie these two lines don't come from a user input directly (without going
through some filter) do they?

https://github.com/openstack/nova/blob/16.0.0/nova/compute/api.py#L2458-L2459


 From reading it seems like perhaps they do come at least partially from
a user, so I am hoping that its not possible for a user to present a
'ip' that is really a complicated regex that takes a long time to
compile (and therefore can DOS the nova-api component); but I don't know
the surrounding code so I might be wrong...

Just wondering :-/

-Josh

Matt Riedemann wrote:

Nova has had this long-standing known performance issue if you're
filtering a large number of instances by IP. The instance IPs are stored
in a JSON blob in the database so we don't do filtering in SQL. We pull
the instances out of the database, deserialize the JSON and then apply a
regex filter match in the nova-api python code.

At the Queens PTG we talked about possible ways to fix this and came up
with this nova spec:

https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/improve-filter-instances-by-ip-performance.html



The idea is to have nova get ports from neutron and apply the IP filter
in neutron to whittle down the ports, then from that list of ports get
the instances to pull out of the nova database.

One issue that has come up with this is neutron does not currently
support regex filters when listing ports. There is an RFE for adding
that:

https://bugs.launchpad.net/neutron/+bug/1718605

The proposed neutron implementation is to just do SQL LIKE substring
matching in the database.

However, one issue that has come up is that the compute API accepts a
python regex filter and uses re.match():

https://github.com/openstack/nova/blob/16.0.0/nova/compute/api.py#L2469

At least one good thing about that is match() only matches from the
beginning of the string unlike search().

So for example I can filter on "192.16.*[1-5]$" if I wanted to, but
that's not going to work with just a LIKE substring filter in SQL.
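
(To make that concrete -- purely illustrative values:)

import re

ip_filter = '192.16.*[1-5]$'

re.match(ip_filter, '192.168.0.5')    # matches: anchored at start, ends in 1-5
re.match(ip_filter, '192.168.0.9')    # no match: last character not in [1-5]
re.match(ip_filter, '10.192.168.5')   # no match: match() anchors at the start
re.search(ip_filter, '10.192.168.5')  # matches: search() looks anywhere

# SQL LIKE can only express substring/prefix tests such as
# ip LIKE '192.16%', so the [1-5]$ part has no equivalent there.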

The question is, does anyone actually do more than basic substring
matching with the IP filter today? Because if we started using neutron,
that behavior would be broken. We've never actually documented the match
restrictions on the IP filter, but that's not a good reason to break it.

One option is to make this configurable such that deployments which rely
on the complicated pattern matching can just use the existing nova code
despite performance issues. However, that's not interoperable, I hate
config-driven API behavior, and it would mean maintaining two code paths
in nova, which is also terrible.

I was trying to think of a way to determine if the IP filter passed to
nova is basic or a complicated pattern match and let us decide that way,
but I'm not sure if there are good ways to detect that - maybe by simply
looking for special characters like (, ), - and $? But then there is []
and we have an IPv6 filter, so that gets messy too...

For now I'd just like to know if people rely on the regex match or not.
Other ideas on how to handle this are appreciated.




___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] How do you use the instance IP filter?

2017-10-26 Thread Joshua Harlow
Just the paranoid person in me, but is it safe to say that the filter 
that you are showing here does not come from user text?


Ie these two lines don't come from a user input directly (without going 
through some filter) do they?


https://github.com/openstack/nova/blob/16.0.0/nova/compute/api.py#L2458-L2459

From reading it seems like perhaps they do come at least partially from 
a user, so I am hoping that its not possible for a user to present a 
'ip' that is really a complicated regex that takes a long time to 
compile (and therefore can DOS the nova-api component); but I don't know 
the surrounding code so I might be wrong...


Just wondering :-/

-Josh

Matt Riedemann wrote:

Nova has had this long-standing known performance issue if you're
filtering a large number of instances by IP. The instance IPs are stored
in a JSON blob in the database so we don't do filtering in SQL. We pull
the instances out of the database, deserialize the JSON and then apply a
regex filter match in the nova-api python code.

At the Queens PTG we talked about possible ways to fix this and came up
with this nova spec:

https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/improve-filter-instances-by-ip-performance.html


The idea is to have nova get ports from neutron and apply the IP filter
in neutron to whittle down the ports, then from that list of ports get
the instances to pull out of the nova database.

One issue that has come up with this is neutron does not currently
support regex filters when listing ports. There is an RFE for adding that:

https://bugs.launchpad.net/neutron/+bug/1718605

The proposed neutron implementation is to just do SQL LIKE substring
matching in the database.

However, one issue that has come up is that the compute API accepts a
python regex filter and uses re.match():

https://github.com/openstack/nova/blob/16.0.0/nova/compute/api.py#L2469

At least one good thing about that is match() only matches from the
beginning of the string unlike search().

So for example I can filter on "192.16.*[1-5]$" if I wanted to, but
that's not going to work with just a LIKE substring filter in SQL.

The question is, does anyone actually do more than basic substring
matching with the IP filter today? Because if we started using neutron,
that behavior would be broken. We've never actually documented the match
restrictions on the IP filter, but that's not a good reason to break it.

One option is to make this configurable such that deployments which rely
on the complicated pattern matching can just use the existing nova code
despite performance issues. However, that's not interoperable, I hate
config-driven API behavior, and it would mean maintaining two code paths
in nova, which is also terrible.

I was trying to think of a way to determine if the IP filter passed to
nova is basic or a complicated pattern match and let us decide that way,
but I'm not sure if there are good ways to detect that - maybe by simply
looking for special characters like (, ), - and $? But then there is []
and we have an IPv6 filter, so that gets messy too...

For now I'd just like to know if people rely on the regex match or not.
Other ideas on how to handle this are appreciated.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Kolla] Concerns about containers images in DockerHub

2017-10-19 Thread Joshua Harlow

Cool thanks,

I'll have to watch for this and see how it goes.

I thought there were some specific things that changed every time a
dockerfile was rendered (which would make it different each run) but I 
may have been seeing things (or it's been fixed).


Michał Jastrzębski wrote:

On 19 October 2017 at 13:32, Joshua Harlow <harlo...@fastmail.com> wrote:

This reminded me of something I wanted to ask.

Is it true to state that the only way to get 'fully' shared-base layers is to
have `kolla-build` build all the projects (that a person/company/other may
use) in one invocation? (in part because of the jinja2 template generation
which would cause differences in dockerfiles?...)


Well, jinja2 should render the same dockerfile no matter when you call it,
so it should be fine. Alternatively you can run something like
kolla-build nova --skip-parents - this call will try to build all
images with "nova" in them while not rebuilding openstack-base and
the base image.


I was pretty sure this was the case (unless things have changed), but just
wanting to check since that question seems (somewhat) on-topic...

At godaddy we build individual projects using `kolla-build` (in part because
it makes it easy to rebuild + test + deploy a single project with either an
update or a patch or ...) and I suspect others are doing this also (after
all the kolla-build command does take a regex of projects to build) - though
doing it in this way does seem like it would not reuse the layers
(beyond the base operating system ones) 'optimally'?

Thoughts?

-Josh

Sam Yaple wrote:

docker_image wouldn't be the best place for that. But if you are looking
for a quicker solution, kolla_docker was written specifically to be
license compatible for openstack. its structure should make it easily
adapted to delete an image. And you can copy it and cut it up thanks to
the license.

Are you pushing images with no shared base layers at all (300MB
compressed image is no shared base layers)? With shared base layers a
full image set of Kolla images should be much smaller than the numbers
you posted.

Thanks,
SamYaple

On Thu, Oct 19, 2017 at 11:03 AM, Gabriele Cerami <gcer...@redhat.com> wrote:

 Hi,

 our CI scripts are now automatically building, testing and pushing
 approved openstack/RDO services images to public repositories in
 dockerhub using ansible docker_image module.

 Promotions have had some hiccups, but we're starting to regularly
upload
 new images every 4 hours.

When we get to full speed, we'll potentially have
- 3-4 different sets of images, one per release of openstack (counting
   an EOL release grace period)
 - 90-100 different services images per release
 - 4-6 different versions of the same image ( keeping older promoted
images for a while )

 At around 300MB per image a possible grand total is around 650GB of
 space used.

 We don't know if this is acceptable usage of dockerhub space and for
this we already sent a similar email to docker support to ask
specifically if the user would get penalized in any way (e.g. enforced
quotas, rate limiting, blocking). We're still waiting for a reply.

 In any case it's critical to keep the usage around the estimate, and
to
 achieve this we need a way to automatically delete the older images.
 docker_image module does not provide this functionality, and we think
 the only way is issuing direct calls to dockerhub API

 https://docs.docker.com/registry/spec/api/#deleting-an-image

 docker_image module structure doesn't seem to encourage the addition
of
 such functionality directly in it, so we may be forced to use the uri
 module.
 With new images uploaded potentially every 4 hours, this will become a
 problem to be solved within the next two weeks.

 We'd appreciate any input for an existing, in progress and/or better
 solution for bulk deletion, and issues that may arise with our space
 usage in dockerhub

 Thanks




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [TripleO][Kolla] Concerns about containers images in DockerHub

2017-10-19 Thread Joshua Harlow

This reminded me of something I wanted to ask.

Is it true to state that the only way to get 'fully' shared-base layers is 
to have `kolla-build` build all the projects (that a 
person/company/other may use) in one invocation? (in part because of the 
jinja2 template generation which would cause differences in dockerfiles?...)


I was pretty sure this was the case (unless things have changed), but 
just wanting to check since that question seems (somewhat) on-topic...


At godaddy we build individual projects using `kolla-build` (in part 
because it makes it easy to rebuild + test + deploy a single project with 
either an update or a patch or ...) and I suspect others are doing this 
also (after all the kolla-build command does take a regex of projects to 
build) - though doing it in this way does seem like it would not reuse 
the layers (beyond the base operating system ones) 'optimally'?


Thoughts?

-Josh

Sam Yaple wrote:

docker_image wouldn't be the best place for that. But if you are looking
for a quicker solution, kolla_docker was written specifically to be
license compatible for openstack. its structure should make it easily
adapted to delete an image. And you can copy it and cut it up thanks to
the license.

Are you pushing images with no shared base layers at all (300MB
compressed image is no shared base layers)? With shared base layers a
full image set of Kolla images should be much smaller than the numbers
you posted.

Thanks,
SamYaple

On Thu, Oct 19, 2017 at 11:03 AM, Gabriele Cerami wrote:

Hi,

our CI scripts are now automatically building, testing and pushing
approved openstack/RDO services images to public repositories in
dockerhub using ansible docker_image module.

Promotions have had some hiccups, but we're starting to regularly upload
new images every 4 hours.

When we get to full speed, we'll potentially have
- 3-4 different sets of images, one per release of openstack (counting an
   EOL release grace period)
- 90-100 different services images per release
- 4-6 different versions of the same image ( keeping older promoted
   images for a while )

At around 300MB per image a possible grand total is around 650GB of
space used.

We don't know if this is acceptable usage of dockerhub space and for
this we already sent a similar email to docker support to ask
specifically if the user would get penalized in any way (e.g. enforced
quotas, rate limiting, blocking). We're still waiting for a reply.

In any case it's critical to keep the usage around the estimate, and to
achieve this we need a way to automatically delete the older images.
docker_image module does not provide this functionality, and we think
the only way is issuing direct calls to dockerhub API

https://docs.docker.com/registry/spec/api/#deleting-an-image


docker_image module structure doesn't seem to encourage the addition of
such functionality directly in it, so we may be forced to use the uri
module.
With new images uploaded potentially every 4 hours, this will become a
problem to be solved within the next two weeks.

We'd appreciate any input for an existing, in progress and/or better
solution for bulk deletion, and issues that may arise with our space
usage in dockerhub

Thanks



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Security of Meta-Data

2017-10-03 Thread Joshua Harlow

I would treat the metadata service as not secure.

From amazon docs (equivalent can be said about openstack):

'''
Important

Although you can only access instance metadata and user data from within 
the instance itself, the data is not protected by cryptographic methods. 
Anyone who can access the instance can view its metadata. Therefore, you 
should take suitable precautions to protect sensitive data (such as 
long-lived encryption keys). You should not store sensitive data, such 
as passwords, as user data.

'''

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html

So private keys would be a no-no, public keys would be ok (since they 
are public anyway).
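
To make that tangible: from inside the guest, any local process can read
the whole thing over plain HTTP. A hedged sketch (assumes the requests
library is available and the standard OpenStack metadata paths):

import requests

BASE = 'http://169.254.169.254/openstack/latest'
for path in ('/meta_data.json', '/user_data'):
    resp = requests.get(BASE + path, timeout=5)
    print(path, '->', resp.status_code)
    print(resp.text)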


Giuseppe de Candia wrote:

Hi Folks,


Are there any documented conventions regarding the security model for
MetaData?


Note that CloudInit allows passing user and ssh service public/private
keys via MetaData service (or ConfigDrive). One assumes it must be
secure, but I have not found a security model or documentation.


My understanding of the Neutron reference implementation is that
MetaData requests are HTTP (not HTTPS) and go from the VM to the
MetaData proxy on the Network Node (after which they are proxied to Nova
meta-data API server). The path from VM to Network Node using HTTP
cannot guarantee confidentiality and is also susceptible to
Man-in-the-Middle attacks.

Some Neutron drivers proxy Metadata requests locally from the node
hosting the VM that makes the query. I have mostly seen this
presented/motivated as a way of removing dependency on the Network node,
but it should also increase security. Yet, I have not seen explicit
discussions of the security model, nor any attempt to set a standard for
security of the meta-data.

Finally, there do not seem to be granular controls over what meta-data
is presented over ConfigDrive (when enabled) vs. meta-data REST API. As
an example, Nova vendor data is presented over both, if both are
enabled; config drive is presumably more secure.

thanks,
Pino





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo] Retiring openstack/pylockfile

2017-10-02 Thread Joshua Harlow

Impossible to list it all, lol

Doug Hellmann wrote:

Or https://github.com/jazzband

Now we need a project to list all of the organizations full of
unmaintained software...


On Oct 2, 2017, at 12:12 PM, Joshua Harlow <harlo...@fastmail.com> wrote:

Yup, +1 from me also,

Honestly might just be better to put pylockfile under ownership of:

https://github.com/pycontribs

I think the above is made for these kinds of things (vs oslo),

Thoughts?

-Josh

Ben Nemec wrote:

+1. I believe the work we had originally intended to put into pylockfile
ended up in the fasteners library instead.

On 09/29/2017 10:07 PM, ChangBo Guo wrote:

pylockfile was deprecated about two years ago in [1] and it is not
used in any OpenStack Projects now [2] , we would like to retire it
according to steps of retiring a project[3].


[1]c8798cedfbc4d738c99977a07cde2de54687ac6c#diff-88b99bb28683bd5b7e3a204826ead112

[2] http://codesearch.openstack.org/?q=pylockfile&i=nope&files=&repos=
[3]https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

--
ChangBo Guo(gcb)
Community Director @EasyStack


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo] Retiring openstack/pylockfile

2017-10-02 Thread Joshua Harlow

Yup, +1 from me also,

Honestly might just be better to put pylockfile under ownership of:

https://github.com/pycontribs

I think the above is made for these kinds of things (vs oslo),

Thoughts?

-Josh

Ben Nemec wrote:

+1. I believe the work we had originally intended to put into pylockfile
ended up in the fasteners library instead.

On 09/29/2017 10:07 PM, ChangBo Guo wrote:

pylockfile was deprecated about two years ago in [1] and it is not
used in any OpenStack Projects now [2] , we would like to retire it
according to steps of retiring a project[3].


[1]c8798cedfbc4d738c99977a07cde2de54687ac6c#diff-88b99bb28683bd5b7e3a204826ead112

[2] http://codesearch.openstack.org/?q=pylockfile&i=nope&files=&repos=
[3]https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

--
ChangBo Guo(gcb)
Community Director @EasyStack


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] patches for simple typo fixes

2017-09-26 Thread Joshua Harlow

Sean Dague wrote:

I think the concern is the ascribed motive for why people are putting
these up. That's fine to feel that people are stat padding (and that too
many things are driven off metrics). But, honestly, that's only
important if we make it important. Contributor stats are always going to
be pretty much junk stats. They are counting things to be the same which
are wildly variable in meaning (number of patches, number of Lines of
Code).


If this is a real thing (which I don't know, but I could believe it is) 
because management or others connect those stats to involvement (and 
likely, at some point, $$), why don't we just turn off 
http://stackalytics.com/, make it require a launchpad login (make it a 
little harder to access), or put a big warning banner on it saying these 
stats are not representative of much of anything...


The hard part is that it isn't us as a community who decide 'important if 
we make it important', because such motives are not directly tied to 
contributors; they may or may not be connected to the management of said 
contributors (and the management of the contributors who are paid to 
work on openstack has always been in the background somewhere, like a 
ghost...).


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.service] looping.RetryDecorator behavior

2017-09-22 Thread Joshua Harlow

Hi Renat,

I was more just saying that depending on your situation it might be 
better to switch to using tenacity (not that the retry decorator is 
deprecated, though I wouldn't personally use it).


As you mentioned in 
https://bugs.launchpad.net/oslo.service/+bug/1718635/comments/1, this 
class and decorator are not thread safe, so if that is a concern for you 
then tenacity also handles that better.


I think a lot of usage of that decorator though is like the following:

def main_func():

    @loopingcall.RetryDecorator(max_retry_count=3)
    def inner_func():
        pass  # ... the operation that may need retrying ...

    # ... other work ...
    inner_func()
    # ... more work ...

So likely thread-safety was never a concern, though I can't quite say... 
I think (but I might be wrong) that when that class/code was being 
proposed I recommended just using 'retrying' (the precursor to tenacity), 
but let bygones be bygones...
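
If anyone wants to compare, a rough equivalent with tenacity looks 
something like this (the retry policy and names here are made up purely 
for illustration):

import tenacity


@tenacity.retry(retry=tenacity.retry_if_exception_type(IOError),
                wait=tenacity.wait_exponential(multiplier=1, max=10),
                stop=tenacity.stop_after_attempt(5),
                reraise=True)
def inner_func():
    pass  # the operation that may need retrying


def main_func():
    # No wrapper function is needed, and each decorated call keeps its
    # own retry state, so this is safe to call from multiple threads.
    inner_func()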


Renat Akhmerov wrote:

Thanks Josh,

I’m not sure I fully understand your point though. You mean it’s a
legacy (deprecated?) code that we should never use in our code? Should
it be considered a private class of oslo_service?

In our global requirements tenacity is configured as "tenacity>=3.2.1”,
should we bump it to 4.4.0?

Renat Akhmerov
@Nokia

On 21 Sep 2017, 22:42 +0700, Joshua Harlow <harlo...@fastmail.com>, wrote:

It does look like it is sort of a bug,

Though in all honesty I wouldn't be using oslo.service or that looping
code in the future for doing retrying...

https://pypi.python.org/pypi/tenacity is a much better library with more
`natural` syntax and works more as one would expect (even under threaded
situations).

If I could have, I would have never let 'loopingcall.py' become a file that
exists, but the past is the past, ha.

Renat Akhmerov wrote:

Hi Oslo team,

Can you please check the bug [1]?

There may be a problem with how looping.RetryDecorator works. Just
stumbled on it in Mistral. Not sure if it’s really a bug or made by
design. If it’s by design then maybe we need to have more accurate
documentation for it.

FYI: We use this decorator in Mistral and it’s also used in Nova, [2].

[1] https://bugs.launchpad.net/oslo.service/+bug/1718635
[2]
http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/disk/mount/api.py

Thanks

Renat Akhmerov
@Nokia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.service] looping.RetryDecorator behavior

2017-09-21 Thread Joshua Harlow

It does look like it is sort of a bug,

Though in all honesty I wouldn't be using oslo.service or that looping 
code in the future for doing retrying...


https://pypi.python.org/pypi/tenacity is a much better library with more 
`natural` syntax and works more as one would expect (even under threaded 
situations).


If I could have, I would have never let 'loopingcall.py' become a file that 
exists, but the past is the past, ha.


Renat Akhmerov wrote:

Hi Oslo team,

Can you please check the bug [1]?

There may be a problem with how looping.RetryDecorator works. Just
stumbled on it in Mistral. Not sure if it’s really a bug or made by
design. If it’s by design then maybe we need to have more accurate
documentation for it.

FYI: We use this decorator in Mistral and it’s also used in Nova, [2].

[1] https://bugs.launchpad.net/oslo.service/+bug/1718635
[2]
http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/disk/mount/api.py

Thanks

Renat Akhmerov
@Nokia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] TripleO/Ansible PTG session

2017-09-18 Thread Joshua Harlow
Was there any discussion at the PTG on how the newly released AWX[1] 
will affect tripleo/ansible (will it?) or ara or such? Thoughts there?


[1] https://github.com/ansible/awx

James Slagle wrote:

On Wednesday at the PTG, TripleO held a session around our current use
of Ansible and how to move forward. I'll summarize the results of the
session. Feel free to add anything I forgot and provide any feedback
or questions.

We discussed the existing uses of Ansible in TripleO and how they
differ in terms of what they do and how they interact with Ansible. I
covered this in a previous email[1], so I'll skip over summarizing
those points again.

I explained a bit about the  "openstack overcloud config download"
approach implemented in Pike by the upgrades squad. This method
no-op's out the deployment steps during the actual Heat stack-update,
then uses the cli to query stack outputs to create actual Ansible
playbooks from those output values. The Undercloud is then used as the
Ansible runner to apply the playbooks to each Overcloud node.

I created a sequence diagram for this method and explained how it
would also work for initial stack deployment[2]:

https://slagle.fedorapeople.org/tripleo-ansible-arch.png

The high level proposal was to move in a direction where we'd use the
config download method for all Heat driven stack operations
(stack-create and stack-update).

We highlighted and discussed several key points about the method shown
in the diagram:

- The entire sequence and flow is driven via Mistral on the Undercloud
by default. This preserves the API layer and provides a clean reusable
interface for the CLI and GUI.

- It would still be possible to run ansible-playbook directly for
various use cases (dev/test/POC/demos). This preserves the quick
iteration via Ansible that is often desired.

- The remaining SoftwareDeployment resources in tripleo-heat-templates
need to be supported by config download so that the entire
configuration can be driven with Ansible, not just the deployment
steps. The success criteria for this point would be to illustrate
using an image that does not contain a running os-collect-config.

- The ceph-ansible implementation done in Pike could be reworked to
use this model. "config download" could generate playbooks that have
hooks for calling external playbooks, or those hooks could be
represented in the templates directly. The result would be the same
either way though in that Heat would no longer be triggering a
separate Mistral workflow just for ceph-ansible.

- We will need some centralized log storage for the ansible-playbook
results and should consider using ARA.

As it would be a lot of work to eventually make this method the
default, I don't expect or plan that we will complete all this work in
Queens. We can however start moving in this direction.

Specifically, I hope to soon add support to config download for the
rest of the SoftwareDeployment resources in tripleo-heat-templates as
that will greatly simplify the undercloud container installer. Doing
so will illustrate using the ephemeral heat-all process as simply a
means for generating ansible playbooks.

I plan to create blueprints this week for Queens and beyond. If you're
interested in this work, please let me know. I'm open to the idea of
creating an official squad for this work, but I'm not sure if it's
needed or not.

As not everyone was able to attend the PTG, please do provide feedback
about this plan as it should still be considered open for discussion.

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-July/119405.html
[2] https://slagle.fedorapeople.org/tripleo-ansible-arch.png



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Anyone know CNCF and how to get their landscape 'tweaked'

2017-09-15 Thread Joshua Harlow

Hi folks,

Something that has been bugging me (a tiny bit, not a lot) is the 
following CNCF landscape picture.


https://raw.githubusercontent.com/cncf/landscape/master/landscape/CloudNativeLandscape_v0.9.7.jpg

If you look for openstack in it (for some reason it's under bare metal) 
you may also get the weird feeling (as I did) that there is some kind of 
misunderstanding on the part of the CNCF leadership/technical 
community/... as to what openstack is.


I am wondering if we (or the openstack foundation?) can have a larger 
sit-down with those folks and explain to them what openstack is and why 
its components are not just bare metal...


Full cncf-toc thread at:

https://lists.cncf.io/pipermail/cncf-toc/2017-September/thread.html#1170

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Should we be using subprocess32?

2017-09-12 Thread Joshua Harlow

Hi folks,

I know there is a bunch of usage of subprocess in openstack, and 
especially since there is heavy usage of python 2.7, it made me wonder if 
we should try to move to subprocess32 to avoid some of the bugs that 
seem to exist (unless distributors have backported the fixes?):


For example a major one (seems to be):

- https://github.com/google/python-subprocess32/commit/6ef1fea55

"""Popen.wait() is now thread safe so that multiple

threads may be calling wait() or poll() on a Popen instance at the same time
without losing the Popen.returncode value.
"""

That one concerns me slightly, because I know that certain openstack 
projects do use threads (and not eventlet monkey-patched green-thread 
hybrids).
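
To be clear about what 'switching' would mean, it's basically a drop-in 
import swap on python 2.x; a minimal sketch (the conditional import is 
just one common way of doing it):

import sys

if sys.version_info[0] == 2:
    # Backport of the python 3.2+ subprocess module (thread-safe wait(),
    # timeouts, etc.) with the same API as the stdlib module.
    import subprocess32 as subprocess
else:
    import subprocess

proc = subprocess.Popen(["uname", "-a"], stdout=subprocess.PIPE)
out, _err = proc.communicate()
print(out)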


TLDR; should we (could we?) switch?

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Is shade (and os_client_config) thread safe?

2017-08-07 Thread Joshua Harlow

Hi there folks,

I'm doing various scans of our clouds here at godaddy and using shade to 
do some of the calls.


Though when I do stuff like the following sometimes it has issues...

http://paste.openstack.org/show/617712/

Typically this causes the following error:

Traceback (most recent call last):
  File "tools/fetch_flavors.py", line 72, in <module>
    main()
  File "tools/fetch_flavors.py", line 61, in main
    results.append(fut.result())
  File "/Users/jxharlow/.venv/lib/python2.7/site-packages/concurrent/futures/_base.py", line 422, in result
    return self.__get_result()
  File "/Users/jxharlow/.venv/lib/python2.7/site-packages/concurrent/futures/thread.py", line 62, in run
    result = self.fn(*self.args, **self.kwargs)
  File "tools/fetch_flavors.py", line 24, in extract_cloud
    client = shade.openstack_cloud(cloud=cloud_name)
  File "/Users/jxharlow/.venv/lib/python2.7/site-packages/shade/__init__.py", line 106, in openstack_cloud
    return OpenStackCloud(cloud_config=cloud_config, strict=strict)
  File "/Users/jxharlow/.venv/lib/python2.7/site-packages/shade/openstackcloud.py", line 156, in __init__
    self.image_api_use_tasks = cloud_config.config['image_api_use_tasks']
KeyError: 'image_api_use_tasks'

Though if I add a lock around the following then things go better:

SHADE_LOCK = threading.Lock()  # a single module-level lock

with SHADE_LOCK:
    client = shade.openstack_cloud(cloud=cloud_name)

So that makes me wonder: is, ummm, this library (or one of its 
dependencies) not thread-safe? Has anyone else seen similar things; 
perhaps they've already been fixed?


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][api] Backwards incompatible changes based on config

2017-08-04 Thread Joshua Harlow

Morgan Fainberg wrote:

On Fri, Aug 4, 2017 at 3:09 PM, Kevin L. Mitchell  wrote:

On Fri, 2017-08-04 at 14:52 -0700, Morgan Fainberg wrote:

Maybe not, but please do recall that there are many deployers out
there
that track master, not fixed releases, so we need to take that
level of
compatibility into account.


Any idea who these deployers are? I think I once knew who they might 
have been, but I'm not really sure anymore. Are they still doing this 
(and can they afford to)? Why don't we hear more about them? I'd expect 
that deployers (and their associated developer army) trying to do this 
would be the *most* active on IRC and the mailing list, yet I don't 
really see any such activity (which either means we never break them, 
which seems highly unlikely, or that they don't communicate through the 
normal channels, i.e. they go through some vendor, or that they just 
flat out don't exist anymore).


I'd personally really like to know how they do it (especially if they do 
not have an associated developer army)... Because they have always been 
a pink elephant that I've heard exists 'somewhere' and they manage to 
make this all work 'somehow'.


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-openstacksdk] Status of python-openstacksdk project

2017-08-04 Thread Joshua Harlow

Also note that this appears to exist:

https://github.com/openstack/python-openstackclient/blob/master/requirements.txt#L10

So even if python-openstacksdk is not a top level project, I would 
assume that it being a requirement would imply that it is? Or perhaps 
neither python-openstackclient nor python-openstacksdk should really 
be used? I've been telling people that python-openstackclient should be 
good to use (I hope that is still correct, though I do have to tell 
people to *not* use python-openstackclient from python itself, and only 
use it from bash/other shells).


Michael Johnson wrote:

Hi OpenStack developers,

I was wondering what is the current status of the python-openstacksdk
project.  The Octavia team has posted some patches implementing our new
Octavia v2 API [1] in the SDK, but we have not had any reviews.  I have also
asked some questions in #openstack-sdks with no responses.
I see that there are some maintenance patches getting merged and a pypi
release was made 6/14/17 (though not through releases project).  I'm not
seeing any mailing list traffic and the IRC meetings seem to have ended in
2016.

With all the recent contributor changes, I want to make sure the project
isn't adrift in the sea of OpenStack before we continue to spend development
time implementing the SDK for Octavia. We were also planning to use it as
the backing for our dashboard project.

Since it's not in the governance projects list I couldn't determine who the
PTL to ping would be, so I decided to ping the dev mailing list.

My questions:
1. Is this project abandoned?
2. Is there a plan to make it an official project?
3. Should we continue to develop for it?

Thanks,
Michael (johnsom)

[1]
https://review.openstack.org/#/q/project:openstack/python-openstacksdk+statu
s:open+topic:%255Eoctavia.*


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Tracing (all the places)

2017-08-03 Thread Joshua Harlow

vin...@vn.fujitsu.com wrote:

Hello harlowja,

I'm really happy to see that you are back in this `tracing` topic [and 
@boris-42 (too)].


We never left, haha, but ya, I can say (and probably boris would agree) 
that trying to get OSprofiler started and integrated somewhat 'burned' 
both of us (it involved a ton of convincing people of the value of it, 
when I had more hoped that the value of it was obvious). But I'm glad 
that people are starting to realize its value (even if they have to be 
told and educated by google or other companies that have been doing this 
for a long time).




Last week, we saw that Rajul proposed 02 new blueprint in OSprofiler [1] and 
[2].
Besides, some other blueprints are being implemented in OSprofiler
such as overhead control [3] and OpenTracing compatible [4] [5]
(Uber Jaeger [6] is one of OpenTracing compatible tracer out there).

For OpenTracing part, I have a PoC to make OSprofiler compatible with
OpenTracing specification at [5]. You can take a look at it this time too.
However, this time, I focus on reporting span/trace to other destinations
(rather than current drivers for OSprofiler[7]).

OpenTracing API is changing a little bit fast for now, therefore, some APIs 
will be deprecated soon.
I had some discussions with OpenTracing community about some trouble when 
making OSprofiler
compatible with OpenTracing API.


Ya, I expected this. Opentracing also, I think, has a python 
client/wrapper(?); have you looked at what it offers? (Last time I 
checked, most of opentracing was just a bunch of wrappers, and not much 
actual code that did anything unique.)




For OpenStack part, last cycle, Performance team and other OpenStack developers 
added
OSprofiler support for many other projects (Nova, Magnum, Ironic, Zun ...)
and Panko, Aodh, Swift are on the way.


Yippee, now the bigger question is where all the UIs visualizing the 
traces are (I know boris had https://boris-42.github.io/ngk.html but there 
has to be something nicer that perhaps the OpenTracing community has for 
a UI, ideally not a java monster like Zipkin, ha). Any thoughts there?




At last, hope you will join us (again) in OpenStack `tracing` things.


We shall see :-P



[1] 
https://blueprints.launchpad.net/osprofiler/+spec/asynchronous-trace-collection
[2] 
https://blueprints.launchpad.net/osprofiler/+spec/tail-based-coherent-sampling
[3] 
https://blueprints.launchpad.net/osprofiler/+spec/osprofiler-overhead-control
[4] https://blueprints.launchpad.net/osprofiler/+spec/opentracing-compatible
[5] https://review.openstack.org/#/c/480018/
[6] http://jaeger.readthedocs.io/en/latest/architecture/
[7] https://github.com/openstack/osprofiler/tree/master/osprofiler/drivers

Best regards,

Vinh Nguyen Trong
PODC – Fujitsu Vietnam Ltd.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Tracing (all the places)

2017-08-03 Thread Joshua Harlow
Since I think there was another thread out there around tracing, I 
thought I'd send out a few other links for folks that show tracing being 
added to multiple other popular projects (interesting to read over the 
proposals and such).


- 
https://github.com/grpc/grpc/blob/master/src/core/ext/census/README.md#census---a-resource-measurement-and-tracing-system


- https://github.com/kubernetes/kubernetes/issues/26507 (k8s tracing 
addition/proposal)


- http://jaeger.readthedocs.io/en/latest/architecture/

It'd be real nice to finally get some kind of tracing support integrated 
into openstack; osprofiler was started a long time ago and I think it's 
due time for it to actually be used and integrated :)
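
For anyone who hasn't looked, the osprofiler API itself is tiny; roughly 
something like the following (just a sketch, and note that a 
notifier/driver still has to be configured for the trace points to 
actually be shipped anywhere):

from osprofiler import profiler

profiler.init(hmac_key="SECRET_KEY")


def run_query():
    pass  # stand-in for some real work


@profiler.trace("resize-instance", info={"instance": "xyz"})
def resize_instance():
    # Nested trace points show up as children in the final trace tree.
    with profiler.Trace("db", info={"db.statement": "SELECT ..."}):
        run_query()


resize_instance()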


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.db] nominating Jay Pipes for oslo-db-core

2017-07-27 Thread Joshua Harlow

+1 from me.

-Josh

ChangBo Guo wrote:

+1000

2017-07-27 22:04 GMT+08:00 Doug Hellmann >:

I have noticed that Jay has been very deeply involved in several
recent design discussions about oslo.db, and he obviously has a
great deal of experience in the area, so even though he hasn't been
actively reviewing patches recently I think he would be a good
addition to the team. I have asked him, and he is interested and will
try to become more active as well.

Please indicate your opinion with +1/-1.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





--
ChangBo Guo(gcb)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] PTL nominations

2017-07-18 Thread Joshua Harlow

Thanks much for the service (and time and effort and more) gcb!!! :)

ChangBo Guo wrote:

Hi oslo folks,

The PTL nomination week is fast approaching [0], and as you might have
guessed by the subject of this email, I am not planning to run for
Queens, I'm still in the team and give some guidance about oslo PTL's
daily work as previous PTL did before .
It' my honor to be oslo PTL, I learned a lot  and grew quickly. It's
time to give someone else the opportunity to grow in the amazing role of
oslo PTL

[0]https://review.openstack.org/#/c/481768/4/configuration.yaml

--
ChangBo Guo(gcb)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-14 Thread Joshua Harlow
Out of curiosity: I keep on hearing/reading all the tripleo discussions 
on how tripleo folks are apparently thinking about (doing?) redesigning 
the whole thing to use ansible + mistral + heat, or ansible + kubernetes, 
or ansible + mistral + heat + ansible (a second time!), or ...


Seeing all those kinds of questions and suggestions around what should 
be used and why and how (and even this thread) makes me really wonder 
who actually uses tripleo and can afford/understand such kinds of changes?


Does anyone?

If there are, is there going to be an upgrade 
path for their existing cloud/s to whatever this solution is?


What operator(s) has the ability to do such a massive shift at this 
point in time? Who are these 'mystical' operators?


All this has really piqued my curiosity because I am personally trying 
to do that shift (not exactly the same solution...) and I know it is a 
massive undertaking (that will take quite a while to get right) even for 
a simple operator with limited needs out of openstack (i.e. godaddy); so I 
don't really understand how a generic solution for all existing 
tripleo operators can even work...


Flavio Percoco wrote:


Greetings,

As some of you know, I've been working on the second phase of TripleO's
containerization effort. This phase if about migrating the docker based
deployment onto Kubernetes.

These phase requires work on several areas: Kubernetes deployment,
OpenStack
deployment on Kubernetes, configuration management, etc. While I've been
diving
into all of these areas, this email is about the second point, OpenStack
deployment on Kubernetes.

There are several tools we could use for this task. kolla-kubernetes,
openstack-helm, ansible roles, among others. I've looked into these
tools and
I've come to the conclusion that TripleO would be better of by having
ansible
roles that would allow for deploying OpenStack services on Kubernetes.

The existing solutions in the OpenStack community require using Helm.
While I
like Helm and both, kolla-kubernetes and openstack-helm OpenStack
projects, I
believe using any of them would add an extra layer of complexity to
TripleO,
which is something the team has been fighting for years years -
especially now
that the snowball is being chopped off.

Adopting any of the existing projects in the OpenStack communty would
require
TripleO to also write the logic to manage those projects. For example,
in the
case of openstack-helm, the TripleO team would have to write either ansible
roles or heat templates to manage - install, remove, upgrade - the
charts (I'm
happy to discuss this point further but I'm keepping it at a high-level on
purpose for the sake of not writing a 10k-words-long email).

James Slagle sent an email[0], a couple of days ago, to form TripleO plans
around ansible. One take-away from this thread is that TripleO is adopting
ansible more and more, which is great and it fits perfectly with the
conclusion
I reached.

Now, what this work means is that we would have to write an ansible role
for
each service that will deploy the service on a Kubernetes cluster.
Ideally these
roles will also generate the configuration files (removing the need of
puppet
entirely) and they would manage the lifecycle. The roles would be
isolated and
this will reduce the need of TripleO Heat templates. Doing this would give
TripleO full control on the deployment process too.

In addition, we could also write Ansible Playbook Bundles to contain
these roles
and run them using the existing docker-cmd implementation that is coming
out in
Pike (you can find a PoC/example of this in this repo[1]).

Now, I do realize the amount of work this implies and that this is my
opinion/conclusion. I'm sending this email out to kick-off the
discussion and
gather thoughts and opinions from the rest of the community.

Finally, what I really like about writing pure ansible roles is that
ansible is
a known, powerfull, tool that has been adopted by many operators
already. It'll
provide the flexibility needed and, if structured correctly, it'll allow
for
operators (and other teams) to just use the parts they need/want without
depending on the full-stack. I like the idea of being able to separate
concerns
in the deployment workflow and the idea of making it simple for users of
TripleO
to do the same at runtime. Unfortunately, going down this road means
that my
hope of creating a field where we could collaborate even more with other
deployment tools will be a bit limited but I'm confident the result
would also
be useful for others and that we all will benefit from it... My hopes
might be a
bit naive *shrugs*

Flavio

[0]
http://lists.openstack.org/pipermail/openstack-dev/2017-July/119405.html
[1] https://github.com/tripleo-apb/tripleo-apbs

--
@flaper87
Flavio Percoco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [Glare][TC] Application for inclusion of Glare in the list of official projects

2017-07-10 Thread Joshua Harlow

Ed Leafe wrote:

On Jul 10, 2017, at 5:06 AM, Mikhail Fedosin > wrote:


Given all the advantages and features of Glare, I believe that it can
become the successful drop-in replacement.


Can you clarify this? Let’s assume I have a decent-sized deployment
running Glance. If I were to remove Glance and replace it with Glare,
are you saying that nothing would break? Operators, users, scripts,
SDKs, etc., would all work unchanged?


Sounds interesting,

Is there some kind of glance-compat API?



-- Ed Leafe





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][all][ptl] Contributor Portal and Better New Contributor On-boarding

2017-06-27 Thread Joshua Harlow

Boris Pavlovic wrote:


Overall it would take 1-2 days for people not familiar with OpenStack.

What about if one makes a "Sign-Up" page:

1) Few steps: provide Username, Contact info, Agreement, SSH key (and it
will do all work for you set Gerrit, OpenStack,...)
2) After one finished form it gets instruction for his OS how to setup
and run properly git review
3) Maybe few tutorials (how to find some bug, how to test it and where
are the docs, devstack, ...)


Sounds nice.

I wouldn't mind this as I also saw how painful it was (with the same 
intern).




That would simplify onboarding process...

Best regards,
Boris Pavlovic

On Mon, Jun 26, 2017 at 2:45 AM, Alexandra Settle > wrote:

I think this is a good idea :) thanks Mike. We get a lot of people
coming to the docs chan or ML asking for help/where to start and
sometimes it’s difficult to point them in the right direction.

__ __

Just from experience working with contributor documentation, I’d
avoid all screen shots if you can – updating them whenever the
process changes (surprisingly often) is a lot of unnecessary
technical debt.

__ __

The docs team put a significant amount of effort in a few releases
back writing a pretty comprehensive Contributor Guide. For the
purposes you describe below, I imagine a lot of the content here
could be adapted. The process of setting up for code and docs is
exactly the same:
http://docs.openstack.org/contributor-guide/index.html
 

__ __

I also wonder if we could include a ‘what is openstack’ 101 for new
contributors. I find that there is a **lot** of material out there,
but it is often very hard to explain to people what each project
does, how they all interact, why we install from different sources,
why do we have official and unofficial projects etc. It doesn’t have
to be seriously in-depth, but an overview that points people who are
interested in the right directions. Often this will help people
decide on what project they’d like to undertake.

__ __

Cheers,

__ __

Alex

__ __

*From: *Mike Perez >
*Reply-To: *"OpenStack Development Mailing List (not for usage
questions)" >
*Date: *Friday, June 23, 2017 at 9:17 PM
*To: *OpenStack Development Mailing List
>
*Cc: *Wes Wilson >,
"ild...@openstack.org "
>,
"knel...@openstack.org "
>
*Subject: *[openstack-dev] [docs][all][ptl] Contributor Portal and
Better New Contributor On-boarding

__ __

Hello all,

__ __

Every month we have people asking on IRC or the dev mailing list
having interest in working on OpenStack, and sometimes they're given
different answers from people, or worse, no answer at all. 

__ __

Suggestion: lets work our efforts together to create some common
documentation so that all teams in OpenStack can benefit.

__ __

First it’s important to note that we’re not just talking about code
projects here. OpenStack contributions come in many forms such as
running meet ups, identifying use cases (product working group),
documentation, testing, etc. We want to make sure those potential
contributors feel welcomed too!

__ __

What is common documentation? Things like setting up Git, the many
accounts you need to setup to contribute (gerrit, launchpad,
OpenStack foundation account). Not all teams will use some common
documentation, but the point is one or more projects will use them.
Having the common documentation worked on by various projects will
better help prevent duplicated efforts, inconsistent documentation,
and hopefully just more accurate information.

__ __

A team might use special tools to do their work. These can also be
integrated in this idea as well.

__ __

Once we have common documentation we can have something like:

 1. Choose your own adventure: I want to contribute by code

 2. What service type are you interested in? (Database, Block
storage, compute)

 3. Here’s step-by-step common documentation to setting up Git,
IRC, Mailing Lists, Accounts, etc.

 4. A service type project might choose to also include
additional documentation in that flow for special tools, etc.



Important things to note in this flow:

  

Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-08 Thread Joshua Harlow

Julien Danjou wrote:

On Thu, Jun 08 2017, Mike Bayer wrote:


So far I've seen a proposal of etcd3 as a replacement for memcached in
keystone, and a new dogpile connector was added to oslo.cache to handle
referring to etcd3 as a cache backend.  This is a really simplistic / minimal
kind of use case for a key-store.


etcd3 is not meant to be a cache. Synchronizing caching value using the
Raft protocol sounds a bit overkill. A cluster of memcached would be
probably of a better use.


Agreed from me,

My thinking is that people should look over https://raft.github.io/ or 
http://thesecretlivesofdata.com/raft/ (or both or others...)


At least read how it sort of works, before trying to put it everywhere 
(the same can and should be said for any new service), because it's not a 
solution for all the things.


The other big thing to know is how writes happen in this kind of system: 
they all go through a single node (the leader, which sends the same data 
to its followers and waits for a certain number of them to respond 
before committing).


Anyways, with great power comes great responsibility...

IMHO just be careful and understand the technology before using it for 
things it may not really be good for. Oh ya and perhaps someone will 
want to finally take more advantage of 
https://docs.openstack.org/developer/taskflow/jobs.html#overview (which 
uses the same concepts etcd exposes to make highly available workflows 
that can survive node failure).
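
(As a concrete example of the kind of usage that I think does fit: 
coordination primitives, not general data storage. A minimal sketch using 
tooz; the 'etcd3+http' connection string is an assumption about a locally 
running etcd and the etcd3gw driver.)

from tooz import coordination


def do_critical_work():
    print("only one member does this at a time")  # stand-in work


coord = coordination.get_coordinator('etcd3+http://127.0.0.1:2379',
                                     b'my-service-member-1')
coord.start(start_heart=True)

# A distributed lock backed by etcd; waiters block until the current
# holder releases (or its lease expires).
with coord.get_lock(b'resize-instance-xyz'):
    do_critical_work()

coord.stop()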





But, keeping in mind I don't know anything about etcd3 other than "it's another
key-store", it's the only database used by Kubernetes as a whole, which
suggests it's doing a better job than Redis in terms of "durable".


Not sure about that. And Redis has much more data structure than etcd,
that is can be faster/more efficient than etcd. But it does not have
Raft and a synchronisation protocol. Its clustering is rather poor in
comparison of etcd.


So I wouldn't be surprised if new / existing openstack applications
express some gravitational pull towards using it as their own
datastore as well. I'll be trying to hang onto the etcd3 track as much
as possible so that if/when that happens I still have a job :).


Sounds like a recipe for disaster. :)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-07 Thread Joshua Harlow
So just out of curiosity, but do people really even know what etcd is 
good for? I am thinking that there should be some guidance from folks in 
the community as to where etcd should be used and where it shouldn't 
(otherwise we just all end up in a mess).


Perhaps it would be a good idea to actually give examples of how it 
should be used, how it shouldn't be used, what it offers and what it 
doesn't... Or at least provide links for people to read up on this.


Thoughts?

Davanum Srinivas wrote:

One clarification: Since https://pypi.python.org/pypi/etcd3gw just
uses the HTTP API (/v3alpha) it will work under both eventlet and
non-eventlet environments.

Thanks,
Dims


On Wed, Jun 7, 2017 at 6:47 AM, Davanum Srinivas  wrote:

Team,

Here's the update to the base services resolution from the TC:
https://governance.openstack.org/tc/reference/base-services.html

First request is to Distros, Packagers, Deployers, anyone who
installs/configures OpenStack:
Please make sure you have latest etcd 3.x available in your
environment for Services to use, Fedora already does, we need help in
making sure all distros and architectures are covered.

Any project who want to use etcd v3 API via grpc, please use:
https://pypi.python.org/pypi/etcd3 (works only for non-eventlet services)

Those that depend on eventlet, please use the etcd3 v3alpha HTTP API using:
https://pypi.python.org/pypi/etcd3gw

If you use tooz, there are 2 driver choices for you:
https://github.com/openstack/tooz/blob/master/setup.cfg#L29
https://github.com/openstack/tooz/blob/master/setup.cfg#L30

If you use oslo.cache, there is a driver for you:
https://github.com/openstack/oslo.cache/blob/master/setup.cfg#L33

Devstack installs etcd3 by default and points cinder to it:
http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/etcd3
http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/cinder#n356

Review in progress for keystone to use etcd3 for caching:
https://review.openstack.org/#/c/469621/

Doug is working on proposal(s) for oslo.config to store some
configuration in etcd3:
https://review.openstack.org/#/c/454897/

So, feel free to turn on / test with etcd3 and report issues.

Thanks,
Dims

--
Davanum Srinivas :: https://twitter.com/dims






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mogan] Architecture diagrams

2017-05-31 Thread Joshua Harlow

Hi mogan folks,

I was doing some source code examination of mogan and it piqued my 
interest in how it is all connected together. In part I see there is a 
state machine, some taskflow usage, some wsgi usage that looks like 
parts of it are inspired(?) by various other projects.


That got me wondering if there are any decent diagrams or documents that 
explain how it all connects together, and I thought I might as well ask 
and see if there are any (50/50 chance? ha).


I am especially interested in the state machine, taskflow and such (no 
tooz seems to be there) and how they are used (if they are, or are going 
to be used); I guess in part because I know the most about those 
libraries/components :)


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Documenting config drive - what do you want to see?

2017-05-24 Thread Joshua Harlow

Matt Riedemann wrote:

Rocky tipped me off to a request to document config drive which came up
at the Boston Forum, and I tracked that down to Clark's wishlist
etherpad [1] (L195) which states:

"Document the config drive. The only way I have been able to figure out
how to make a config drive is by either reading nova's source code or by
reading cloud-init's source code."

So naturally I have some questions, and I'm looking to flesh the idea /
request out a bit so we can start something in the in-tree nova devref.

Question the first: is this existing document [2] helpful? At a high
level, that's more about 'how' rather than 'what', as in what's in the
config drive.


So yes and no; for example, there are afaik operational differences that 
are not mentioned. I believe the instance metadata (the "meta" field on 
that page) can be changed, and then a REST call to the metadata API will 
return the updated value, while the config-drive will never have it, so 
these kinds of gotchas really should be mentioned somewhere. Are there 
other gotchas like the above (where the metadata REST api will return a 
different/changing value while the config-drive stays immutable)?


Those would be really nice to know about/document and/or fix.

One that comes to mind: does the "security-groups" field change in the 
metadata REST api (while the config-drive stays the same) when a 
security group is added/removed...
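
A tiny sketch of the difference I mean (assuming the config drive is 
mounted at /mnt/config and the usual metadata endpoint is reachable):

import json

import requests

# The metadata REST api can reflect changes made after boot...
live = requests.get(
    "http://169.254.169.254/openstack/latest/meta_data.json",
    timeout=5).json()

# ...while the config drive was written once at instance creation and
# never changes (the mount point here is just an example).
with open("/mnt/config/openstack/latest/meta_data.json") as f:
    frozen = json.load(f)

print(live.get("meta"), frozen.get("meta"))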




Question the second: are people mostly looking for documentation on the
content of the config drive? I assume so, because without reading the
source code you wouldn't know, which is the terrible part.


For better/worse idk if there are that many people trying to figure out 
the contents; cloud-init tries to hide it behind the concept of a 
datasource (see 
https://cloudinit.readthedocs.io/en/latest/topics/datasources.html#datasource-documentation 
for a bunch of them) but yes I think a better job could be done 
explaining the contents (if just to make certain cloud-init `like` 
programs easier to make).




Based on this, I can think of a few things we can do:

1. Start documenting the versions which come out of the metadata API
service, which regardless of whether or not you're using it, is used to
build the config drive. I'm thinking we could start with something like
the in-tree REST API version history [3]. This would basically be a
change log of each version, e.g. in 2016-06-30 you got device tags, in
2017-02-22 you got vlan tags, etc.

2. Start documenting the contents similar to the response tables in the
compute API reference [4]. For example, network_data.json has an example
response in this spec [5]. So have an example response and a table with
an explanation of fields in the response, so describe
ethernet_mac_address and vif_id, their type, whether or not they are
optional or required, and in which version they were added to the
response, similar to how we document microversions in the compute REST
API reference.

--

Are there other thoughts here or things I'm missing? At this point I'm
just trying to gather requirements so we can get something started. I
don't have volunteers to work on this, but I'm thinking we can at least
start with some basics and then people can help flesh it out over time.



As one of the developers of cloud-init yes please to all the above.

Fyi,

https://cloudinit.readthedocs.io/en/latest/topics/datasources/configdrive.html 
is something cloud-init has (nothing like the detail that could be 
produced by nova itself).


`network_data.json` was one of those examples that was somewhat hard to 
figure out, but eventually the other cloud-init folks and I did.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-19 Thread Joshua Harlow

Mehdi Abaakouk wrote:

Not really, I just put some comments on reviews and discus this on IRC.
Since nobody except Telemetry have expressed/try to get rid of eventlet.


Octavia is using cotyledon and they have gotten rid of eventlet. Didn't 
seem like it was that hard either to do it (of course the experience in 
how easy it was is likely not transferable to other projects...)


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-18 Thread Joshua Harlow

Chris Friesen wrote:

On 05/16/2017 10:45 AM, Joshua Harlow wrote:

So fyi,

If you really want something like this:

Just use:

http://fasteners.readthedocs.io/en/latest/api/lock.html#fasteners.lock.ReaderWriterLock



And always get a write lock.

It is a slightly different way of getting those locks (via a context
manager)
but the implementation underneath is a deque; so fairness should be
assured in
FIFO order...


I'm going ahead and doing this. Your docs for fasteners don't actually
say that lock.ReaderWriterLock.write_lock() provides fairness. If you're
going to ensure that stays true it might make sense to document the fact.


Sounds great, I was starting to but then got busy with other stuff :-P



Am I correct that fasteners.InterProcessLock is basically as fair as the
underlying OS-specific lock? (Which should be reasonably fair except for
process scheduler priority.)


Yup, that IMHO would be fair; it's just fcntl under the covers (at least 
for linux). Though from what I remember at 
https://github.com/harlowja/fasteners/issues/26#issuecomment-253543912 
the lock class there seemed a little nicer (though more complex). That 
guy I think was going to propose some kind of merge, but that never 
seemed to appear.
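
For reference, basic fasteners.InterProcessLock usage is just this (a 
minimal sketch; the lock file path is whatever you want it to be):

import fasteners

# Backed by fcntl/flock style OS locks, so fairness is whatever the
# kernel gives you (usually good enough).
lock = fasteners.InterProcessLock('/tmp/my-tool.lock')
with lock:
    print("only one process at a time gets here")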





Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-16 Thread Joshua Harlow

Chris Friesen wrote:

On 05/16/2017 10:45 AM, Joshua Harlow wrote:

So fyi,

If you really want something like this:

Just use:

http://fasteners.readthedocs.io/en/latest/api/lock.html#fasteners.lock.ReaderWriterLock



And always get a write lock.

It is a slightly different way of getting those locks (via a context
manager)
but the implementation underneath is a deque; so fairness should be
assured in
FIFO order...


That might work as a local patch, but doesn't help the more general case
of fair locking in OpenStack. The alternative to adding fair locks in
oslo would be to add fairness code to all the various OpenStack services
that use locking, which seems to miss the whole point of oslo.


Replace 'openstack community' with 'python community'? ;)



In the implementation above it might also be worth using one condition
variable per waiter, since that way you can wake up only the next waiter
in line rather than waking up everyone only to have all-but-one of them
go back to sleep right away.



Ah good idea, I'll see about doing/adding/changing that.
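
Roughly what I have in mind (just a sketch of the idea, not what will 
actually land in fasteners) is a ticket-style lock with one condition per 
waiter, so a release wakes only the next waiter in line:

import collections
import threading


class FairLock(object):
    """Sketch: FIFO lock that wakes exactly one waiter per release."""

    def __init__(self):
        self._mutex = threading.Lock()
        self._waiters = collections.deque()
        self._held = False

    def acquire(self):
        with self._mutex:
            if not self._held and not self._waiters:
                self._held = True
                return
            waiter = threading.Condition(self._mutex)
            self._waiters.append(waiter)
            # Ownership is handed over directly by release(), so when
            # wait() returns this thread already holds the lock.
            waiter.wait()

    def release(self):
        with self._mutex:
            if self._waiters:
                self._waiters.popleft().notify()
            else:
                self._held = False

    def __enter__(self):
        self.acquire()
        return self

    def __exit__(self, *exc):
        self.release()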

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Joshua Harlow

My guess is it's the same with octavia.

https://github.com/openstack/octavia/tree/master/diskimage-create#diskimage-builder-script-for-creating-octavia-amphora-images

-Josh

Fox, Kevin M wrote:

+1. ironic and trove have the same issues as well. lowering the bar in order to 
kick the tires will help OpenStack a lot in adoption.

From: Sean Dague [s...@dague.net]
Sent: Tuesday, May 16, 2017 6:28 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] 
[tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
 do we want to be publishing binary container images?

On 05/16/2017 09:24 AM, Doug Hellmann wrote:




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.messaging] Call to deprecate the 'pika' driver in the oslo.messaging project

2017-05-16 Thread Joshua Harlow

Fine with me,

I'd personally rather get down to say 2 'great' drivers for RPC,

And say 1 (or 2?) for notifications.

So ya, wfm.

-Josh

Mehdi Abaakouk wrote:

+1 too, I haven't seen its contributors since a while.

On Mon, May 15, 2017 at 09:42:00PM -0400, Flavio Percoco wrote:

On 15/05/17 15:29 -0500, Ben Nemec wrote:



On 05/15/2017 01:55 PM, Doug Hellmann wrote:

Excerpts from Davanum Srinivas (dims)'s message of 2017-05-15
14:27:36 -0400:

On Mon, May 15, 2017 at 2:08 PM, Ken Giusti  wrote:

Folks,

It was decided at the oslo.messaging forum at summit that the pika
driver will be marked as deprecated [1] for removal.


[dims} +1 from me.


+1


Also +1


+1

Flavio

--
@flaper87
Flavio Percoco





__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-16 Thread Joshua Harlow

So fyi,

If you really want something like this:

Just use:

http://fasteners.readthedocs.io/en/latest/api/lock.html#fasteners.lock.ReaderWriterLock

And always get a write lock.

It is a slightly different way of getting those locks (via a context 
manager) but the implementation underneath is a deque; so fairness 
should be assured in FIFO order...


https://github.com/harlowja/fasteners/blob/master/fasteners/lock.py#L139

and

https://github.com/harlowja/fasteners/blob/master/fasteners/lock.py#L220
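
A tiny usage sketch (illustrative only):

import threading

import fasteners

rw_lock = fasteners.ReaderWriterLock()


def worker(n):
    # Only ever take the write side, so this acts as an exclusive lock
    # whose waiters are served in FIFO order.
    with rw_lock.write_lock():
        print("worker %d has the lock" % n)


threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()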

-Josh

Chris Friesen wrote:

On 05/15/2017 03:42 PM, Clint Byrum wrote:


In order to implement fairness you'll need every lock request to happen
in a FIFO queue. This is often implemented with a mutex-protected queue
of condition variables. Since the mutex for the queue is only held while
you append to the queue, you will always get the items from the queue
in the order they were written to it.

So you have lockers add themselves to the queue and wait on their
condition variable, and then a thread running all the time that reads
the queue and acts on each condition to make sure only one thread is
activated at a time (or that one thread can just always do all the work
if the arguments are simple enough to put in a queue).


Do you even need the extra thread? The implementations I've seen for a
ticket lock (in C at least) usually have the unlock routine wake up the
next pending locker.

Chris
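
For reference, a rough python sketch of the ticket-lock idea Chris
describes -- no extra thread needed, release simply advances the ticket
being served and wakes the waiters::

    import threading

    class TicketLock(object):
        """Fair (FIFO) lock: each acquirer takes a ticket and waits its turn."""

        def __init__(self):
            self._cond = threading.Condition()
            self._next_ticket = 0    # next ticket to hand out
            self._now_serving = 0    # ticket currently allowed to proceed

        def acquire(self):
            with self._cond:
                my_ticket = self._next_ticket
                self._next_ticket += 1
                while self._now_serving != my_ticket:
                    self._cond.wait()

        def release(self):
            with self._cond:
                self._now_serving += 1
                # Wake everyone; only the next ticket holder passes its check.
                self._cond.notify_all()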

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-10 Thread Joshua Harlow

Dmitry Tantsur wrote:

On 05/09/2017 07:59 PM, Joshua Harlow wrote:

Matt Riedemann wrote:

On 5/8/2017 1:10 PM, Octave J. Orgeron wrote:

I do agree that scalability and high-availability are definitely issues
for OpenStack when you dig deeper into the sub-components. There is a
lot of re-inventing of the wheel when you look at how distributed
services are implemented inside of OpenStack and deficiencies. For some
services you have a scheduler that can scale-out, but the conductor or
worker process doesn't. A good example is cinder, where cinder-volume
doesn't scale-out in a distributed manner and doesn't have a good
mechanism for recovering when an instance fails. All across the
services
you see different methods for coordinating requests and tasks such as
rabbitmq, redis, memcached, tooz, mysql, etc. So for an operator, you
have to sift through those choices and configure the per-requisite
infrastructure. This is a good example of a problem that should be
solved with a single architecturally sound solution that all services
can standardize on.


There was an architecture workgroup specifically designed to understand
past architectural decisions in OpenStack, and what the differences are
in the projects, and how to address some of those issues, but from lack
of participation the group dissolved shortly after the Barcelona summit.
This is, again, another example of if you want to make these kinds of
massive changes, it's going to take massive involvement and leadership.


I agree on the 'massive changes, it's going to take massive
involvement and leadership.' though I am not sure how such changes and
involvement actually happen; especially nowadays when companies
with such leadership are moving on to something else (k8s, mesos, or
other...)

So knowing that, what are the options to actually make some kind of
change occur? IMHO it must be driven by PTLs (yes I know they are
always busy, too bad, so sad, lol). I'd like all the PTLs to get
together and restart the arch-wg and make it a *requirement* that PTLs
actually show up (and participate) in that group/meeting vs it just
being a bunch of senior(ish) folks, such as myself, that showed up.
Then if PTLs do not show up, I would start to say that the next time
they run for PTL, said lack of participation in the wider openstack
vision should be made known and potentially cause them to get kicked
out (voted out?) of being a PTL in the future.


Now we have whom to blame. Problem solved?


Not likely problem solved just yet, but sometimes tough love (IMHO) is 
needed. I believe it is; you may disagree and that's cool, but then I 
might give you some tough love also, lol.








The problem in a lot of those cases comes down to development being
detached from the actual use cases customers and operators are going to
use in the real world. Having a distributed control plane with multiple
instances of the api, scheduler, coordinator, and other processes is
typically not testable without a larger hardware setup. When you get to
large scale deployments, you need an active/active setup for the
control
plane. It's definitely not something you could develop for or test
against on a single laptop with devstack. Especially, if you want to
use
more than a handful of the OpenStack services.


I've heard *crazy* things about actual use cases customers and
operators are doing because of the scaling limits that projects have
(ie nova has a limit of 300 compute nodes so ABC customer will then
set up X * 300 clouds to reach Y compute nodes because of that limit).

IMHO I'm not even sure I would want to target said use-cases in the
first place, because they feel messed up (and it seems bad/dumb? to go
down the rabbit hole of targeting use-cases that were deployed to
band-aid over the initial problems that created those
use-cases/deployments in the first place).



I think we can all agree with this. Developers don't have a lab with
1000 nodes lying around to hack on. There was OSIC but that's gone. I've
been requesting help in Nova from companies to do scale testing and help
us out with knowing what the major issues are, and report those back in
a form so we can work on those issues. People will report there are
issues, but not do the profiling, or at least not report the results of
profiling, upstream to help us out. So again, this is really up to
companies that have the resources to do this kind of scale testing and
report back and help fix the issues upstream in the community. That
doesn't require OpenStack 2.0.



So how do we close that gap? The only way I really know is by having
people that can see the problems from the get-go, instead of having to
discover it at some later point (when it falls over and ABC customer
starts running Y clouds just to reach the target number of compute
nodes they want to reach). Now maybe the skill level in openstack
(especially in regards to distributed systems) is just too low and the
only real way

Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-09 Thread Joshua Harlow

Matt Riedemann wrote:

On 5/8/2017 1:10 PM, Octave J. Orgeron wrote:

I do agree that scalability and high-availability are definitely issues
for OpenStack when you dig deeper into the sub-components. There is a
lot of re-inventing of the wheel when you look at how distributed
services are implemented inside of OpenStack and deficiencies. For some
services you have a scheduler that can scale-out, but the conductor or
worker process doesn't. A good example is cinder, where cinder-volume
doesn't scale-out in a distributed manner and doesn't have a good
mechanism for recovering when an instance fails. All across the services
you see different methods for coordinating requests and tasks such as
rabbitmq, redis, memcached, tooz, mysql, etc. So for an operator, you
have to sift through those choices and configure the per-requisite
infrastructure. This is a good example of a problem that should be
solved with a single architecturally sound solution that all services
can standardize on.


There was an architecture workgroup specifically designed to understand
past architectural decisions in OpenStack, and what the differences are
in the projects, and how to address some of those issues, but from lack
of participation the group dissolved shortly after the Barcelona summit.
This is, again, another example of if you want to make these kinds of
massive changes, it's going to take massive involvement and leadership.


I agree on the 'massive changes, it's going to take massive involvement 
and leadership.' though I am not sure how such changes and involvement 
actually happen; especially nowadays when companies with such 
leadership are moving on to something else (k8s, mesos, or other...)


So knowing that, what are the options to actually make some kind of 
change occur? IMHO it must be driven by PTLs (yes I know they are always 
busy, too bad, so sad, lol). I'd like all the PTLs to get together and 
restart the arch-wg and make it a *requirement* that PTLs actually show 
up (and participate) in that group/meeting vs it just being a bunch of 
senior(ish) folks, such as myself, that showed up. Then if PTLs do not 
show up, I would start to say that the next time they run for PTL, said 
lack of participation in the wider openstack vision should be made known 
and potentially cause them to get kicked out (voted out?) of being a PTL 
in the future.




The problem in a lot of those cases comes down to development being
detached from the actual use cases customers and operators are going to
use in the real world. Having a distributed control plane with multiple
instances of the api, scheduler, coordinator, and other processes is
typically not testable without a larger hardware setup. When you get to
large scale deployments, you need an active/active setup for the control
plane. It's definitely not something you could develop for or test
against on a single laptop with devstack. Especially, if you want to use
more than a handful of the OpenStack services.


I've heard *crazy* things about actual use cases customers and operators 
are doing because of the scaling limits that projects have (ie nova has 
a limit of 300 compute nodes so ABC customer will then set up X * 300 
clouds to reach Y compute nodes because of that limit).

IMHO I'm not even sure I would want to target said use-cases in the 
first place, because they feel messed up (and it seems bad/dumb? to go 
down the rabbit hole of targeting use-cases that were deployed to 
band-aid over the initial problems that created those 
use-cases/deployments in the first place).




I think we can all agree with this. Developers don't have a lab with
1000 nodes lying around to hack on. There was OSIC but that's gone. I've
been requesting help in Nova from companies to do scale testing and help
us out with knowing what the major issues are, and report those back in
a form so we can work on those issues. People will report there are
issues, but not do the profiling, or at least not report the results of
profiling, upstream to help us out. So again, this is really up to
companies that have the resources to do this kind of scale testing and
report back and help fix the issues upstream in the community. That
doesn't require OpenStack 2.0.



So how do we close that gap? The only way I really know is by having 
people that can see the problems from the get-go, instead of having to 
discover it at some later point (when it falls over and ABC customer 
starts running Y clouds just to reach the target number of compute 
nodes they want to reach). Now maybe the skill level in openstack 
(especially in regards to distributed systems) is just too low and the 
only real way to gather data is by having companies do scale testing 
(ie some kind of architecting things to work after they are deployed); 
if so that's sad...


-Josh

__
OpenStack Development Mailing List (not for usage 

Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Joshua Harlow

Fox, Kevin M wrote:

Note, when I say OpenStack below, I'm talking about 
nova/glance/cinder/neutron/horizon/heat/octavia/designate. No offence to the 
other projects intended. just trying to constrain the conversation a bit... 
Those parts are fairly comparable to what k8s provides.

I think part of your point is valid, that k8s isn't as feature rich in some 
ways, (networking for example), and will get more complex in time. But it has a 
huge amount of functionality for significantly less effort compared to an 
OpenStack deployment with similar functionality today.

I think there are some major things different between the two projects that are 
really paying off for k8s over OpenStack right now. We can use those as 
learning opportunities moving forward or the gap will continue to widen, as 
will the user migrations away from OpenStack. These are mostly architectural 
things.

Versions:
  * The real core of OpenStack is essentially version 1 + iterative changes.
  * k8s is essentially the third version of Borg. Plenty of room to ditch bad 
ideas/decisions.

That means OpenStack's architecture has essentially grown organically rather 
than being as carefully thought out. The backwards compatibility has been a 
good goal, but it's so hard to upgrade that most places burn it down and stand 
up something new anyway, so it's a lot of work with a lot less payoff than you 
would think. Maybe it is time to consider OpenStack version 2...



Ya, just to add a thought (I agree with a lot of the rest, it's a very 
nice summary btw IMHO), but afaik a lot of people (I can't really 
quantify the number, for obvious reasons) have been burned by upgrading 
or operating what is being produced, so there is a lot of trust that has 
to be regained after such things not working from day zero. Trust is a 
really really hard thing to get back once it is lost, and though I think 
it is great that people are trying, what are we doing to regain that 
trust?


I honestly don't quite know. Anyone else?


I think OpenStack's greatest strength is its standardized api's. Thus far we've 
been changing the api's over time and keeping the implementation mostly the 
same... maybe we should consider keeping the api the same and switch some of 
the implementations out... It might take a while to get back to where we are 
now, but I suspect the overall solution would be much better now that we have 
so much experience with building the first one.

k8s and OpenStack do largely the same thing. get in user request, schedule the 
resource onto some machines and allow management/lifecycle of the thing.

Why then does k8s's scalability goal target 5000 nodes while OpenStack really 
struggles with more than 300 nodes without a huge amount of extra work? I think 
it's architecture. OpenStack really abuses rabbit, does a lot with relational 
databases that maybe are better done elsewhere, and forces isolation between 
projects that maybe is not the best solution.

Part of it I think is combined services. They don't have separate services for 
cinder-api/nova-api/neutron-api/heat-api/etc. Just kube-apiserver. Same with 
the *-schedulers, just kube-scheduler. This means many fewer things to manage 
for ops, and allows for faster communication times (lower latency). In theory 
OpenStack could scale out much better with the finer grained services, but I'm 
not sure that's really ever shown true in practice.

Layered/eating own dogfood:
  * OpenStack assumes the operator will install "all the things".
  * K8s uses k8s to deploy lots of itself.

You use kubelet with the same yaml file format normally used to deploy stuff to 
deploy etcd/kube-apiserver/kube-scheduler/kube-controller-manager to get a 
working base system.
You then use the base system to launch sdn, ingress, service-discovery, the ui, 
etc.

This means the learning required is substantially less when it comes to 
debugging problems, performing upgrades, etc, because it's the same for the 
most part for k8s as it is for any other app running on it. The learning cost 
is way lower.

Versioning:
  * With OpenStack, Upgrades are hard, mismatched version servers/agents are 
hard/impossible.
  * K8s, they support the controllers being 2 versions ahead of the clients.

It's hard to bolt this on after the fact, but it's also harder when you have 
multiple communication channels to do it with. Having to do it in http, sql, 
and rabbit messages makes it so much harder. Having only one place talk to the 
single datastore (etcd) makes that easier, as does only having one place where 
everything interacts with the servers: kube-apiserver.

Some amount of distribution:
  * OpenStack components are generally expected to come from distro's.
  * K8s, Core pieces like kube-apiserver are distributed prebuilt and ready to 
go in container images if you chose to use them.

Minimal silos:
  * the various OpenStack projects are very silo'ed.
  * Most of the k8s subsystems currently are all tightly integrated with each 
other and are 

Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-24 Thread Joshua Harlow

Just for some insight,

This is the stuff we are putting into our kolla image(s):

{% block openstack_base_header %}
ADD additions-archive /

# Because we have a internal pip with internal packages.
RUN cp /additions/pip/pip.conf /etc/pip.conf
ENV PIP_CONFIG_FILE /etc/pip.conf

LABEL JENKINS_BUILD=${jenkins_job.build_number}
LABEL JENKINS_BUILD_TAG=${jenkins_job.build_tag}
LABEL JENKINS_BUILD_URL=${jenkins_job.build_url}
{% endblock %}

So the above shows up, and can be linked back into the jenkins job that 
built the images.


Then inside the image we have the following:

$ cat /job.json | python -mjson.tool
{
    "build_id": "15",
    "build_name": null,
    "build_number": "15",
    "build_tag": "jenkins-cinder-15",
    "build_url": "https://cloud1.jenkins.int.godaddy.com/job/cinder/15/",
    "kolla_customizations": {
        "extra_requirements": [],
        "repositories": [
            {
                "name": "kolla",
                "ref": "stable/newton",
                "url": "git://git.openstack.org/openstack/kolla"
            },
            {
                "name": "clean",
                "ref": "7.0.1",
                "url": "git://git.openstack.org/openstack/cinder.git"
            },
            {
                "name": "dirty",
                "ref": "7.0.1",
                "url": "git://git.openstack.org/openstack/cinder.git"
            },
            {
                "name": "deploy",
                "ref": "master",
                "url": "g...@github.secureserver.net:cloudplatform/openstack-deploy.git"
            },
            {
                "name": "requirements",
                "ref": "stable/liberty",
                "url": "git://git.openstack.org/openstack/requirements"
            }
        ],
        "template_overrides": "\n\n{% set cinder_volume_packages_override = ['python-rados', 'python-rbd', 'ceph'] %}\n\n"
    },
    "project": {
        "author": "OpenStack",
        "fullname": "cinder-7.0.1",
        "name": "cinder",
        "url": "http://www.openstack.org/",
        "version": "7.0.1"
    },
    "run": {
        "deploy_git": null,
        "deploy_ref": null,
        "git_target_repo": null,
        "git_target_repo_branch": null,
        "kolla_git": null,
        "kolla_image_namespace": null,
        "kolla_ref": null,
        "maintainers": null,
        "project": null,
        "project_git": null,
        "project_ref": null,
        "pushing_to_artifactory": null,
        "requirement_git": null,
        "requirement_ref": null,
        "running_unit_tests": null,
        "unit_test_path": null
    }
}

Though the run dict values are `null`, gotta fix that, haha.

But the general idea is hopefully clear.

-Josh

Steve Baker wrote:



On Thu, Apr 20, 2017 at 8:12 AM, Michał Jastrzębski wrote:

So after discussion started here [1] we came up with something like
that:

1. Docker build will create "fingerprint" - manifesto of versions
saved somewhere (LABEL?)


This would be great, especially a full package version listing in an
image label. However I don't see an easy way of populating a label from
data inside the image. Other options could be:
- have a script inside the image in a known location which generates the
package manifest on the fly, do a docker run whenever you need to get a
manifest to compare with another image.
- write out the package list during image build to a known location, do
a docker run to cat out its contents when needed

As for the format, taking a yum only image as an example would we need
anything more than the output of "rpm -qa | sort"?
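
A quick sketch of what that comparison could look like (assuming docker
is on $PATH and the image has rpm available; the function names here are
just for illustration)::

    import subprocess

    def package_manifest(image):
        # Run the image and dump its package list; "rpm -qa | sort" is enough
        # to tell whether two images actually differ in content.
        out = subprocess.check_output(
            ['docker', 'run', '--rm', image, 'sh', '-c', 'rpm -qa | sort'])
        return out.decode('utf-8').splitlines()

    def images_differ(source_image, dest_image):
        return package_manifest(source_image) != package_manifest(dest_image)

Only push (and bump the tag-revision) when images_differ(...) says
something really changed.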

2. We create new CLI tool kolla-registry for easier management of
pushing and versioning
3. kolla-registry will be able to query existing source docker
registry (incl. dockerhub) for latest tag-revision and it's version
manifesto, also dest registry for tags-revisions and manifesto
4. if source image manifesto != dest image manifesto -> push source
image to dest registry and increase tag-revision by 1
5. kolla-registry will output latest list of images:tags-revisions
available for kolla-k8s/ansible to consume
6. we keep :4.0.0 style images for every tag in kolla repository.
These are static and will not be revised.


Yes, this is fine, but please keep in mind that this change[1] could be
merged without changing these published 4.0.0 style image tags, with the
added advantage that locally built images from a git checkout of kolla
have a less ambiguous default tag.
[1] https://review.openstack.org/#/c/448380/

Different scenarios can be handled this way
1. Autopushing to dockerhub will query the freshest built registry
(tarballs, source) and dockerhub (dest); it will create
image:branchname (nova-api:ocata) for HEAD of the stable branch every run
and image:branchname-revision with a revision increase
2. Users will have an easy time managing their local registry 

Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-04-20 Thread Joshua Harlow

Doug Hellmann wrote:

Excerpts from gordon chung's message of 2017-04-20 17:12:26 +:

On 20/04/17 01:32 AM, Joshua Harlow wrote:

Wasn't there also some decision made in austin (?) about how we as a
group stated something along the lines of co-installability isn't as
important as it once was (and may not even be practical or what people
care about anymore anyway)?


I don't remember that, but I may not have been in the room at the
time.  In the past when we've discussed that idea, we've continued
to maintain that co-installability is still needed for distributors
who have packaging constraints that require it and for use cases
like single-node deployments for POCs.


Ya, looking back I think it was:

https://etherpad.openstack.org/p/newton-global-requirements

I think that was Robert that led that session, but I might be incorrect 
there.





With kolla becoming more popular (tripleo I think is using it, and ...)
and the containers it creates making isolated per-application
environments it makes me wonder what of global-requirements is still
valid (as a concept) and what isn't.


We still need to review dependencies for license compatibility, to
minimize redundancy, and to ensure that we're not adding things to
the list that are not being maintained upstream. Even if we stop syncing
versions, official projects need to do those reviews, and having the
global list is a way to ensure that the reviews are done.


I do remember the days of free for all requirements (or requirements
sometimes just put/stashed in devstack vs elsewhere), which I don't
really want to go back to; but if we finally all agree that
co-installability isn't what people actually do and/or care about
(anymore?) then maybe we can re-think some things?

agree with all of ^... but i imagine to make progress on this, we'd have
to change/drop devstack usage in gate and that will take forever and a
lifetime (is that a chick flick title?) given how embedded devstack is
in everything. it seems like the solution starts with devstack.

cheers,



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-04-19 Thread Joshua Harlow


Doug Hellmann wrote:

Excerpts from Clark Boylan's message of 2017-04-19 08:10:43 -0700:

On Wed, Apr 19, 2017, at 05:54 AM, Julien Danjou wrote:

Hoy,

So the Gnocchi gate is all broken (again) because it depends on "pbr" and
some new release of oslo.* depends on pbr!=2.1.0.

Neither Gnocchi nor Oslo cares about whatever bug there is in pbr 2.1.0
that got it banished by the requirements Gods. It does not prevent it from
being used e.g. to install the software or get version information. But it
does break anything that is not in OpenStack because, well, pip installs
the latest pbr (2.1.0) and then oslo.* is unhappy about it.

It actually breaks everything, including OpenStack. Shade and others are
affected by this as well. The specific problem here is that PBR is a
setup_requires which means it gets installed by easy_install before
anything else. This means that the requirements restrictions are not
applied to it (neither are the constraints). So you get latest PBR from
easy_install then later when something checks the requirements
(pkg_resources console script entrypoints?) they break because latest
PBR isn't allowed.
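
For anyone not familiar with why pbr is special here: the canonical
pbr-using setup.py is basically just this, so pbr has to be fetched
before pip's requirements/constraints machinery ever gets a say::

    # setup.py (the standard pbr layout)
    import setuptools

    # setup_requires is resolved by easy_install at the very start of the
    # build, which is why the constraints list never applies to pbr itself.
    setuptools.setup(
        setup_requires=['pbr'],
        pbr=True)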

We need to stop pinning PBR and more generally stop pinning any
setup_requires (there are a few more now since setuptools itself is
starting to use that to list its deps rather than bundling them).


So I understand the culprit is probably pip installation scheme, and we
can blame him until we fix it. I'm also trying to push pbr 2.2.0 to
avoid the entire issue.

Yes, a new release of PBR undoing the "pin" is the current sane step
forward for fixing this particular issue. Monty also suggested that we
gate global requirements changes on requiring that changes not pin any
setup_requires.


But for the future, could we stop updating the requirements in oslo libs
for no good reason? just because some random OpenStack project hit a bug
somewhere?

For example, I've removed requirements update on tooz¹ for more than a
year now, which did not break *anything* in the meantime, proving that
this process is giving more problems than solutions. Oslo libs doing that
automatic update introduce more pain for all consumers than anything (at
least not in OpenStack).

You are likely largely shielded by the constraints list here which is
derivative of the global requirements list. Basically by using
constraints you get distilled global requirements and even without being
part of the requirements updates you'd be shielded from breakages when
installed via something like devstack or other deployment method using
constraints.


So if we care about Oslo users outside OpenStack, I beg us to stop this
craziness. If we don't, we'll just spend time getting rid of Oslo over
the long term…

I think we know from experience that just stopping (eg reverting to the
situation we had before requirements and constraints) would lead to
sadness. Installations would frequently be impossible due to some
unresolvable error in dependency resolution. Do you have some
alternative in mind? Perhaps we loosen the in project requirements and
explicitly state that constraints are known to work due to testing and
users should use constraints? That would give users control to manage
their own constraints list too if they wish. Maybe we do this in
libraries while continuing to be more specific in applications?


At the meeting in Austin, the requirements team accepted my proposal
to stop syncing requirements updates into projects, as described
in https://etherpad.openstack.org/p/ocata-requirements-notes

We haven't been able to find anyone to work on the implementation,
though. It is my understanding that Tony did contact the Telemetry
and Swift teams, who are most interested in this area of change,
about devoting some resources to the tasks outlined in the proposal.

Doug


My 2c,

Cheers,



Wasn't there also some decision made in austin (?) about how we as a 
group stated something along the lines of co-installability isn't as 
important as it once was (and may not even be practical or what people 
care about anymore anyway)?


With kolla becoming more popular (tripleo I think is using it, and ...) 
and the containers it creates making isolated per-application 
environments it makes me wonder what of global-requirements is still 
valid (as a concept) and what isn't.


I do remember the days of free for all requirements (or requirements 
sometimes just put/stashed in devstack vs elsewhere), which I don't 
really want to go back to; but if we finally all agree that 
co-installability isn't what people actually do and/or care about 
(anymore?) then maybe we can re-think some things?


I personally still like having an ability to know some set of 
requirements works for certain project X for a given release Z (as 
tested by the gate); though I am not really concerned about whether the same 
set of requirements works for certain project Y (also in release Z). If 
this is something others agree with then perhaps we just need to store 
those requirements and the 

Re: [openstack-dev] [Taskflow] Current state or the project ?

2017-04-19 Thread Joshua Harlow

Robin De-Lillo wrote:

Hello Guys,

I'm Robin, a software developer for a VFX company based in Canada. As the
company grows, we are currently looking into redesigning our internal
processes and workflows in a more nodal/graph based approach.

Ideally we would like to start from an existing library so we don't
re-implement things from scratch. We found TaskFlow which, after a
couple of tests, looks very promising to us. Good work with that!

We were wondering: what is the current state of this project? Is it
still something under active development or a priority for OpenStack?
As we would definitely be happy to contribute to this library in the
future, we are just gathering information for now to ensure we
pick the best solution that suits our needs.

Thanks a lot,
Robin De Lillo



Hi there!

So what you describe seems like a good fit for taskflow, since its 
engine concept is really built around exactly that 'nodal/graph based 
approach' (ie the engine[1] really is just a bunch of code around graph 
traversal in various orders depending on task execution, using the 
futures concept/paradigm, results and such).
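
To make that concrete, a tiny sketch along the lines of the docs'
examples (task and flow names here are invented; the edge between the
tasks is inferred from what each task provides/requires)::

    from taskflow import engines
    from taskflow.patterns import graph_flow
    from taskflow import task

    class Render(task.Task):
        default_provides = 'frame'

        def execute(self, scene):
            return 'rendered(%s)' % scene

    class Composite(task.Task):
        default_provides = 'final'

        def execute(self, frame):
            return 'composited(%s)' % frame

    # The graph edge Render -> Composite comes from the shared 'frame' symbol.
    flow = graph_flow.Flow('render-pipeline').add(Render(), Composite())
    results = engines.run(flow, store={'scene': 'shot-042'})
    print(results['final'])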


Any way we can get more details on what you might want to be doing? That 
might help further distill whether it's a good fit or not. If you can't say, 
that's ok too (depends on the project/company and all that).


So about the current state.

It's still alive, development has slowed a little (in that I haven't 
been active as much after I moved to godaddy, where I'm helping revamp 
some of their deployment, automation... and operational aspects of 
openstack itself); but it still IMHO gets fixes and I'm more than 
willing and able to help folks out in learning some stuff. So I wouldn't 
say super-active, but ongoing as needed (which I think is somewhat 
common for more of oslo than I would like to admit); though don't take 
that negatively :)


Others (with slightly less bias than I might have, haha) should chime 
in on their experiences too, I think :)


The question around 'priority for OpenStack', that's a tough one, 
because I think the priorities of OpenStack are sort of end-user / 
deployer/operator ... defined, so it's slightly hard to identify what 
they are (besides 'make OpenStack great again', lol).


What other solutions are you thinking of/looking at/considering?

Typically what I've seen are celery, RQ (redis) and probably a few 
others that I listed once @ 
https://docs.openstack.org/developer/taskflow/shelf.html#libraries-frameworks 
(all of these share similar 'aspects' as taskflow, to some degree).


That's my 3 cents ;)

-Josh

[1] https://docs.openstack.org/developer/taskflow/engines.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [shade] help wanted - tons to do, not enough people

2017-04-12 Thread Joshua Harlow

Just a question, not meant as anything bad against shade,

But would effort be better spent on openstacksdk?

Take the good parts of shade and just move it to openstacksdk, perhaps 
as a 'higher level api' available in openstacksdk?


Then the ansible openstack components (which I believe use shade) could 
switch to openstacksdk and all will be merry...


Thoughts?

Monty Taylor wrote:

Hey everybody!

This isn't the most normal thing I've done, but whatever.

We've got this great library, shade, which people love to use to
interact with their OpenStack clouds (even when they don't know they're
using it, like when they're using Ansible) - and which we use in Infra
behind nodepool to get test nodes for everyone's test jobs.

Unfortunately, we have a somewhat ludicrously small contributor base
right now (primarily me). Now, I'm pretty darned good and can produce a
constant stream of patches - so you might not _think_ there's a shortage
of developers, but I am, in the end, only one person, and that's not
great. Shrews has been the other primary developer, but he's been
focusing his attention on zuulv3 (which is a very good thing)

In any case - I'd love some more folks to come in and do some things -
and maybe none of you knew we were looking for new people! So come have
fun with us! We have an IRC channel: #openstack-shade - and bugs are
tracked in Storyboard (https://storyboard.openstack.org/#!/project/760)

There is a worklist for bugs that have been triaged as important - that
is, there is a clearly broken behavior out in production for someone:

https://storyboard.openstack.org/#!/worklist/194

The most important project (that's not terrible to get started on) is
one I'm calling "restification" - which is that we're replacing our use
of python-*client with making direct REST calls via keystoneauth. This
is important so that we can ensure that we're supporting backwards
compat for our users, while maintaining co-installability with OpenStack
releases.

https://storyboard.openstack.org/#!/worklist/195

The process is as follows:

- Pick a call or calls to migrate
- Make a patch changing the unittests to use requests-mock instead of
mocking the client object (this changes the test to validate the HTTP
calls we're making, while still exercising the currently
known-to-be-working client calls)
- Make a patch that replaces the submit_task call in
shade/openstackclient.py (and removes the Task class from
shade/_tasks.py) with a rest call using the appropriate
self._{servicename}_client - this should not change any unit tests -
that shows that the new call does the same things as the old call.

Once you get the hang of it, it's fun - and it's a fun way to learn a
ton about OpenStack's REST APIs.
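
For the curious, the test side of such a change ends up looking roughly
like this (URL and payload invented purely for illustration)::

    import requests
    import requests_mock

    def test_list_servers_makes_expected_call():
        # Assert on the HTTP request itself rather than on a mocked
        # python-*client object.
        with requests_mock.Mocker() as m:
            m.get('https://compute.example.com/v2.1/servers',
                  json={'servers': [{'id': '123', 'name': 'test'}]})
            resp = requests.get('https://compute.example.com/v2.1/servers')
            assert resp.json()['servers'][0]['name'] == 'test'
            assert m.called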

After that's done, we have some deeper tasks that need to be done related to
caching and client-side rate limiting (we support these already, but we
need to support them the same way everywhere) - and we have a whole
multi-cloud/multi-threaded facade class to write that should be a ton of
fun - but we need to finish restification before we do a ton of new things.

If you want to start hacking with us - I recommend coming in and saying
hi before you dive in to huge restification patches - and also start
small - do one call and figure out the flow - going off and doing
multi-day patches often leads to pain and suffering.

Anyway - I hope some of you think interop client libraries are fun like
I do - and would love to have you play with us!

Thanks!
Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [architecture] Arch-WG, we hardly knew ye..

2017-04-07 Thread Joshua Harlow

Sad to see it go, but I understand the reasoning about why.

Back to the coal mines :-P

-Josh

Clint Byrum wrote:

I'm going to be blunt. I'm folding the Architecture Working Group
immediately following our meeting today at 2000 UTC. We'll be using the
time to discuss continuity of the base-services proposal, and any other
draw-down necessary. After that our meetings will cease.

I had high hopes for the arch-wg, with so many joining us to discuss
things in Atlanta. But ultimately, we remain a very small group with
very limited resources, and so I don't think it's the best use of our
time to continue chasing the arch-wg.

Thanks everyone for your contributions. See you in the trenches.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ARA - Ansible Run Analysis: Would you like to help ?

2017-03-22 Thread Joshua Harlow

Sounds neat,

So this would be similar to what tower or semaphore also have (I would 
assume they have something very much like ARA internally), but instead of 
providing the whole start/stop/inventory workflow this just provides the 
viewing component?


David Moreau Simard wrote:

Hi openstack-dev,

There's this project I'm passionate about that I want to tell you about: ARA [1].
So, what's ARA ?

ARA is an Ansible callback plugin that you can set up anywhere you run
Ansible today.
The next time you run an ansible-playbook command, it'll automatically
record and organize all the data and provide an intuitive interface
for you to browse the playbook results.

In practice, you can find a video demonstration of what the user
interface looks like here [2].

ARA doesn't require you to change your existing workflows, it doesn't
require you to re-write your playbooks.
It's offline, self-contained, standalone and decentralized by default.
You can run it on your laptop for a single playbook or run it across
thousands of runs, recording millions of tasks in a centralized
database.
You can read more about the project's core values and philosophies in
the documented manifesto [3].

ARA is already used by many different projects that leverage Ansible
to fulfill their needs, for example:
- OpenShift-Ansible
- OpenStack-Ansible
- Kolla-Ansible
- TripleO-Quickstart
- Browbeat
- devstack-gate

ARA's also garnered quite a bit of interest outside the OpenStack
community and there is already a healthy amount of users hanging out
in IRC on #ara.

So, it looks like the project is going well. Why am I asking for help ?

ARA has been growing in popularity, that's definitely something I am
very happy about.
However, this also means that there are more users, more feedback,
more questions, more bugs, more feature requests, more use cases and
unfortunately, ARA doesn't happen to be my full time job.
ARA is a tool that I created to make my job easier !

Also, as much as I hate to admit it, I am by no means a professional
python developer -- even less so in frontend (html/css/js).
Being honest, there are things that we should be doing in the project
that I don't have the time or the skills to accomplish.

Examples of what I would need help with, aside from what's formally on
StoryBoard [4]:
- Help the community (answer questions, triage bugs, etc)
- Flask experts (ARA is ultimately a flask application)
- Better separation of components (decouple things properly into a
server/client/api interface)
- Full python3 compatibility, test coverage and gating
- Improve/optimize SQL models/performance

Contributing to ARA in terms of code is no different than any other
OpenStack project but I've documented the process if you are not
familiar with it [5].
ARA has good unit and integration test coverage and I love to think
it's not a project that is hard to develop for.

If you feel the project is interesting and would like to get involved,
I'd love to welcome you on board.

Let's chat.

[1]: https://github.com/openstack/ara
[2]: https://www.youtube.com/watch?v=aQiN5wBXZ4g
[3]: http://ara.readthedocs.io/en/latest/manifesto.html
[4]: https://storyboard.openstack.org/#!/project/843
[5]: http://ara.readthedocs.io/en/latest/contributing.html

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][barbican][castellan] Proposal to rename Castellan to oslo.keymanager

2017-03-16 Thread Joshua Harlow
I'd be fine with it also, not sure it will change much, but meh, worth a 
shot. We are all happy loving people after all, so might as well try to 
help others when we can :-P


-Josh

Davanum Srinivas wrote:

+1 from me to bring castellan under Oslo governance with folks from
both oslo and Barbican as reviewers without a project rename. Let's
see if that helps get more adoption of castellan

Thanks,
Dims

On Thu, Mar 16, 2017 at 12:25 PM, Farr, Kaitlin M.
  wrote:

This thread has generated quite the discussion, so I will try to
address a few points in this email, echoing a lot of what Dave said.

Clint originally explained what we are trying to solve very well. The hope was
that the rename would emphasize that Castellan is just a basic
interface that supports operations common between key managers
(the existing Barbican back end and other back ends that may exist
in the future), much like oslo.db supports the common operations
between PostgreSQL and MySQL. The thought was that renaming to have
oslo part of the name would help reinforce that it's just an interface,
rather than a standalone key manager. Right now, the only Castellan
back end that would work in DevStack is Barbican. There has been talk
in the past for creating other Castellan back ends (Vault or Tang), but
no one has committed to writing the code for those yet.

The intended proposal was to rename the project, maintain the current
review team (which is only a handful of Barbican people), and bring on
a few Oslo folks, if any were available and interested, to give advice
about (and +2s for) OpenStack library best practices. However, perhaps
pulling it under oslo's umbrella without a rename is blessing it enough.

In response to Julien's proposal to make Castellan "the way you can do
key management in Python" -- it would be great if Castellan were that
abstract, but in practice it is pretty OpenStack-specific. Currently,
the Barbican team is great at working on key management projects
(including both Barbican and Castellan), but a lot of our focus now is
how we can maintain and grow integration with the rest of the OpenStack
projects, for which having the name and expertise of oslo would be a
great help.

Thanks,

Kaitlin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-14 Thread Joshua Harlow

Monty Taylor wrote:

On 03/14/2017 06:04 PM, Davanum Srinivas wrote:

Team,

So one more thing popped up again on IRC:
https://etherpad.openstack.org/p/oslo.config_etcd_backend

What do you think? interested in this work?

Thanks,
Dims

PS: Between this thread and the other one about Tooz/DLM and
os-lively, we can probably make a good case to add etcd as a base
always-on service.


As I mentioned in the other thread, there was specific and strong
anti-etcd sentiment in Tokyo which is why we decided to use an
abstraction. I continue to be in favor of us having one known service in
this space, but I do think that it's important to revisit that decision
fully and in context of the concerns that were raised when we tried to
pick one last time.


I'm in agreement with this.

I don't mind tooz either (it's good at what it is for) since I took 
part in creating it... Given that, I can't help but wonder how nice it 
would be to pick one (etcd, zookeeper, consul?) and just do nice things 
with it (perhaps, you know, even work with the etcd or zookeeper or consul 
developers [depending on which one we pick] on features and bug fixes 
and such).




It's worth noting that there is nothing particularly etcd-ish about
storing config that couldn't also be done with zk and thus just be an
additional api call or two added to Tooz with etcd and zk drivers for it.


Ya, to me zookeeper and etcd look pretty much the same now-a-days.

Which I guess is why https://github.com/coreos/zetcd ('A ZooKeeper 
"personality" for etcd) (although I'm not sure I'd want to run that, ha) 
exists as a thing.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-14 Thread Joshua Harlow
So just fyi, this has been talked about before (but prob in context of 
zookeeper or various other pluggable config backends).


Some links:

- https://review.openstack.org/#/c/243114/
- https://review.openstack.org/#/c/243182/
- https://blueprints.launchpad.net/oslo.config/+spec/oslo-config-db
- https://review.openstack.org/#/c/130047/

I think the general questions that seem to reappear are around the 
following:


* How does reloading work (does it)?

* What's the operational experience (editing an ini file is about the 
lowest bar we can possibly get to, for better and/or worse).


* Does this need to be a new oslo.config backend or is it better suited 
by something like the following (external programs loop)::


    etcd_client = make_etcd_client(args)
    while True:
        has_changed = etcd_client.get_new_config("/blahblah")  # or use a watch
        if has_changed:
            fetch_and_write_ini_file(etcd_client)
            trigger_reload()
        time.sleep(args.wait)

* Is an external loop better (maybe, maybe not?)

Pretty sure there are some etherpad discussions around this also somewhere.

Clint Byrum wrote:

Excerpts from Davanum Srinivas's message of 2017-03-14 13:04:37 -0400:

Team,

So one more thing popped up again on IRC:
https://etherpad.openstack.org/p/oslo.config_etcd_backend

What do you think? interested in this work?

Thanks,
Dims

PS: Between this thread and the other one about Tooz/DLM and
os-lively, we can probably make a good case to add etcd as a base
always-on service.



This is a cool idea, and I think we should do it.

A few loose ends I'd like to see in a spec:

* Security Security Security. (Hoping if I say it 3 times a real
   security person will appear and ask the hard questions).
* Explain clearly how operators would inspect, edit, and diff their
   configs.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-14 Thread Joshua Harlow

Jay Pipes wrote:

On 03/14/2017 02:50 PM, Julien Danjou wrote:

On Tue, Mar 14 2017, Jay Pipes wrote:


Not tooz, because I'm not interested in a DLM nor leader election
library
(that's what the underlying etcd3 cluster handles for me), only a
fast service
liveness/healthcheck system, but it shows usage of etcd3 and Google
Protocol
Buffers implementing a simple API for liveness checking and host
maintenance
reporting.


Cool cool. So that's the same feature that we implemented in tooz 3
years ago. It's called "group membership". You create a group, make
nodes join it, and you know who's dead/alive and get notified when their
status change.


The point of os-lively is not to provide a thin API over ZooKeeper's
group membership interface. The point of os-lively is to remove the need
to have a database (RDBMS) record of a service in Nova.

tooz simply abstracts a group membership API across a number of drivers.
I don't need that. I need a way to maintain a service record (with
maintenance period information, region, and an evolvable data record
format) and query those service records in an RDBMS-like manner but
without the RDBMS being involved.


My plan is to push some proof-of-concept patches that replace Nova's
servicegroup API with os-lively and eliminate Nova's use of an RDBMS for
service liveness checking, which should dramatically reduce the
amount of both
DB traffic as well as conductor/MQ service update traffic.


Interesting. Joshua and Vilob tried to push usage of tooz group
membership a couple of years ago, but it got nowhere. Well, no, they got
2 specs written IIRC:

https://specs.openstack.org/openstack/nova-specs/specs/liberty/approved/service-group-using-tooz.html


But then it died for whatever reasons on Nova side.


It died because it didn't actually solve a problem.


Hmmm, idk about that, more likely other things were involved, but point 
taken (and not meant personally).




The problem is that even if we incorporate tooz, we would still need to
have a service table in the RDBMS and continue to query it over and over
again in the scheduler and API nodes.

I want all service information in the same place, and I don't want to
use an RDBMS for that information. etcd3 provides an ideal place to
store service record information. Google Protocol Buffers is an ideal
data format for evolvable versioned objects. os-lively presents an API
that solves the problem I want to solve in Nova. tooz didn't.


Def looks like you are doing some custom service indexes and such in etcd, 
so ya, the default in tooz may not fit that kind of specialized model 
(though I can't say such a model would be unique to nova).


https://gist.github.com/harlowja/57394357e81703a595a15d6dd7c774eb was 
something I threw together, tooz may not be a perfect match, but still 
seems like it can evolve to store something like your indexes @ 
https://github.com/jaypipes/os-lively/blob/master/os_lively/service.py#L524-L542 
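
For reference, the stock tooz group-membership flavor that keeps getting
mentioned looks roughly like this (using the in-memory zake driver just
to show the API; a real deployment would point the URL at a
zookeeper/etcd/memcached backend and call heartbeat() periodically)::

    from tooz import coordination

    coordinator = coordination.get_coordinator('zake://', b'node-1')
    coordinator.start()

    coordinator.create_group(b'cinder-volume').get()
    coordinator.join_group(b'cinder-volume').get()

    # Who is alive right now?
    print(coordinator.get_members(b'cinder-volume').get())

    coordinator.leave_group(b'cinder-volume').get()
    coordinator.stop()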





Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] What do we want to be when we grow up?

2017-03-13 Thread Joshua Harlow

Thierry Carrez wrote:

Joshua Harlow wrote:

[...]
* Be opinionated; let's actually pick *specific* technologies based on
well thought out decisions about what we want out of those technologies
and integrate them deeply (and if we make a bad decision, that's ok, we
are all grown ups and we'll deal with it). IMHO it hasn't turned out
well trying to have drivers for everything and everyone so let's umm
stop doing that.


About "being all grown-ups and dealing with it", the problem is that
it's mostly an externality: the choice is done by developers and the
cost of handling the bad decision is carried by operators. Externalities
make for bad decisions.


Fair point, so how do we make it not an externality (I guess this is 
where the core services arch-wg thread comes in?)? It all reminds me of 
the gimp and how its UI was really bad to use for years. Sometimes 
developers don't make the best decisions (and rightly so, I will admit I 
sometimes don't either).




I agree that having drivers for everything is nonsense. The model we
have started to promote (around base services) is an expand/contract
model: start by expanding support to a couple viable options, and then
once operators / the market decides on one winner, contract to only
supporting that winner, and start using the specific features of that
technology.


Is it possible to avoid the expand/contract? I get that idea, but it 
seems awfully slow and drawn out... I'd almost rather pick a good enough 
solution, and devote a lot of resources into making it the best solution 
instead of choosing 2 solutions (neither very good) and then later 
picking 1 (by the time that happens, someone that picked solution #1 
would be quite a bit farther ahead of you).




The benefit is that the final choice ends up being made by the
operators. Yes, it means that at the start you will have to do with the
lowest common denominator. But frankly at this stage it would be awesome
to just have the LCD of DLMs, rather than continue disagreeing on
Zookeeper vs. etcd and not even having that lowest common denominator.


On a side note, whenever I hear 'operators' or 'developers' it makes me 
sad... Why do we continue to think there are two groups here? I'd almost 
like there to be some kind of rotation among *all* openstack folks where, 
say, individuals in the community rotate between companies to get a feel 
for what it means to operate & develop this beast.


Perhaps some kind of internship-like thing (except call it something 
else); I'd certainly like to break down these walls that continue to be 
mentioned when I don't really think they need to exist...





* Leads others; we are one of the older cloud foundations (I think?) so
we should be leading others such as the CNCF and such, so we must be
heavily outreaching to these others and helping them learn from our
mistakes


We can always do more, but this is already happening. I was asked for
and provided early advice to the CNCF while they were setting up their
technical governance structure. Other foundations reached out to us to
discuss and adopt our vulnerability management models. There are a lot
more examples.


Is it theoretically possible that we just merge with some of these 
foundations? Aren't we better as a bundle of twigs instead of our own 
stick?


"A single twig breaks, but the bundle of twigs is strong." - Tecumseh

Why aren't we leading the formation of that bundle?




[...]
* Full control of infrastructure (mostly discard it); I don't think we
necessarily need to have full control of infrastructure anymore. I'd
rather target something that builds on the layers of others at this
point and offers value there.


+1



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] What do we want to be when we grow up?

2017-03-13 Thread Joshua Harlow

Monty Taylor wrote:

On 03/10/2017 11:39 PM, Joshua Harlow wrote:

* Interoperability - kept as is (though I can't really say how many
public clouds there are anymore to interoperate with).


There are plenty. As Dan Smith will tell you, I'm fond of telling people
just how  many OpenStack Public Cloud accounts I have.


How many do you have, and what are their usernames and passwords?

Please paste secrets.yaml and clouds.yaml, mmmk

On another topic, can we integrate with something other than a plain-text 
secrets file in shade (or os-client-config)? I know keyring 
(https://pypi.python.org/pypi/keyring) was tried once; perhaps we should 
try it again?
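
Rough sketch of the sort of thing I mean (the clouds.yaml handling below
is hypothetical glue, just to show where keyring would slot in; it is not
an existing os-client-config feature)::

    import keyring
    import yaml

    # Keep everything except the password in clouds.yaml, then pull the
    # secret out of the OS keyring at load time, keyed by cloud name.
    with open('clouds.yaml') as f:
        clouds = yaml.safe_load(f)['clouds']

    for name, cloud in clouds.items():
        auth = cloud.setdefault('auth', {})
        if 'password' not in auth:
            auth['password'] = keyring.get_password('openstack', name)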




This:

https://docs.openstack.org/developer/os-client-config/vendor-support.html

Is a non-comprehensive list. I continue to find new ones I didn't know
about. I also haven't yet gotten an account on Teuto.net to verify it.

There is also this:

https://www.openstack.org/marketplace/public-clouds/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-13 Thread Joshua Harlow



Clint Byrum wrote:

Excerpts from Fox, Kevin M's message of 2017-03-14 00:09:55 +:

With my operator hat on, I would like to use the etcd backend, as I'm already 
paying the cost of maintaining etcd clusters as part of Kubernetes. Adding 
Zookeeper is a lot more work.



It would probably be a good idea to put etcd in as a plugin and make
sure it is tested in the gate. IIRC nobody has requested the one thing
ZK can do that none of the others can (fair locking), but it's likely
there are bugs in both drivers since IIRC neither are enabled in any
gate.


Pretty sure etcd can do fair locking now :-P

Or reading 
https://coreos.com/etcd/docs/latest/v2/api.html#atomically-creating-in-order-keys 
it seems like we should be able to.


Honestly the APIs of both etcd and zookeeper seem pretty equivalent; 
etcd is a little more k/v oriented (and has key ttls, to a degree) but 
the rest seems nearly identical (which isn't a bad thing).
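
To be fair, from a consumer point of view tooz already papers over most of 
that difference; a rough sketch of what I mean (the backend URLs are 
placeholders, check the tooz docs for the exact schemes and options):

    from tooz import coordination

    # Same locking code either way; swapping the URL is what moves you
    # between DLM backends (e.g. 'etcd://127.0.0.1:2379' vs
    # 'zookeeper://127.0.0.1:2181').
    coord = coordination.get_coordinator('zookeeper://127.0.0.1:2181', b'node-1')
    coord.start()

    lock = coord.get_lock(b'resize-compute-42')
    with lock:
        # Critical section: only one member across the cluster gets here
        # at a time.
        pass

    coord.stop()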


Etcd, I think, is also switching to gRPC sometime in the future (afaik); 
that feature is in alpha/beta/experimental state right now.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What do we want to be when we grow up?

2017-03-10 Thread Joshua Harlow

This thread is an extraction of:

http://lists.openstack.org/pipermail/openstack-dev/2017-March/thread.html#113362

And is a start of some ideas for what we as a group want our vision to 
be. I'm going to start by taking what Zane already put up (see above thread) 
and mutating it.


* Infinite scaling (kept the same as before); it would be nice to target 
this, although it may be a bit abstract with the definition I saw, but meh, 
that's ok, gotta start somewhere.


* Granularity of allocation (kept as is).

* Be opinionated; let's actually pick *specific* technologies based on 
well thought out decisions about what we want out of those technologies 
and integrate them deeply (and if we make a bad decision, that's ok, we 
are all grown ups and we'll deal with it). IMHO it hasn't turned out 
well trying to have drivers for everything and everyone so let's umm 
stop doing that.


* Leads others; we are one of the older cloud foundations (I think?), so 
we should be leading others such as the CNCF, and we must be actively 
reaching out to them and helping them learn from our mistakes (in all 
reality I don't quite know why we need an openstack foundation as an 
entity in the first place, instead of, say, just joining the linux 
foundation and playing nicely with others there).


* Granularity of allocation (doesn't feel like this should need 
mentioning anymore, since I sort of feel it's implicit nowadays, but fair 
enough, might as well keep it for the sake of remembering it).


* Full control of infrastructure (mostly discard it); I don't think we 
necessarily need to have full control of infrastructure anymore. I'd 
rather target something that builds on the layers of others at this 
point and offers value there. If it is really needed provide a 
light-weight *opinionated* version of nova, cinder, neutron that the 
upper layers can use (perhaps this light-weight version is what becomes 
of the current IAAS projects as they exist).


* Hardware virtualization (seems mostly implicit nowadays)

* Built-in reliability (same as above, if we don't do this we should all 
look around for jobs elsewhere)


* Application control - (securely) (same as above)

* Integration - cloud services that effectively form part of the user's 
application can communicate amongst themselves, where appropriate, 
without the need for client-side glue (see also: Built-in reliability).


   - Ummm maybe, if this creates yet another ecosystem where only the 
things inside that ecosystem work with each other, then nope, I veto 
that; if it means the services created work with other services over 
standardized APIs (that are bigger than a single ecosystem) then I'm ok 
with that.


* Interoperability - kept as is (though I can't really say how many 
public clouds there are anymore to interoperate with).


* Self-healing - whatever services we write should heal and scale 
themselves; if an operator has to twiddle some settings or gets called up 
at night due to something busting itself, we failed.


* Self-degradation - whatever services we write should be able to 
degrade their functionality *automatically*, taking into account their 
surroundings (also related to self-healing).


* Heavily embrace the fact that a growing number of users (including 
myself) don't actually want to own any kind of server - an amazon lambda 
equivalent may be worth the energy to actually make a reality.


* Move beyond copying what others have already done (i.e. aws) and develop 
the equivalent of a cross-company research 'arm' that can utilize the 
smart people we have to actually develop leading-edge solutions (and be 
ok with those solutions failing, because they may).


* More (as I think of them I'll write them)

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][appcat][murano][app-catalog] The future of the App Catalog

2017-03-10 Thread Joshua Harlow

Clint Byrum wrote:

Excerpts from Thierry Carrez's message of 2017-03-10 16:48:02 +0100:

Christopher Aedo wrote:

On Thu, Mar 9, 2017 at 4:08 AM, Thierry Carrez  wrote:

Christopher Aedo wrote:

On Mon, Mar 6, 2017 at 3:26 AM, Thierry Carrez  wrote:

[...]
In parallel, Docker developed a pretty successful containerized
application marketplace (the Docker Hub), with hundreds of thousands of
regularly-updated apps. Keeping the App Catalog around (including its
thinly-wrapped Docker container Murano packages) make us look like we
are unsuccessfully trying to compete with that ecosystem, while
OpenStack is in fact completely complementary.

Without something like Murano "thinly wrapping" docker apps, how would
you propose current users of OpenStack clouds deploy docker apps?  Or
any other app for that matter?  It seems a little unfair to talk about
murano apps this way when no reasonable alternative exists for easily
deploying docker apps.  When I look back at the recent history of how
we've handled containers (nova-docker, magnum, kubernetes, etc) it
does not seem like we're making it easy for the folks who want to
deploy a container on their cloud...

I'd say there are two approaches: you can use the container-native
approach ("docker run" after provisioning some container-enabled host
using Nova or K8s cluster using Magnum), or you can use the
OpenStack-native approach (zun create nginx) and have it
auto-provisioned for you. Those projects have a narrower scope, and
fully co-opt the container ecosystem without making us appear as trying
to build our own competitive application packaging/delivery/marketplace
mechanism.

I just think that adding the Murano abstraction in the middle of it and
using an AppCatalog-provided Murano-powered generic Docker container
wrapper is introducing unnecessary options and complexity -- options
that are strategically hurting us when we talk to those adjacent
communities...

OK thank you for making it clearer, now I understand where you're
coming from.  I do agree with this sentiment.  I don't have any
experience with zun but it sounds like it's the least-cost way to
deploy a docker app for the environments where it's installed.

I think overall the app catalog was an interesting experiment, but I
don't think it makes sense to continue as-is.  Unless someone comes up
with a compelling new direction, I don't see much point in keeping it
running.  Especially since it sounds like Mirantis is on board (and
the connection to a murano ecosystem was the only thing I saw that
might be interesting).

Right -- it's also worth noting that I'm only talking about the App
Catalog here, not about Murano. Zun might be a bit too young for us to
place all our eggs in the same basket, and some others pointed to how
Murano is still viable alternative package for things that are more
complex than a set of containers. What I'm questioning at the moment is
the continued existence of a marketplace that did not catch fire as much
as we hoped -- an app marketplace with not enough apps just hurts more
than it helps imho.

In particular, I'm fine if (for example) the Docker-wrapper murano
package ends up being shipped as a standard/example package /in/ Murano,
and continues to exist there as a "reasonable alternative for easily
deploying docker apps" :)



While we were debating how to do everything inside our walls, Google
and Microsoft became viable public cloud vendors along side the other
players. We now live in a true multi-cloud world (not just a theoretical
one).


Yes, please: could we as a community stop thinking that everyone 
and everything is inside the openstack wall, and that every company that 
deploys or uses openstack only uses things inside that wall (because 
they don't)? Companies IMHO don't care anymore (if they ever did) whether a 
project is inside the openstack wall or not; they care about it being useful 
and working and maintainable/sustainable.




And what I see when I look outside our walls is not people trying to make
the initial steps identical or easy. For that there's PaaS. Instead, for
those that want the full control of their computers that IaaS brings,
there's a focus on making it simple, and growing a process that works
the same for the parts that are the same, and differently for the parts
that are different.

I see things like Terraform embracing the differences in clouds, not
hiding behind lowest common denominators. So if you want a Kubernetes on
GCE and one on OpenStack, you'd write two different Terraform plans
that give you the common set of servers you expect, get you config
management and kubernetes setup and hooked into the infrastructure
however it needs to be, and then get out of your way.

So, while I think it's cool to make sure we are supporting our users
when all they want is us, it might make more sense to do that outside
our walls, where we can meet the rest of the cloud world too.


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-10 Thread Joshua Harlow

Clint Byrum wrote:

Excerpts from Joshua Harlow's message of 2017-03-09 21:53:58 -0800:

Renat Akhmerov wrote:

On 10 Mar 2017, at 06:02, Zane Bitter wrote:

On 08/03/17 11:23, David Moreau Simard wrote:

The App Catalog, to me, sounds sort of like a weird message that
OpenStack somehow requires applications to be
packaged/installed/deployed differently.
If anything, perhaps we should spend more effort on advertising that
OpenStack provides bare metal or virtual compute resources and that
apps will work just like any other places.

Look, it's true that legacy apps from the 90s will run on any VM you
can give them. But the rest of the world has spent the last 15 years
moving on from that. Applications of the future, and increasingly the
present, span multiple VMs/containers, make use of services provided
by the cloud, and interact with their own infrastructure. And users
absolutely will need ways of packaging and deploying them that work
with the underlying infrastructure. Even those apps from the 90s
should be taking advantage of things like e.g. Neutron security
groups, configuration of which is and will always be out of scope for
Docker Hub images.

So no, we should NOT spend more effort on advertising that we aim to
become to cloud what Subversion is to version control. We've done far
too much of that already IMHO.

100% agree with that.

And this whole discussion is taking me to the question: is there really
any officially accepted strategy for OpenStack for 1, 3, 5 years?

I can propose what I would like for a strategy (it's not more VMs and
more neutron security groups...), though if it involves (more) design by
committee, count me out.

I honestly believe we have to do the equivalent of a technology leapfrog
if we actually want to be relevant; but maybe I'm too eager...



Open source isn't really famous for technology leapfrogging.


Time to get famous.

I hate accepting the status quo just because it hasn't been famous (or 
easy, or worked out, or ...) before.




And before you say "but Docker.." remember that Solaris had zones 13
years ago.

What a community like ours is good at doing is gathering all the
exciting industry leading bleeding edge chaos into a boring commodity
platform. What Zane is saying (and I agree with) is let's make sure we see
the whole cloud forest and not just focus on the VM trees in front of us.

I'm curious what you (Josh) or Zane would change too.
Containers/apps/kubes/etc. have to run on computers with storage and
networks. OpenStack provides a pretty rich set of features for giving
users computers with storage on networks, and operators a way to manage
those. So I fail to see how that is svn to "cloud native"'s git. It
seems more foundational and complementary.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-10 Thread Joshua Harlow

gordon chung wrote:


On 10/03/17 12:53 AM, Joshua Harlow wrote:

I can propose what I would like for a strategy (it's not more VMs and
more neutron security groups...), though if it involves (more) design by
committee, count me out.

I honestly believe we have to do the equivalent of a technology leapfrog
if we actually want to be relevant; but maybe I'm too eager...


seems like a manifesto waiting to be penned. probably best in a separate
thread... not sure with what tag if any. regardless, i think it'd be
good to see what others' long term vision is... maybe it'll help others
consider what their own expectations are.


Ya, it sort of is, ha. I will start another thread; and put my thinking 
cap on to make sure it's a tremendously huge manifesto, lol.




cheers,



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-09 Thread Joshua Harlow

Renat Akhmerov wrote:



On 10 Mar 2017, at 06:02, Zane Bitter wrote:

On 08/03/17 11:23, David Moreau Simard wrote:

The App Catalog, to me, sounds sort of like a weird message that
OpenStack somehow requires applications to be
packaged/installed/deployed differently.
If anything, perhaps we should spend more effort on advertising that
OpenStack provides bare metal or virtual compute resources and that
apps will work just like any other places.


Look, it's true that legacy apps from the 90s will run on any VM you
can give them. But the rest of the world has spent the last 15 years
moving on from that. Applications of the future, and increasingly the
present, span multiple VMs/containers, make use of services provided
by the cloud, and interact with their own infrastructure. And users
absolutely will need ways of packaging and deploying them that work
with the underlying infrastructure. Even those apps from the 90s
should be taking advantage of things like e.g. Neutron security
groups, configuration of which is and will always be out of scope for
Docker Hub images.

So no, we should NOT spend more effort on advertising that we aim to
become to cloud what Subversion is to version control. We've done far
too much of that already IMHO.


100% agree with that.

And this whole discussion is taking me to the question: is there really
any officially accepted strategy for OpenStack for 1, 3, 5 years?


I can propose what I would like for a strategy (it's not more VMs and 
more neutron security groups...), though if it involves (more) design by 
committee, count me out.


I honestly believe we have to do the equivalent of a technology leapfrog 
if we actually want to be relevant; but maybe I'm too eager...


Is there any ultimate community goal we’re moving to regardless of
underlying technologies (containers, virtualization etc.)? I know we’re
now considering various community goals like transition to Python 3.5
etc. but these goals don’t tell anything about our future as an IT
ecosystem from user perspective. I may assume that I’m just not aware of
it. I’d be glad if it was true. I’m eager to know the answers for these
questions. Overall, to me it feels like every company in the community
just tries to pursue its own short-term (in the best case mid-term)
goals without really caring about long-term common goals. So if we say
OpenStack is a car then it seems like the wheels of this car are moving
in different directions. Again, I’d be glad if it wasn’t true. So maybe
some governance needed around setting and achieving ultimate goals of
OpenStack? Or if they already exist we need to better explain them and
advertise publicly? That in turn IMO could attract more businesses and
contributors.

Renat Akhmerov
@Nokia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Some information about the Forum at the Summit in Boston

2017-03-09 Thread Joshua Harlow

Ben Swartzlander wrote:

I might be the only one who has negative feelings about the PTG/Forum
split, but I suspect the foundation is suppressing negative feedback
from myself and other developers so I'll express my feelings here. If
there's anyone else who feels like me please reply, otherwise I'll
assume I'm just an outlier.


You aren't alone here.



The new structure is asking developers to travel 4 times a year
(minimum) and makes it impossible to participate in 2 or more vertical
projects.

I know that most of the people working on Manila have pretty limited
travel budgets, and meeting 4 times a year basically guarantees that a
good number of people will be remote at any given meeting. From my
perspective if I'm going to be meeting with people on the phone I'd
rather be on the phone myself and have everyone on equal footing.


The funny part is that if you don't go to the PTG then the following is 
what you get:


'All ATCs who contributed to the Ocata release but were unable to
attend the PTG will receive a $300 discount on the current ticket
price for the Boston Summit. '

So if you don't go to the PTGs (or attend virtually) then you get 
penalized on the summits as well, which is like, ummm, ya super awesome, 
lol.


And don't get me started as to why the summits are so expensive; 
especially knowing how much it costs to join the openstack foundation 
(https://www.openstack.org/join/).



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Saga of service discovery (is it needed?)

2017-03-02 Thread Joshua Harlow

Julien Danjou wrote:

On Thu, Mar 02 2017, Joshua Harlow wrote:


So before I start to go much farther on this (and start to dive into what
people are doing and why the various implementations exist and proposing a
cross-project solution, tooz, or etcd, or zookeeper or other...) I wanted to
get general feedback (especially from the projects that have these kinds of
implementations) if this is a worthwhile path to even try to go down.


IIUC we use such a mechanism in a few Telemetry projects to share
information between agents.

For example, Gnocchi metricd workers talk together and provide each
others their numbers of CPU so they can distribute fairly the number of
jobs they should take care for. They use that same mechanism to know
if/how every agent is alive.

We've been using that technology for 3 years now, and we named it
"tooz". You may have heard of it. ;-)

Cheers,


What's that, ha,

But ya, one of the outcomes is tooz; but I'm not even sure there is 
information being shared about the duplication that is happening 
(projects seem to silo themselves with regards to stuff like this).


So I guess one of the first goals would be to undo that siloing, though 
if nobody from the projects cares about doing things in the same way 
using the same toolkit, then I'd say we are more screwed than I thought, 
lol.
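
(For reference, the fair-distribution trick Julien describes doesn't need 
anything exotic; with plain tooz group membership it looks roughly like the 
following sketch - the group/member names and hashing scheme here are 
invented for illustration, not Gnocchi's actual code:)

    import hashlib

    from tooz import coordination

    coord = coordination.get_coordinator('zookeeper://127.0.0.1:2181',
                                         b'metricd-1')
    coord.start()
    try:
        coord.create_group(b'metricd-workers').get()
    except coordination.GroupAlreadyExist:
        pass
    coord.join_group(b'metricd-workers').get()

    # Every worker sorts the same member list, so they all agree on who
    # owns which job without talking to each other directly (handling
    # membership churn would mean re-running this, omitted here).
    members = sorted(coord.get_members(b'metricd-workers').get())
    my_index = members.index(b'metricd-1')

    def is_mine(job_id):
        # Stable hash of the job id -> consistent assignment everywhere.
        digest = hashlib.md5(job_id.encode()).hexdigest()
        return int(digest, 16) % len(members) == my_index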


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][infra] does someone cares about Jenkins? I stopped.

2017-03-02 Thread Joshua Harlow

Marcin Juszkiewicz wrote:

I am working on some improvements for Kolla. Part of that work is
sending patches for review.

Once patch is set for review.openstack.org there is a set of Jenkins
jobs started to make sure that patch does not break already working
code. And this is good thing.

How it is done is not good ;(

1. Kolla output is nightmare to debug.

There is --logs-dir option to provide separate logs for each image build
but it is not used. IMHO it should be as digging through such logs is
easier.



I too find the kolla output a bit painful, and I'm willing to help 
improve it; what do you think would be a better approach (that we can try 
to get to)?


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Saga of service discovery (is it needed?)

2017-03-02 Thread Joshua Harlow
So this is the general start of a large discussion that is similar to the 
other one I started[1], and this time around I wanted to start this on 
the mailing list instead of taking a spec-first approach.


The general question is around something I keep noticing pop up 
in various projects, and a worry about what we as a community are doing 
with regard to diversity of implementation (and what we can do to make 
this better).


What seems to be being recreated (in various forms) is that multiple 
projects seem to have a need to store ephemeral data about some sort of 
agent (ie nova and its compute node agents, neutron and its agents, 
octavia and its amphora agents...) in a place that can also track the 
liveness (whether that component is up/down/dead/alive) of those agents 
(in general we can replace agent with service and call this service 
discovery).


It appears (correct me if I am wrong) the amount of ephemeral metadata 
associated with this agent typically seems to be somewhat minimal, and 
any non-ephemeral agent data should be stored somewhere else (ie a 
database).
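
To make that concrete, the kind of thing I'm picturing is roughly the sketch 
below, using tooz group membership (the group/agent names, metadata and 
backend URL are all made up for illustration):

    import json

    from tooz import coordination

    # Each agent joins a well-known group and advertises a small blob of
    # ephemeral metadata as its capabilities; liveness comes from heartbeats.
    coord = coordination.get_coordinator('zookeeper://127.0.0.1:2181',
                                         b'compute-host-1')
    coord.start()

    caps = json.dumps({'zone': 'az1', 'version': '15.0.0'}).encode()
    try:
        coord.create_group(b'nova-compute').get()
    except coordination.GroupAlreadyExist:
        pass
    coord.join_group(b'nova-compute', capabilities=caps).get()

    # Anyone (scheduler, operator tooling, ...) can now ask who is alive.
    alive_agents = coord.get_members(b'nova-compute').get()

    # The agent keeps itself "alive" by heartbeating periodically.
    coord.heartbeat()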


I think it'd be great from a developer point of view to start to address 
this, and by doing so we can make operations of these various projects 
and their agents that much easier (it is a real PITA when each project 
does this differently, it makes cloud maintenance procedures that much 
more painful, because typically while doing maintenance you need to 
ensure these agents are turned off, having X different ways to do this 
with Y different side-effects makes this ummm, not enjoyable).


So before I start to go much farther on this (and start to dive into 
what people are doing and why the various implementations exist and 
proposing a cross-project solution, tooz, or etcd, or zookeeper or 
other...) I wanted to get general feedback (especially from the projects 
that have these kinds of implementations) if this is a worthwhile path 
to even try to go down.


IMHO it is, though it may once again require the community as a group to 
notice things are being done differently that are really the same and 
people caring enough to actually want to resolve this situation (in 
general I really would like the architecture working group to be able to 
proactively resolve these issues before they get created; 
retroactively trying to resolve these differences is not a 
healthy/sustainable thing we should be doing).


Thoughts?

[1] https://lwn.net/Articles/662140/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-27 Thread Joshua Harlow

Not afaik, first time I've heard about this type of device/data.

Tim Bell wrote:

Is there cloud-init support for this mode or do we still need to mount as a 
config drive?

Tim

On 20.02.17, 17:50, "Jeremy Stanley"  wrote:

 On 2017-02-20 15:46:43 + (+), Daniel P. Berrange wrote:
 >  The data is exposed either as a block device or as a character device
 >  in Linux - which one depends on how the NVDIMM is configured. Once
 >  opening the right device you can simply mmap() the FD and read the
 >  data. So exposing it as a file under sysfs doesn't really buy you
 >  anything better.

 Oh! Fair enough, if you can already access it as a character device
 then I agree that solves the use cases I was considering.
 --
 Jeremy Stanley

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-16 Thread Joshua Harlow

Alex Schultz wrote:

On Thu, Feb 16, 2017 at 9:12 AM, Ed Leafe  wrote:

On Feb 16, 2017, at 10:07 AM, Doug Hellmann  wrote:


When we signed off on the Big Tent changes we said competition
between projects was desirable, and that deployers and contributors
would make choices based on the work being done in those competing
projects. Basically, the market would decide on the "optimal"
solution. It's a hard message to hear, but that seems to be what
is happening.

This.

We got much better at adding new things to OpenStack. We need to get better at 
letting go of old things.

-- Ed Leafe





I agree that the market will dictate what continues to survive, but if
you're not careful you may be speeding up the decline as the end user
(deployer/operator/cloud consumer) will switch completely to something
else because it becomes too difficult to continue to consume via what
used to be there and no longer is.  I thought the whole point was to
not have vendor lock-in.  Honestly I think the focus is too much on
the development and not enough on the consumption of the development
output.  What is the point of all these features if no one can
actually consume them.



+1 to that.

I've been in the boat of development and consumption of it for my 
*whole* journey in openstack land and I can say the product as a whole 
seems 'underbaked' with regards to the way people consume the 
development output. It seems we have focused on how to do the dev. stuff 
nicely and a nice process there, but sort of forgotten about all that 
being quite useless if no one can consume it (without going through 
much pain or paying a vendor).


This has IMHO been a factor in why certain companies (and the 
people they support) are exiting openstack and just going elsewhere.


I personally don't believe fixing this is a matter of 'letting the market 
forces figure it out for us' (what a slow & horrible way to let this play 
out; I'd almost rather go pull my fingernails out). I do believe it will 
require making opinionated decisions, which we as a group have never been 
very good at.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-15 Thread Joshua Harlow

For the cookbooks, every core and non-core project that is supported has to
be tracked. In addition to that, each platform that is supported must be
tracked, for quirks and idiosyncrasies, because they always have them.

Then, there are the cross-project teams that do the packaging, as well as
the teams that do not necessarily ship releases that must be tracked, for
variances in testing methods, mirrors outside the scope of infra, external
dependencies, etc. It can be slightly overwhelming and overloading at times,
even to someone reasonably seasoned. Scale that process, for every ecosystem
in which one desires to exist, by an order of magnitude.

There’s definitely a general undercurrent to all of this, and it’s bigger
than any one person or team to solve. We definitely can’t “read the release
notes” for this.


Radical idea: have each project (not libraries) contain a dockerfile 
that builds the project into a deployable unit (or multiple dockerfiles 
for projects with multiple components), and then it becomes the project's 
responsibility to ensure that the right code is in that dockerfile to 
move from release to release (whether that be a piece of code that does 
a configuration migration or something else).


This is basically what kolla is doing (except kolla itself contains all 
the dockerfiles and deployment tooling as well), and though I won't 
comment on the kolla perspective, if each project managed its own 
dockerfiles that wouldn't seem like a bad thing... (it may have been 
proposed before).


Such a thing could move the responsibility (of at least the packaging 
components and dependencies) onto the projects themselves. I've been in 
the boat of trying to do all the packaging and tracking of variances, and I 
know it's some kind of hell; shifting the responsibility onto the 
projects themselves may be a better solution (or at least one people can 
discuss).



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [designate] Status of the project

2017-02-10 Thread Joshua Harlow

Fox, Kevin M wrote:

I'd say kube-dns and designate are very different technologies.

kube-dns is a service discovery mechanism for kubernetes intend to provide 
internal k8s resolution. The fact it uses dns to implement service discovery is 
kind of an implementation detail, not its primary purpose. There's no need for 
private dns management, scaling past the size of the k8s cluster itself, etc. A 
much easier problem to solve at the moment.

Designate really is a multitenant dns as a service implementation. While it can 
be used for service discovery, its not its primary purpose.

I see no reason they couldn't share some common pieces, but care should be 
given not to just say, lets throw out one for the other, as they really are 
different animals.



Arg, the idea wasn't meant to be that (abandon one for the other), but 
just to investigate the larger world and maybe we have to adapt our 
model of `multitenant dns as a service implementation` to be slightly 
different; so what..., if it means we get to keep contributors and grow 
a larger community (and partner with others and learn new things and 
adopt new strategies/designs and push the limits of tech and ...) by 
doing so then that's IMHO good.



Thanks,
Kevin

From: Jay Pipes [jaypi...@gmail.com]
Sent: Friday, February 10, 2017 9:50 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [designate] Status of the project

On 02/10/2017 12:21 PM, Joshua Harlow wrote:

Hayes, Graham wrote:

The HTML version of this is here:
http://graham.hayes.ie/posts/openstack-designate-where-we-are/

I have been asked a few times recently "What is the state of the
Designate
project?", "How is Designate getting on?", and by people who know what is
happening "What are you going to do about Designate?".

Needless to say, all of this is depressing to me, and the people that
I have
worked with for the last number of years to make Designate a truly
useful,
feature rich project.

*TL;DR;* for this - Designate is not in a sustainable place.

To start out - Designate has always been a small project. DNS does not
have
massive *cool* appeal - it's not shiny, pretty, or something you see on
the
front page of HackerNews (unless it breaks - then oh boy do people
become DNS
experts).


Thanks for posting this, I know it was not easy to write...

Knowing where this is at and the issues. It makes me wonder if it is
worthwhile to start thinking about how we can start to look at 'outside
the openstack' projects for DNS. I believe there are a few that are
similar enough to designate (though I don't know them well enough), for
example things like SkyDNS (or others; I believe there are a few).

Perhaps we need to start thinking outside the openstack 'box' in regards
to NIH syndrome and accept the fact that we as a community may not be
able to recreate the world successfully in all cases (the same could be
said about things like k8s and others).

If we got out of the mindset of openstack as a thing that must have tightly
integrated components (over all else) and started thinking about how we
can be much more loosely coupled (and even say integrating non-python,
non-openstack projects) would that be beneficial (I think it would)?


This is already basically what Designate *is today*.

http://docs.openstack.org/developer/designate/support-matrix.html

Just because something is written in Golang and uses etcd for storage
doesn't make it "better" or not NIH.

For the record, the equivalent to Designate in k8s land is Kube2Sky, the
real difference being that Designate has a whole lot more options when
it comes to the DNS drivers and Designate integrates with OpenStack
services like Keystone.

Also, there's more to cloud DNS services than service discovery, which
is what SkyDNS was written for.

best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [designate] Status of the project

2017-02-10 Thread Joshua Harlow

Jay Pipes wrote:

On 02/10/2017 12:21 PM, Joshua Harlow wrote:

Hayes, Graham wrote:

The HTML version of this is here:
http://graham.hayes.ie/posts/openstack-designate-where-we-are/

I have been asked a few times recently "What is the state of the
Designate
project?", "How is Designate getting on?", and by people who know
what is
happening "What are you going to do about Designate?".

Needless to say, all of this is depressing to me, and the people that
I have
worked with for the last number of years to make Designate a truly
useful,
feature rich project.

*TL;DR;* for this - Designate is not in a sustainable place.

To start out - Designate has always been a small project. DNS does not
have
massive *cool* appeal - it's not shiny, pretty, or something you see on
the
front page of HackerNews (unless it breaks - then oh boy do people
become DNS
experts).



Thanks for posting this, I know it was not easy to write...

Knowing where this is at and the issues. It makes me wonder if it is
worthwhile to start thinking about how we can start to look at 'outside
the openstack' projects for DNS. I believe there are a few that are
similar enough to designate (though I don't know them well enough), for
example things like SkyDNS (or others; I believe there are a few).

Perhaps we need to start thinking outside the openstack 'box' in regards
to NIH syndrome and accept the fact that we as a community may not be
able to recreate the world successfully in all cases (the same could be
said about things like k8s and others).

If we got out of the mindset of openstack as a thing that must have tightly
integrated components (over all else) and started thinking about how we
can be much more loosely coupled (and even say integrating non-python,
non-openstack projects) would that be beneficial (I think it would)?


This is already basically what Designate *is today*.

http://docs.openstack.org/developer/designate/support-matrix.html

Just because something is written in Golang and uses etcd for storage
doesn't make it "better" or not NIH.


Agreed; do those other projects (written in golang, using etcd, or whatever) 
have communities that are growing, and can we ensure better success (and 
health of our own community) by partnering with them? That was the main 
point (I don't really care what language they are written in or what 
storage backend they use).




For the record, the equivalent to Designate in k8s land is Kube2Sky, the
real difference being that Designate has a whole lot more options when
it comes to the DNS drivers and Designate integrates with OpenStack
services like Keystone.



That's cool, thanks; TIL.


Also, there's more to cloud DNS services than service discovery, which
is what SkyDNS was written for.


Sure, it was just an example.

The point was along the lines of: if a project in our community is 
struggling and a similar project outside of openstack (that is 
trying to do similar things) is not struggling, perhaps it's better to 
partner with that other project and enhance it (and then 
recommend said project as the next-generation of the ${whatever_project} 
that was struggling here).


Said evaluation is something that we would likely have to do over time 
as well (because, as this example shows, designate was a larger group once 
and is now smaller).




best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [designate] Status of the project

2017-02-10 Thread Joshua Harlow

Hayes, Graham wrote:

The HTML version of this is here:
http://graham.hayes.ie/posts/openstack-designate-where-we-are/

I have been asked a few times recently "What is the state of the Designate
project?", "How is Designate getting on?", and by people who know what is
happening "What are you going to do about Designate?".

Needless to say, all of this is depressing to me, and the people that I have
worked with for the last number of years to make Designate a truly useful,
feature rich project.

*TL;DR;* for this - Designate is not in a sustainable place.

To start out - Designate has always been a small project. DNS does not have
massive *cool* appeal - it's not shiny, pretty, or something you see on the
front page of HackerNews (unless it breaks - then oh boy do people
become DNS
experts).



Thanks for posting this, I know it was not easy to write...

Knowing where this is at and the issues. It makes me wonder if it is 
worthwhile to start thinking about how we can start to look at 'outside 
the openstack' projects for DNS. I believe there are a few that are 
similar enough to designate (though I don't know them well enough), for 
example things like SkyDNS (or others; I believe there are a few).


Perhaps we need to start thinking outside the openstack 'box' in regards 
to NIH syndrome and accept the fact that we as a community may not be 
able to recreate the world successfully in all cases (the same could be 
said about things like k8s and others).


If we got out of the mindset of openstack as a thing that must have tightly 
integrated components (over all else) and started thinking about how we 
can be much more loosely coupled (and even say integrating non-python, 
non-openstack projects) would that be beneficial (I think it would)?


-Josh



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

