Re: [openstack-dev] [gnocchi] Running tests

2017-05-23 Thread Julien Danjou
On Tue, May 23 2017, aalvarez wrote:

> Ok, so I started the tests using:
>
> tox -e py27-postgresql-file
>
> The suite starts running fine, but then I get a failing test:

Can you reproduce it each time?

That's weird, I don't think we ever saw that.

-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info




Re: [openstack-dev] [Nova] [Cells] Stupid question: Cells v2 & AZs

2017-05-23 Thread Belmiro Moreira
Hi David,
AZs are basically aggregates.
In cells_v2 aggregates are defined in the cell_api (the API database), so it
will be possible to have multiple AZs per cell and AZs that span different
cells.
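
A minimal, hedged sketch of that relationship with python-novaclient (the
session object and the names below are illustrative assumptions):

    # An AZ is just a host aggregate with availability_zone metadata set.
    from novaclient import client

    nova = client.Client('2.1', session=sess)  # 'sess' assumed to exist
    agg = nova.aggregates.create('rack-1', availability_zone='az-east')
    nova.aggregates.add_host(agg, 'compute-01')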

Belmiro

On Wed, May 24, 2017 at 5:14 AM, David Medberry wrote:

> Hi Devs and Implementers,
>
> A question came up tonight in the Colorado OpenStack meetup regarding
> cells v2 and availability zones.
>
> Can a cell contain multiple AZs? (I assume this is yes.)
>
> Can an AZ contain multiple cells? (I assumed this is no, but now in
> thinking about it, that's probably not right.)
>
> What's the proper way to think about this? In general, I'm considering AZs
> primarily as a fault zone type of mechanism (though they can be used in
> other ways.)
>
> Is there a clear diagram/documentation about this?
>
> And consider this to be an Ocata/Pike and later only type of question.
>
> Thanks.
>
> -dave
>


Re: [openstack-dev] [neutron][heat] - making Neutron more friendly for orchestration

2017-05-23 Thread Kevin Benton
We chatted a bit about this on IRC. The issue here is that subnets that
belong to router:external networks are not visible unless the network is
shared as well.
So the only way users can learn which subnets to pick is from the list of
subnet UUIDs in the json body of the network 'subnets' field.[1]
They would basically be picking blindly because they don't know any details
about the subnets on the external network.

I think for us to allow a reasonable workflow where they pick an external
subnet instead, we would need to revisit the decision to hide external
subnets from users.
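
A minimal, hedged sketch of that blind picking with python-neutronclient
(session setup and names are illustrative assumptions):

    # For a non-shared router:external network, a normal user only sees
    # bare subnet UUIDs in the network body; subnet details are hidden.
    from neutronclient.v2_0 import client

    neutron = client.Client(session=sess)  # 'sess' assumed to exist
    for net in neutron.list_networks(**{'router:external': True})['networks']:
        print(net['subnets'])  # UUIDs only - no CIDR, no IP version
        # neutron.show_subnet(net['subnets'][0]) would fail for such users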


1.
https://github.com/openstack/neutron-lib/blob/ca299b8e47fdd5030dda596fd779beb3e5bea6cf/neutron_lib/api/definitions/network.py#L43

On Fri, May 19, 2017 at 3:27 PM, Armando M.  wrote:

>
>
> On 19 May 2017 at 14:54, Clark Boylan  wrote:
>
>> On Fri, May 19, 2017, at 02:03 PM, Kevin Benton wrote:
>> > I split this conversation off of the "Is the pendulum swinging on PaaS
>> > layers?" thread [1] to discuss some improvements we can make to Neutron
>> > to
>> > make orchestration easier.
>> >
>> > There are some pain points that heat has when working with the Neutron
>> > API.
>> > I would like to get them converted into requests for enhancements in
>> > Neutron so the wider community is aware of them.
>> >
>> > Starting with the port/subnet/network relationship - it's important to
>> > understand that IP addresses are not required on a port.
>> >
>> > >So knowing now that a Network is a layer-2 network segment and a Subnet
>> > is... effectively a glorified DHCP address pool
>> >
>> > Yes, a Subnet controls IP address allocation as well as setting up
>> > routing
>> > for routers, which is why routers reference subnets instead of networks
>> > (different routers can route for different subnets on the same network).
>> > It
>> > essentially dictates things related to L3 addressing and provides
>> > information for L3 reachability.
>>
>> One thing that is odd about this is when creating a router you specify
>> the gateway information using a network which is l2 not l3. Seems like
>> it would be more correct to use a subnet rather than a network there?
>>
>
> I think this is due to the way external networks ended up being modeled in
> neutron. I suppose we could have allowed the user to specify a subnet, so
> long as it fell in the bucket of subnets that belong to a router:external
> network.
>
>
>>
>> Clark
>>
>
>
>


Re: [openstack-dev] [tripleo] Validations before upgrades and updates

2017-05-23 Thread Marios Andreou
On Wed, May 24, 2017 at 4:09 AM, Graeme Gillies  wrote:

> On 08/05/17 21:45, Marios Andreou wrote:
> > Hi folks, after some discussion locally with colleagues about improving
> > the upgrades experience, one of the items that came up was pre-upgrade
> > and update validations. I took an AI to look at the current status of
> > tripleo-validations [0] and posted a simple WIP [1] intended to be run
> > before an undercloud update/upgrade and which just checks service
> > status. It was pointed out by shardy that for such checks it is better
> > to instead continue to use the per-service  manifests where possible
> > like [2] for example where we check status before N..O major upgrade.
> > There may still be some undercloud specific validations that we can land
> > into the tripleo-validations repo (thinking about things like the
> > neutron networks/ports, validating the current nova nodes state etc?).
> >
> > So do folks have any thoughts about this subject - for example the kinds
> > of things we should be checking - Steve said he had some reviews in
> > progress for collecting the overcloud ansible puppet/docker config into
> > an ansible playbook that the operator can invoke for upgrade of the
> > 'manual' nodes (for example compute in the N..O workflow) - the point
> > being that we can add more per-service ansible validation tasks into the
> > service manifests for execution when the play is run by the operator -
> > but I'll let Steve point at and talk about those.
> >
> > cheers, marios
> >
> > [0] https://github.com/openstack/tripleo-validations
> > [1] https://review.openstack.org/#/c/462918/
> > [2]  https://github.com/openstack/tripleo-heat-templates/blob/
> stable/ocata/puppet/services/neutron-api.yaml#L197
> >
> >
>
> Hi Marios,
>
> Forgive me if I misunderstand here, but it looks like part of this goal
> is to do things like ensure the overcloud is in a decent state before an
> upgrade/update is executed.
>
> How would this work in a situation where I have hit an openstack bug
> which causes my cinder service to stop working/fail, and a fix has
> been created/packaged, ready for me to update my overcloud with, but the
> validations bomb out because cinder isn't running (and I can't update my
> overcloud to the newest package with the fix because the validation fails?)
>
>
o/ right... so there are roughly two groups of things here - validations
for the undercloud (of which we don't have much and we want to add some)
and validations for the overcloud. For the former we are targeting
tripleo-validations and for the latter adding to the existing service
checks in the tripleo-heat-template service manifests for execution during
the upgrade.

For both we need a way to disable them - one of the key concerns is the
scenario you describe. For the overcloud service checks we already have that
at least for the current simple "is service running" check (grep
SkipUpgradeConfigTags at
https://docs.openstack.org/developer/tripleo-docs/post_deployment/upgrade.html).
For the tripleo-validations I believe there is a 'validations fatal' type
flag already that you can pass to the client.

Hope that answers your concern.



> Regards,
>
> Graeme
>
> --
> Graeme Gillies
> Principal Systems Administrator
> Openstack Infrastructure
> Red Hat Australia
>


Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-23 Thread Tom Fifield

On 2017-05-24 at 09:38 AM, Rochelle Grober wrote:

  From: Ildiko
  > On 2017. May 23., at 15:43, Sean McGinnis wrote:


On Mon, May 22, 2017 at 05:50:50PM -0500, Anne Gentle wrote:

On Mon, May 22, 2017 at 5:41 PM, Sean McGinnis wrote:



[snip]



Hey Sean, is the "right to merge" the top difficulty you envision
with 1 or 2? Or is it finding people to do the writing and reviews?
Curious about your thoughts and if you have some experience with
specific day-to-day behavior here, I would love your insights.

Anne


I think it's more about finding people to do the writing and reviews,
though having incentives like having more say in that area of things
could be beneficial for finding those people.


I think it is important to note here that by having the documentation (in its
easily identifiable, own folder) living together with the code in the same
repository you have the developer(s) of the feature as first-line candidates
for adding documentation to their change.

I know that writing good technical documentation is its own profession, but
having the initial data there, which can be fixed by experienced writers if
needed, is a huge win compared to anything separate, where you might not
have any documentation at all.

So having the ability to -1 a change because of a lack of documentation
might on one hand be a process change for reviewers, but it gives you the
docs contributors as well.


Possible side benefits here:  If a new, wannabe developer starts with the docs 
to figure out how to participate in the project, they may/will (if encouraged) 
file bugs against the docs where they are wrong or lacking.  Beyond that, if 
the newbie is reading the code s/he may just fix some low hanging fruit docs 
issues, or even go deeper.  I know, devs don't read docs, but I think they 
sneak looks when they think no one is looking.  And then get infuriated if the 
docs don't match the code.  Perusers of code have more time to address issues 
than firefighters (fixing high priority bugs), so it's possible that this new 
approach will encourage more complete documentation.  I can be optimistic, too.


+1, contributing to documentation should be a much easier starting point 
than code & a good way to learn the gerrit workflow. If the doc patch 
coming in from the newbie is "better than what's there", it should be 
merged swiftly.




--Rocky
  

So to summarize, the changes that Alex described do not mean that the
core team has to write the documentation themselves or find a team of
technical writers before applying the changes, but that they should be
conscious about whether docs are added along with the code changes.

Thanks,
Ildikó









Re: [openstack-dev] [heat] [devstack] [infra] heat api services with uwsgi

2017-05-23 Thread Rabi Mishra
On Tue, May 23, 2017 at 11:57 PM, Zane Bitter  wrote:

> On 23/05/17 01:23, Rabi Mishra wrote:
>
>> Hi All,
>>
>> As per the updated community goal[1]  for api deployment with wsgi,
>> we've to transition to use uwsgi rather than mod_wsgi at the gate. It
>> also seems mod_wsgi support would be removed from devstack in Queens.
>>
>> I've been working on a patch[2] for the transition and encountered a few
>> issues as below.
>>
>> 1. We encode the stack_identifier (along with the path
>> separator) in heatclient. So, requests with encoded path separators are
>> dropped by apache (with 404), if we don't have 'AllowEncodedSlashes On'
>> directive in the site/vhost config[3].
>>
>
> We'd probably want 'AllowEncodedSlashes NoDecode'.
>

Yeah, that would be ideal for supporting slashes in stack and resource
names where we take care of the encoding and decoding.
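
As a quick, hedged illustration of the encoding difference (the stack
identifier below is made up):

    # Default quoting keeps '/' as a path separator; quoting it away
    # produces %2F, which apache rejects unless AllowEncodedSlashes is set.
    from six.moves.urllib import parse

    stack_id = 'mystack/2ab5a6a4-aaaa-bbbb-cccc-ddddeeeeffff'
    parse.quote(stack_id)           # 'mystack/2ab5...'
    parse.quote(stack_id, safe='')  # 'mystack%2F2ab5...'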


>> Setting this for mod_proxy_uwsgi[4] seems to work on fedora but not
>> ubuntu.  From my testing, it seems it has to be set in 000-default.conf
>> for ubuntu.
>>
>> Rather than messing with the devstack plugin code, I went ahead proposed
>> a change to not encode the path separators in heatclient[5] ( Anyway
>> they would be decoded by apache with the directive 'AllowEncodedSlashes
>> On' before it's consumed by the service) which seem to have fixed those
>> 404s.
>>
>
> Pasting my comment from the patch:
>
> One potential problem with this is that you can probably craft a stack
> name in such a way that heatclient ends up calling a real but unexpected
> URL. (I don't think this is a new problem, but it's likely the problem that
> the default value of AllowEncodedSlashes is designed to fix, and we're
> circumventing it here.)
>

> It seems to me the ideal would be to force '/'s to be encoded when they
> occur in the stack and resource names. Clearly they should never have been
> encoded when they're actual path separators (e.g. between the stack name
> and stack ID).
>
> It'd be even better if Apache were set to "AllowEncodedSlashes NoDecode"
> and we could then decode stack/resource names that include slashes after
> splitting at the path separators, so that those would actually work. I
> don't think the routing framework can handle that though.
>
>
I don't think we even support slashes (encoded or not) in stack name. The
validation below would not allow it.

https://git.openstack.org/cgit/openstack/heat/tree/heat/engine/stack.py#n143
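
Roughly, that check is a simple name pattern; a hedged approximation (see
heat/engine/stack.py for the real rule):

    import re

    # A stack name containing '/' can never pass a pattern like this.
    if not re.match(r'[a-zA-Z][a-zA-Z0-9_.-]*$', 'my/stack'):
        raise ValueError('invalid stack name')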

As far as resource names are concerned, we don't encode or decode them
appropriately for it to work as expected. Creating a stack with a resource
name containing '/' fails with a validation error, as it's not encoded
inside the template snippet and the validation below would fail.

https://git.openstack.org/cgit/openstack/heat/tree/heat/engine/resource.py#n214

> For that reason I believe we disallow slashes in stack/resource names. So
> with "AllowEncodedSlashes Off" we'd get the right behaviour (which is to
> always 404 when the stack/resource name contains a slash).
>

>
>> Is there a generic way to set the above directive (when using
>> apache+mod_proxy_uwsgi) in the devstack plugin?
>>
>> 2.  With the above, most of the tests seem to work fine other than the
>> ones using waitcondition, where we signal back from the vm to the api
>>
>
> Not related to the problem below, but I believe that when signalling
> through the heat-cfn-api we use an arn to identify the stack, and I suspect
> that slashes in the arn are escaped at or near the source. So we may have
> no choice but to find a way to turn on AllowEncodedSlashes. Or is it in the
> query string part anyway?
>
Yeah, it's not related to the problem below as the request is not reaching
apache at all. I've taken care of the above issue in the patch itself[1]
and the signal url looks ok to me[2].

[1] https://review.openstack.org/#/c/462216/11/heat/common/identifier.py

[2]
http://logs.openstack.org/16/462216/11/check/gate-heat-dsvm-functional-convg-mysql-lbaasv2-non-apache-ubuntu-xenial/e7d9e90/console.html#_2017-05-20_07_04_30_500696

>> services. I could see "curl: (7) Failed to connect to 10.0.1.78 port
>> 80: No route to host" in the vm console logs[6].
>>
>> It could connect to heat api services using ports 8004/8000 without this
>> patch, but not sure why not port 80? I tried testing this locally and
>> didn't see the issue though.
>>
>> Is this due to some infra settings or something else?
>>
>>
>> [1] https://governance.openstack.org/tc/goals/pike/deploy-api-in
>> -wsgi.html
>>
>> [2] https://review.openstack.org/#/c/462216/
>>
>> [3]
>> https://github.com/openstack/heat/blob/master/devstack/files
>> /apache-heat-api.template#L9
>>
>> [4]
>> http://logs.openstack.org/16/462216/6/check/gate-heat-dsvm-f
>> unctional-convg-mysql-lbaasv2-non-apache-ubuntu-xenial/
>> fbd06d6/logs/apache_config/heat-wsgi-api.conf.txt.gz
>>
>> [5] https://review.openstack.org/#/c/463510/
>>
>> [6]
>> http://logs.openstack.org/16/462216/11/check/gate-heat-dsvm-
>> functional-convg-mysql-lbaas

Re: [openstack-dev] [gnocchi] Running tests

2017-05-23 Thread aalvarez
Ok, so I started the tests using:

tox -e py27-postgresql-file

The suite starts running fine, but then I get a failing test:

==
Failed 1 tests - output below:
==

gnocchi.tests.test_indexer.TestIndexerDriver.test_list_resources_without_history


Captured traceback:
~~~
Traceback (most recent call last):
  File "gnocchi/tests/base.py", line 57, in skip_if_not_implemented
return func(*args, **kwargs)
  File "gnocchi/tests/test_indexer.py", line 839, in
test_list_resources_without_history
details=True)
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/oslo_db/api.py",
line 150, in wrapper
ectxt.value = e.inner_exc
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
line 220, in __exit__
self.force_reraise()
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
line 196, in force_reraise
six.reraise(self.type_, self.value, self.tb)
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/oslo_db/api.py",
line 138, in wrapper
return f(*args, **kwargs)
  File "gnocchi/indexer/sqlalchemy.py", line 1048, in list_resources
all_resources.extend(q.all())
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
line 2703, in all
return list(self)
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
line 2855, in __iter__
return self._execute_and_instances(context)
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
line 2878, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
line 945, in execute
return meth(self, multiparams, params)
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py",
line 263, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
line 1046, in _execute_clauseelement
if not self.schema_for_object.is_default else None)
  File "", line 1, in 
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py",
line 436, in compile
return self._compiler(dialect, bind=bind, **kw)
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py",
line 442, in _compiler
return dialect.statement_compiler(dialect, self, **kw)
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/sql/compiler.py",
line 435, in __init__
Compiled.__init__(self, dialect, statement, **kwargs)
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/sql/compiler.py",
line 216, in __init__
self.string = self.process(self.statement, **compile_kwargs)
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/sql/compiler.py",
line 242, in process
return obj._compiler_dispatch(self, **kwargs)
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/sql/visitors.py",
line 81, in _compiler_dispatch
return meth(self, **kw)
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/sql/compiler.py",
line 1716, in visit_select
for name, column in select._columns_plus_names
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/sql/compiler.py",
line 1488, in _label_select_column
**column_clause_args
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/sql/visitors.py",
line 75, in _compiler_dispatch
def _compiler_dispatch(self, visitor, **kw):
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/fixtures/_fixtures/timeout.py",
line 52, in signal_handler
raise TimeoutException()
fixtures._fixtures.timeout.TimeoutException



==
Totals

[openstack-dev] [Nova] [Cells] Stupid question: Cells v2 & AZs

2017-05-23 Thread David Medberry
Hi Devs and Implementers,

A question came up tonight in the Colorado OpenStack meetup regarding cells
v2 and availability zones.

Can a cell contain multiple AZs? (I assume this is yes.)

Can an AZ contain multiple cells? (I assumed this is no, but now in thinking
about it, that's probably not right.)

What's the proper way to think about this? In general, I'm considering AZs
primarily as a fault zone type of mechanism (though they can be used in
other ways.)

Is there a clear diagram/documentation about this?

And consider this to be an Ocata/Pike and later only type of question.

Thanks.

-dave


Re: [openstack-dev] [murano] Meeting time

2017-05-23 Thread Felipe Monteiro
Hi Paul,

I'm open to changing the meeting time, although I'd like some input from
Murano cores, too. What times work for you and your colleagues? I can
create a patch in infra and add you and others to it to allow for people to
effectively vote for what times you prefer.

Felipe

On Tue, May 23, 2017 at 12:08 PM, Paul Bourke wrote:

> Hi Felipe / Murano community,
>
> I was wondering how would people feel about revising the time for the
> Murano weekly meeting?
>
> Personally the current time is difficult for me to attend as it falls at
> the end of a work day, I also have some colleagues that would like to
> attend but can't at the current time.
>
> Given recent low attendance, would another time suit people better?
>
> Thanks,
> -Paul
>


Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-23 Thread Rochelle Grober
 From: Ildiko 
 > On 2017. May 23., at 15:43, Sean McGinnis wrote:
> >
> > On Mon, May 22, 2017 at 05:50:50PM -0500, Anne Gentle wrote:
> >> On Mon, May 22, 2017 at 5:41 PM, Sean McGinnis wrote:
> >>
> >>>
> >>> [snip]
> >>>
> >>
> >> Hey Sean, is the "right to merge" the top difficulty you envision
> >> with 1 or 2? Or is it finding people to do the writing and reviews?
> >> Curious about your thoughts and if you have some experience with
> >> specific day-to-day behavior here, I would love your insights.
> >>
> >> Anne
> >
> > I think it's more about finding people to do the writing and reviews,
> > though having incentives like having more say in that area of things
> > could be beneficial for finding those people.
> 
> I think it is important to note here that by having the documentation (in its
> easily identifiable, own folder) living together with the code in the same
> repository you have the developer(s) of the feature as first-line candidates
> for adding documentation to their change.
> 
> I know that writing good technical documentation is its own profession, but
> having the initial data there, which can be fixed by experienced writers if
> needed, is a huge win compared to anything separate, where you might not
> have any documentation at all.
> 
> So having the ability to -1 a change because of a lack of documentation
> might on one hand be a process change for reviewers, but it gives you the
> docs contributors as well.

Possible side benefits here:  If a new, wannabe developer starts with the docs 
to figure out how to participate in the project, they may/will (if encouraged) 
file bugs against the docs where they are wrong or lacking.  Beyond that, if 
the newbie is reading the code s/he may just fix some low hanging fruit docs 
issues, or even go deeper.  I know, devs don't read docs, but I think they 
sneak looks when they think no one is looking.  And then get infuriated if the 
docs don't match the code.  Perusers of code have more time to address issues 
than firefighters (fixing high priority bugs), so it's possible that this new 
approach will encourage more complete documentation.  I can be optimistic, too.

--Rocky
 
> So to summarize, the changes that Alex described do not mean that the
> core team has to write the documentation themselves or find a team of
> technical writers before applying the changes, but that they should be
> conscious about whether docs are added along with the code changes.
> 
> Thanks,
> Ildikó
> 
> 
> 


Re: [openstack-dev] [tripleo] Validations before upgrades and updates

2017-05-23 Thread Graeme Gillies
On 08/05/17 21:45, Marios Andreou wrote:
> Hi folks, after some discussion locally with colleagues about improving
> the upgrades experience, one of the items that came up was pre-upgrade
> and update validations. I took an AI to look at the current status of
> tripleo-validations [0] and posted a simple WIP [1] intended to be run
> before an undercloud update/upgrade and which just checks service
> status. It was pointed out by shardy that for such checks it is better
> to instead continue to use the per-service  manifests where possible
> like [2] for example where we check status before N..O major upgrade.
> There may still be some undercloud specific validations that we can land
> into the tripleo-validations repo (thinking about things like the
> neutron networks/ports, validating the current nova nodes state etc?).
> 
> So do folks have any thoughts about this subject - for example the kinds
> of things we should be checking - Steve said he had some reviews in
> progress for collecting the overcloud ansible puppet/docker config into
> an ansible playbook that the operator can invoke for upgrade of the
> 'manual' nodes (for example compute in the N..O workflow) - the point
> being that we can add more per-service ansible validation tasks into the
> service manifests for execution when the play is run by the operator -
> but I'll let Steve point at and talk about those. 
> 
> cheers, marios
> 
> [0] https://github.com/openstack/tripleo-validations 
> [1] https://review.openstack.org/#/c/462918/
> [2]  
> https://github.com/openstack/tripleo-heat-templates/blob/stable/ocata/puppet/services/neutron-api.yaml#L197
>  
> 
> 

Hi Marios,

Forgive me if I misunderstand here, but it looks like part of this goal
is to do things like ensure the overcloud is in a decent state before an
upgrade/update is executed.

How would this work in a situation where I have hit an openstack bug
which causes my cinder service to stop working/fail, and a fix has
been created/packaged, ready for me to update my overcloud with, but the
validations bomb out because cinder isn't running (and I can't update my
overcloud to the newest package with the fix because the validation fails?)

Regards,

Graeme

-- 
Graeme Gillies
Principal Systems Administrator
Openstack Infrastructure
Red Hat Australia



Re: [openstack-dev] on the subject of when we should be deprecating API's in a release cycle

2017-05-23 Thread Amrith Kumar

> -Original Message-
> From: Matt Riedemann [mailto:mriede...@gmail.com]
> Sent: Tuesday, May 23, 2017 8:59 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] on the subject of when we should be deprecating
> API's in a release cycle
> 
> On 5/23/2017 7:50 PM, Amrith Kumar wrote:
> > TL;DR
> >
> > When IaaS projects in OpenStack deprecate their API's after milestone
> > 1, it puts PaaS projects in a pickle. I think it would be much better
> > for PaaS projects if the IaaS projects could please do their
> > deprecations well before
> > milestone-1
> >
> > The longer issue:
> >
> > OK, the guy from Trove is bitching again. The Trove gate is broken (again).
> > This time, it appears to be because Trove was using a deprecated Nova
> > Networking API call, and even though everyone and their brother knew
> > that Nova Networking was gone-gone, Trove never got the memo, and like
> > a few others got hit by it.
> >
> > But the fact of the matter is this, it happened. This has happened in
> > previous releases as well where at milestone 2 we are scrambling to
> > fix something because an IaaS project did a planned deprecation.
> >
> > I'm wondering whether we can get a consensus around doing these
> > earlier in the cycle, like before milestone-1, so other projects which
> > depend on the API have a chance to handle it with enough time to test and
> verify.
> >
> > Just to be explicitly clear, I AM NOT pointing fingers at Nova. I knew
> > that NN was gone, just that a couple of API's remained in use and we
> > got bit in the gluteus maximus. I asked Matt for help to find out
> > what API's had been deprecated, he almost immediately helped me with a
> > list and I'm working through getting them fixed (Thanks Matt).
> >
> > I'm merely raising the generic question of whether or not planned
> > deprecations should be done before Milestone 1.
> >
> > Thanks for reading the longer version ...
> >
> > --
> > Amrith Kumar
> > amrith.ku...@gmail.com
> >
> >
> >
> >
> 
> The novaclient changes to deprecate the networking proxy CLIs and APIs was
> done in the Newton release. They were removed and released in 8.0.0 which was
> milestone 1 of the Pike release. So what are you specifically asking for
> here? Maybe Trove didn't get hit until recently because novaclient 8.0.0
> wasn't pulled into upper-constraints? That might have been why it seems
> recent for Trove. I think the u-c change was gating on Horizon fixing their
> stuff, but maybe u-c changes aren't gated on Trove unit tests?
> 
[Amrith Kumar] Hmm, trove's unit tests are gating against u-c, so that may have 
been the reason. You may be correct, u-c changes are not gated on Trove yet and 
I hesitate to request that given how flaky our gate currently is. The container 
stuff is coming along nicely and is much more reliable so I may be able to have 
a reliable containerized version that can be tested against before long and 
then I will request gating u-c changes on Trove as well.

> Admittedly the python API binding deprecations in novaclient weren't using
> the python warnings module with the DeprecationWarning, which we've been
> pretty consistent about with other API deprecations in the novaclient (like
> with the volume, image and baremetal proxy APIs). We dropped the ball on the
> networking ones though. We have docs in novaclient about how to deprecate
> things, but it's mostly CLI-focused so I'm going to update that to be
> explicit about deprecation warnings in the API bindings too.
> 

[Amrith Kumar] Yeah, but I won't try to hide behind that; we should have seen 
this coming. In fairness, looking at how the neutron stuff is implemented in 
Trove makes me believe that we have a refactoring project in the near future.

> --
> 
> Thanks,
> 
> Matt
> 




Re: [openstack-dev] on the subject of when we should be deprecating API's in a release cycle

2017-05-23 Thread Matt Riedemann

On 5/23/2017 7:50 PM, Amrith Kumar wrote:

TL;DR

When IaaS projects in OpenStack deprecate their API's after milestone 1, it
puts PaaS projects in a pickle. I think it would be much better for PaaS
projects if the IaaS projects could please do their deprecations well before
milestone-1

The longer issue:

OK, the guy from Trove is bitching again. The Trove gate is broken (again).
This time, it appears to be because Trove was using a deprecated Nova
Networking API call, and even though everyone and their brother knew that
Nova Networking was gone-gone, Trove never got the memo, and like a few
others got hit by it.

But the fact of the matter is this, it happened. This has happened in
previous releases as well where at milestone 2 we are scrambling to fix
something because an IaaS project did a planned deprecation.

I'm wondering whether we can get a consensus around doing these earlier in
the cycle, like before milestone-1, so other projects which depend on the
API have a chance to handle it with enough time to test and verify.

Just to be explicitly clear, I AM NOT pointing fingers at Nova. I knew that
NN was gone, just that a couple of API's remained in use and we got bit in
the gluteus maximus. I asked Matt for help to find out what API's had been
deprecated, he almost immediately helped me with a list and I'm working
through getting them fixed (Thanks Matt).

I'm merely raising the generic question of whether or not planned
deprecations should be done before Milestone 1.

Thanks for reading the longer version ...

--
Amrith Kumar
amrith.ku...@gmail.com







The novaclient changes to deprecate the networking proxy CLIs and APIs 
was done in the Newton release. They were removed and released in 8.0.0 
which was milestone 1 of the Pike release. So what are you specifically 
asking for here? Maybe Trove didn't get hit until recently because 
novaclient 8.0.0 wasn't pulled into upper-constraints? That might have 
been why it seems recent for Trove. I think the u-c change was gating on 
Horizon fixing their stuff, but maybe u-c changes aren't gated on Trove 
unit tests?


Admittedly the python API binding deprecations in novaclient weren't 
using the python warnings module with the DeprecationWarning, which 
we've been pretty consistent about with other API deprecations in the 
novaclient (like with the volume, image and baremetal proxy APIs). We 
dropped the ball on the networking ones though. We have docs in 
novaclient about how to deprecate things, but it's mostly CLI-focused so 
I'm going to update that to be explicit about deprecation warnings in 
the API bindings too.
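
For reference, the usual shape of such a binding deprecation (a hedged
sketch only, not novaclient's actual code; the name and message are
illustrative assumptions):

    import warnings

    def list_tenant_networks():
        # Hypothetical binding: warn callers a cycle ahead of removal.
        warnings.warn('The nova-network proxy API bindings are deprecated '
                      'and will be removed.', DeprecationWarning)
        return []

    warnings.simplefilter('always', DeprecationWarning)
    list_tenant_networks()  # emits the warning at the call site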


--

Thanks,

Matt



Re: [openstack-dev] [nova] Supporting volume_type when booting from volume

2017-05-23 Thread John Griffith
On Tue, May 23, 2017 at 3:43 PM, Dean Troyer  wrote:

> On Tue, May 23, 2017 at 3:42 PM, Sean McGinnis wrote:
> >
> >> If it's just too much debt and risk of slippery slope type arguments on
> >> the Nova side (and that's fair, after lengthy conversations with Nova
> folks
> >> I get it), do we consider just orchestrating this from say OpenStack
> Client
> >> completely?  The last resort (and it's an awful option) is orchestrate
> the
> >> whole thing from Cinder.  We can certainly make calls to Nova and pass
> in
> >> the volume using the semantics that are already accepted and in use.
> >>
> >> John
> >>
> >
> > /me runs away screaming!
>
> Now I know Sean's weakness...
>
Ha!  I thought it was the put-it-in-Cinder part (so I have a patch queued
up for emergencies when I need to threaten him). :)


>
> In this particular case it may not be necessary, but I think early
> implementation of composite features in clients is actually the right
> way to prove the utility of these things going forward.

Yeah, I've been doing more with OSC as of late and it really has all the
pieces and currently is one of the few places in OpenStack that really
knows what the other actors are up to (or at least how to communicate with
them and ask them to do things).

It does seem like a reasonable place (OSC), and as far as some major
objections I've heard already around "where would you draw the line"...
yeah, that's important.  To start, though, orchestrated "features" that have
been requested for multiple releases and are actually fairly trivial to
implement might be a great starting point.  It's at least worth thinking on
for a bit in my opinion.
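
A minimal, hedged sketch of what that client-side orchestration could look
like (session setup, names and flavors are illustrative assumptions):

    # Create the volume with the desired type in cinder, then boot from
    # it in nova - the composite feature discussed in this thread.
    from cinderclient import client as cinder_client
    from novaclient import client as nova_client

    cinder = cinder_client.Client('2', session=sess)
    nova = nova_client.Client('2.1', session=sess)

    vol = cinder.volumes.create(size=10, volume_type='fast-ssd',
                                imageRef=image_id)
    # ...poll until vol.status == 'available'...
    bdm = [{'uuid': vol.id, 'source_type': 'volume',
            'destination_type': 'volume', 'boot_index': 0}]
    server = nova.servers.create('bfv', image=None, flavor=flavor_id,
                                 block_device_mapping_v2=bdm)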


> Establish and
> document the process, implement in a way for users to opt-in, and move
> into the services as they are proven useful.  With the magic of
> microversions we can then migrate from client-side to server-side as
> the implementations roll through the deployment lifecycle.
>
> This last bit is important.   Even today many of our users are unable
> to take advantage of useful features that are already over a year old
> due to the upgrade delay that production deployments see.
> Implementing new things in clients helps users on existing clouds.
> Sure other client implementations are left to their own work, but they
> should have a common process model to follow, and any choice to
> deviate from that is their own.
>
> dt
>
> --
>
> Dean Troyer
> dtro...@gmail.com
>


Re: [openstack-dev] [nova] Supporting volume_type when booting from volume

2017-05-23 Thread Yingjun Li
It’s definitely a nice feature to have for the end user; actually, we
implemented it on our own because we need this but nova doesn’t support it.

Yingjun

> On May 24, 2017, at 6:58 AM, Jay Bryant  wrote:
> 
> 
> On 5/23/2017 9:56 AM, Duncan Thomas wrote:
>> 
>> 
>> On 23 May 2017 4:51 am, "Matt Riedemann" wrote:
>> 
>> 
>> Is this really something we are going to have to deny at least once per 
>> release? My God how is it that this is the #1 thing everyone for all time 
>> has always wanted Nova to do for them?
>> 
>> Is it entirely unreasonable to turn the question around and ask why, given 
>> it is such a commonly requested feature, the Nova team are so resistant to 
>> it?
>> 
>> 
> 
> I am going to jump into the fray here ...
> 
> I think that at some point we need to do a cost/benefit analysis.  If 
> customers really want this, then maybe it is worth the potential technical 
> debt.  Going down a route of hacking something together from the client seems 
> to potentially incur more technical debt and create a worse UX.
> 
> At the risk of having things thrown at me, I am going to say that this could 
> have a number of benefits.  It could be leveraged by the Cinder Ephemeral 
> driver that is being considered.  Volume types associated with compute hosts 
> could be used to ensure use of storage local to the compute host that is 
> managed by Cinder.
> 
> Anyway, that is my $0.02.
> 
> Jay



[openstack-dev] on the subject of when we should be deprecating API's in a release cycle

2017-05-23 Thread Amrith Kumar
TL;DR

When IaaS projects in OpenStack deprecate their API's after milestone 1, it
puts PaaS projects in a pickle. I think it would be much better for PaaS
projects if the IaaS projects could please do their deprecations well before
milestone-1

The longer issue:

OK, the guy from Trove is bitching again. The Trove gate is broken (again).
This time, it appears to be because Trove was using a deprecated Nova
Networking API call, and even though everyone and their brother knew that
Nova Networking was gone-gone, Trove never got the memo, and like a few
others got hit by it.

But the fact of the matter is this, it happened. This has happened in
previous releases as well where at milestone 2 we are scrambling to fix
something because an IaaS project did a planned deprecation.

I'm wondering whether we can get a consensus around doing these earlier in
the cycle, like before milestone-1, so other projects which depend on the
API have a chance to handle it with enough time to test and verify.

Just to be explicitly clear, I AM NOT pointing fingers at Nova. I knew that
NN was gone, just that a couple of API's remained in use and we got bit in
the gluteus maximus. I asked Matt for help to find out what API's had been
deprecated, he almost immediately helped me with a list and I'm working
through getting them fixed (Thanks Matt).

I'm merely raising the generic question of whether or not planned
deprecations should be done before Milestone 1.

Thanks for reading the longer version ...

--
Amrith Kumar
amrith.ku...@gmail.com






Re: [openstack-dev] [nova] Supporting volume_type when booting from volume

2017-05-23 Thread Michael Glasgow

On 5/23/2017 4:43 PM, Dean Troyer wrote:

In this particular case it may not be necessary, but I think early
implementation of composite features in clients is actually the right
way to prove the utility of these things going forward.  Establish and
document the process, implement in a way for users to opt-in, and move
into the services as they are proven useful.


A slight disadvantage of this approach is that the resulting 
incongruence between the client and the API is obfuscating.  When an end 
user can make accurate inferences about the API based on how the client 
works, that's a form of transparency that can pay dividends.


Also in terms of the "slippery slope" that has been raised, putting 
small bits of orchestration into the client creates a grey area there as 
well:  how much is too much?


OTOH I don't disagree with you.  This approach might be the best of 
several not-so-great options, but I wish I could think of a better one.


--
Michael Glasgow



Re: [openstack-dev] [tc] Active or passive role with our database layer

2017-05-23 Thread Jay Pipes

On 05/23/2017 07:07 PM, Boris Pavlovic wrote:
And how can someone who is trying to deploy OpenStack understand/find 
the right config for the db? Or is it an Ops task, and the community 
doesn't care about them?


Neither. It's the ops' responsibility to understand database 
configuration fundamentals (since, you know, they are operating a 
database...).


It's the dev community's responsibility to document necessary 
configuration settings, provide reasonable defaults and advise on 
how/when ops should adjust those defaults.
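
For example, a hedged sketch of how a project typically ships oslo.db
defaults that operators can then override in their own config (the
connection string below is illustrative):

    from oslo_config import cfg
    from oslo_db import options as db_options

    CONF = cfg.CONF
    # Registers the [database] options with project-chosen defaults.
    db_options.set_defaults(CONF,
                            connection='mysql+pymysql://nova:pw@db/nova',
                            max_pool_size=10)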


It's a give and take.

Best,
-jay



Re: [openstack-dev] Save the Date- Queens PTG

2017-05-23 Thread Monty Taylor

On 05/23/2017 06:27 PM, Anita Kuno wrote:

On 2017-05-23 07:25 PM, Anita Kuno wrote:

On 2017-04-05 06:05 PM, Erin Disney wrote:

We are excited to announce the September Project Teams Gathering in
Denver, CO at the Denver Renaissance Stapleton Hotel this September
11th-15th.

As mentioned at the Atlanta PTG feedback session in February, we had
narrowed down our PTG location options to Denver and Montreal. Based
on attendee feedback we received in Atlanta, we knew keeping room
rates down was a priority and are excited we were able to negotiate a
$149/night rate for attendees in Denver versus nearly $200/night in Montreal.

We are sensitive to international travel and have heard and
understand concerns regarding both travel to and from the U.S. amidst
political uncertainty. We are reviewing options for remote
participation for those unable to join us in person. Moving forward,
we are planning to host the February/March 2018 PTG in Europe. Stay
tuned for additional information on future locations for the PTG and
look forward to Sydney in November 2017 and Vancouver in May 2018 for
the upcoming OpenStack Summits.

We will share registration and sponsorship information soon on this
mailing list. Mark your calendars and we hope to see you in Denver!

Erin Disney
OpenStack Marketing
e...@openstack.org 




According to xe.com (my favourite currency exchange rate website) $140
USD is equivalent to $189 CDN (Canadian currency), and $200 CDN
(Canadian currency) is equal to $147 USD.

Sounds like a difference in room rate of about $10 per night in
Canadian currency and $7 per night in US currency.

I must be missing something.

Thank you,
Anita.

My mistake, I mis-read: $149 USD is equal to $201 CDN (Canadian
currency), so it looks like the same price to me.


I believe (although I do not know for absolute certain) that the prices 
given were already normalized to USD. So it wasn't $149USD vs $200CDN it 
was $149USD vs $200USD.


It is, of course, worth clarifying.




Re: [openstack-dev] Save the Date- Queens PTG

2017-05-23 Thread Anita Kuno

On 2017-05-23 07:25 PM, Anita Kuno wrote:

On 2017-04-05 06:05 PM, Erin Disney wrote:
We are excited to announce the September Project Teams Gathering in 
Denver, CO at the Denver Renaissance Stapleton Hotel this September 
11th-15th.


As mentioned at the Atlanta PTG feedback session in February, we had 
narrowed down our PTG location options to Denver and Montreal. Based 
on attendee feedback we received in Atlanta, we knew keeping room 
rates down was a priority and are excited we were able to negotiate a 
$149/night rate for attendees in Denver versus nearly $200/night in Montreal.


We are sensitive to international travel and have heard and 
understand concerns regarding both travel to and from the U.S. amidst 
political uncertainty. We are reviewing options for remote 
participation for those unable to join us in person. Moving forward, 
we are planning to host the February/March 2018 PTG in Europe. Stay 
tuned for additional information on future locations for the PTG and 
look forward to Sydney in November 2017 and Vancouver in May 2018 for 
the upcoming OpenStack Summits.


We will share registration and sponsorship information soon on this 
mailing list. Mark your calendars and we hope to see you in Denver!


Erin Disney
OpenStack Marketing
e...@openstack.org 




According to xe.com (my favourite currency exchange rate website) $140 
USD is equivalent to $189 CDN (Canadian currency), and $200 CDN 
(Canadian currency) is equal to $147 USD.


Sounds like a difference in room rate of about $10 per night in 
Canadian currency and $7 per night in US currency.


I must be missing something.

Thank you,
Anita.
My mistake, I mis-read: $149 USD is equal to $201 CDN (Canadian 
currency), so it looks like the same price to me.


Thanks,
Anita.



Re: [openstack-dev] Save the Date- Queens PTG

2017-05-23 Thread Anita Kuno

On 2017-04-05 06:05 PM, Erin Disney wrote:

We are excited to announce the September Project Teams Gathering in Denver, CO 
at the Denver Renaissance Stapleton Hotel this September 11th-15th.

As mentioned at the Atlanta PTG feedback session in February, we had narrowed 
down our PTG location options to Denver and Montreal. Based on attendee 
feedback we received in Atlanta, we knew keeping room rates down was a priority 
and are excited we were able to negotiate a $149/night rate for attendees in Denver 
versus nearly $200/night in Montreal.

We are sensitive to international travel and have heard and understand concerns 
regarding both travel to and from the U.S. amidst political uncertainty. We are 
reviewing options for remote participation for those unable to join us in 
person. Moving forward, we are planning to host the February/March 2018 PTG in 
Europe. Stay tuned for additional information on future locations for the PTG 
and look forward to Sydney in November 2017 and Vancouver in May 2018 for the 
upcoming OpenStack Summits.

We will share registration and sponsorship information soon on this mailing 
list. Mark your calendars and we hope to see you in Denver!

Erin Disney
OpenStack Marketing
e...@openstack.org 




According to xe.com (my favourite currency exchange rate website) $140 
USD is equivalent to $189 CDN (Canadian currency), and $200 CDN 
(Canadian currency) is equal to $147 USD.


Sounds like a difference in room rate of about $10 per night in Canadian 
currency and $7 per night in US currency.


I must be missing something.

Thank you,
Anita.



Re: [openstack-dev] [tc] Active or passive role with our database layer

2017-05-23 Thread Boris Pavlovic
Zane,


> This is your periodic reminder that we have ~50 applications sharing the
> same database and not only do none of them know how the deployer will
> configure the database, most will not even have an idea which set of
> assumptions the other ~49 are making about how the deployer will configure
> the database.


And how can someone who is trying to deploy OpenStack understand/find
the right config for the db? Or is it an Ops task, and the community doesn't
care about them?

I would rather give Ops one config and say that everything should work
with it, and find a way to align everybody in the community and make it
the default for all projects.

Best regards,
Boris Pavlovic

On Tue, May 23, 2017 at 2:18 PM, Zane Bitter  wrote:

> On 21/05/17 15:38, Monty Taylor wrote:
>
>> One might argue that HA strategies are an operator concern, but in
>> reality the set of workable HA strategies is tightly constrained by how
>> the application works, and the pairing an application expecting one HA
>> strategy with a deployment implementing a different one can have
>> negative results ranging from unexpected downtime to data corruption.
>>
>
> This is your periodic reminder that we have ~50 applications sharing the
> same database and not only do none of them know how the deployer will
> configure the database, most will not even have an idea which set of
> assumptions the other ~49 are making about how the deployer will configure
> the database.
>
> (Ditto for RabbitMQ.)
>
> - ZB
>
>


Re: [openstack-dev] [nova] Supporting volume_type when booting from volume

2017-05-23 Thread Jay Bryant


On 5/23/2017 9:56 AM, Duncan Thomas wrote:



On 23 May 2017 4:51 am, "Matt Riedemann" wrote:




Is this really something we are going to have to deny at least
once per release? My God how is it that this is the #1 thing
everyone for all time has always wanted Nova to do for them?


Is it entirely unreasonable to turn the question around and ask why, 
given it is such a commonly requested feature, the Nova team are so 
resistant to it?





I am going to jump into the fray here ...

I think that at some point we need to do a cost/benefit analysis.  If 
customers really want this, then maybe it is worth the potential 
technical debt.  Going down a route of hacking something together from 
the client seems to potentially incur more technical debt and create a 
worse UX.


At the risk of having things thrown at me, I am going to say that this 
could have a number of benefits.  It could be leveraged by the Cinder 
Ephemeral driver that is being considered.  Volume types associated with 
compute hosts could be used to ensure use of storage local to the 
compute host that is managed by Cinder.


Anyway, that is my $0.02.

Jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 21

2017-05-23 Thread Jeremy Stanley
On 2017-05-23 20:44:54 +0100 (+0100), Chris Dent wrote:
[...]
> ## OpenStack moving too fast and too slow
[...]
> summit happened, people moved on to other things and there wasn't
> much in the way of resolution. Is there anything we could or
> should be doing here?
[...]

The session for this recap seems to be closely related:

http://lists.openstack.org/pipermail/openstack-dev/2017-May/117239.html

In short, we said LTS is predicated on being able to skip releases
when upgrading (otherwise you're potentially asking to incrementally
upgrade through unsupported EOL branches which may themselves have
untested upgrade bugs and bitrot). There was therefore this
discussion (one which I missed so quite grateful for the recap) in
Boston to talk through that bit, and looks like discussing it
further on the ML was identified as the next step.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] How to slice the week to minimize conflicts

2017-05-23 Thread Jay Bryant
I had expected more cinder/nova sessions the first days, so I like that
proposal.

If we are able to minimize project overlap or have alternatives for the
last 3 days, I think we are moving towards a better solution.

Jay
On Fri, May 19, 2017 at 9:18 AM Thierry Carrez 
wrote:

> Emilien Macchi wrote:
> > On Thu, May 18, 2017 at 5:27 AM, Thierry Carrez 
> wrote:
> >> After giving it some thought, my current thinking is that we should
> >> still split the week in two, but should move away from an arbitrary
> >> horizontal/vertical split. My strawman proposal would be to split the
> >> week between inter-project work (+ teams that rely mostly on liaisons in
> >> other teams) on Monday-Tuesday, and team-specific work on
> Wednesday-Friday:
> >>
> >> Example of Monday-Tuesday rooms:
> >> Interop WG, Docs, QA, API WG, Packaging WG, Oslo, Goals helproom,
> >> Infra/RelMgt/support teams helpdesk, TC/SWG room, VM&BM Working group...
> >>
> >> Example of Wednesday-Thursday or Wednesday-Friday rooms:
> >> Nova, Cinder, Neutron, Swift, TripleO, Kolla, Infra...
> >
> > I like the idea of continuing to have Deployment tools part of
> > vertical projects room.
> > Though once it's confirmed, I would like to setup a 2 hours slot where
> > we meet together and make some cross-deployment-project collaboration.
> > In Atlanta, we managed to do it on last minute and I found it
> > extremely useful, let's repeat this but scheduled this time.
>
> Actually if you look above, I added the "Packaging WG" in the
> Monday-Tuesday rooms example. You could easily have 1 or 2 days there to
> discuss collaboration between packaging projects, before breaking out
> for 2 or 3 days with your own project team.
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [MassivelyDistributed] IRC Meeting tomorrow 15:00 UTC

2017-05-23 Thread lebre . adrien
Dear all, 

A gentle reminder for our meeting tomorrow. 
As usual, the agenda is available at: 
https://etherpad.openstack.org/p/massively_distributed_ircmeetings_2017 (line 
597)
Please feel free to add items.

Best, 
ad_rien_

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 21

2017-05-23 Thread Matt Riedemann

On 5/23/2017 2:44 PM, Chris Dent wrote:

Doing LTS is probably too big for that, but "stable branch
reviews" is not.


Oh, if only we had more things to review on stable branches. It's also,
at a bare minimum, about having people propose backports. Very few
people/organizations actually do that upstream. So it's always funny (in 
a sad way) how much people clamor for stable branch support upstream, 
and for a long time period, but people aren't even proposing backports 
upstream en masse. Anyway, there is my dig since you brought it back up. :)


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Supporting volume_type when booting from volume

2017-05-23 Thread Dean Troyer
On Tue, May 23, 2017 at 3:42 PM, Sean McGinnis  wrote:
>
>> If it's just too much debt and risk of slippery slope type arguments on
>> the Nova side (and that's fair, after lengthy conversations with Nova folks
>> I get it), do we consider just orchestrating this from say OpenStack Client
>> completely?  The last resort (and it's an awful option) is orchestrate the
>> whole thing from Cinder.  We can certainly make calls to Nova and pass in
>> the volume using the semantics that are already accepted and in use.
>>
>> John
>>
>
> /me runs away screaming!

Now I know Sean's weakness...

In this particular case it may not be necessary, but I think early
implementation of composite features in clients is actually the right
way to prove the utility of these things going forward.  Establish and
document the process, implement in a way for users to opt-in, and move
into the services as they are proven useful.  With the magic of
microversions we can then migrate from client-side to server-side as
the implementations roll through the deployment lifecycle.

This last bit is important.   Even today many of our users are unable
to take advantage of useful features that are already over a year old
due to the upgrade delay that production deployments see.
Implementing new things in clients helps users on existing clouds.
Sure other client implementations are left to their own work, but they
should have a common process model to follow, and any choice to
deviate from that is their own.
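
To make that concrete, here is a rough sketch of what the client-side
composite feature discussed in this thread (boot-from-volume with a
chosen volume type) might look like. This assumes openstacksdk's
Connection API; exact parameter names vary between SDK releases, and
all UUIDs and names below are placeholders:

    import openstack

    conn = openstack.connect(cloud='mycloud')  # reads clouds.yaml

    # Step 1: create the bootable volume ourselves, picking the
    # volume_type that Nova's boot-from-volume path won't pass through.
    volume = conn.block_storage.create_volume(
        name='boot-vol', size=10,
        image_id='IMAGE_UUID', volume_type='fast-ssd')
    conn.block_storage.wait_for_status(volume, status='available')

    # Step 2: boot from the pre-created volume; this kwarg maps to the
    # compute API's block_device_mapping_v2 field.
    server = conn.compute.create_server(
        name='vm1', flavor_id='FLAVOR_UUID',
        networks=[{'uuid': 'NET_UUID'}],
        block_device_mapping=[{
            'boot_index': 0, 'uuid': volume.id,
            'source_type': 'volume', 'destination_type': 'volume'}])

If that flow later moves server-side behind a microversion, the client
can switch transparently based on the maximum microversion the cloud
reports.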

dt

-- 

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Active or passive role with our database layer

2017-05-23 Thread Zane Bitter

On 21/05/17 15:38, Monty Taylor wrote:

One might argue that HA strategies are an operator concern, but in
reality the set of workable HA strategies is tightly constrained by how
the application works, and the pairing an application expecting one HA
strategy with a deployment implementing a different one can have
negative results ranging from unexpected downtime to data corruption.


This is your periodic reminder that we have ~50 applications sharing the 
same database and not only do none of them know how the deployer will 
configure the database, most will not even have an idea which set of 
assumptions the other ~49 are making about how the deployer will 
configure the database.


(Ditto for RabbitMQ.)

- ZB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Supporting volume_type when booting from volume

2017-05-23 Thread Sean McGinnis


If it's just too much debt and risk of slippery slope type arguments on 
the Nova side (and that's fair, after lengthy conversations with Nova 
folks I get it), do we consider just orchestrating this from say 
OpenStack Client completely?  The last resort (and it's an awful option) 
is orchestrate the whole thing from Cinder.  We can certainly make calls 
to Nova and pass in the volume using the semantics that are already 
accepted and in use.


John



/me runs away screaming!


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-23 Thread Lance Bragstad
I'm in favor of option #1. I think it encourages our developers to become
better writers with guidance from the docs team. While ensuring docs are
proposed prior to merging the implementation cross-repository is totally
possible, I think #1 makes that flow easier.

Thanks for putting together the options, Alex.

On Tue, May 23, 2017 at 11:02 AM, Ildiko Vancsa 
wrote:

> Hi Alex,
>
> First of all thank you for writing this up the summary and list options
> with their expected impacts.
>
> >
> > 1. We could combine all of the documentation builds, so that each
> project has a single doc/source directory that includes developer,
> contributor, and user documentation. This option would reduce the number of
> build jobs we have to run, and cut down on the number of separate sphinx
> configurations in each repository. It would completely change the way we
> publish the results, though, and we would need to set up redirects from all
> of the existing locations to the new locations and move all of the existing
> documentation under the new structure.
> >
> > 2. We could retain the existing trees for developer and API docs, and
> add a new one for "user" documentation. The installation guide,
> configuration guide, and admin guide would move here for all projects.
> Neutron's user documentation would include the current networking guide as
> well. This option would add 1 new build to each repository, but would allow
> us to easily roll out the change with less disruption in the way the site
> is organized and published, so there would be less work in the short term.
>
> I’m fully in favor of option #1 and/or option #2. From the perspective of
> trying to move step-by-step and giving project teams a chance to
> acclimatize to the changes, I think starting with #2 should be sufficient.
>
> Although if we think that option #1 is doable as a starting point and also
> end goal, you have my support for that too.
>
> >
> > 3. We could do option 2, but use a separate repository for the new
> user-oriented documentation. This would allow project teams to delegate
> management of the documentation to a separate review project-sub-team, but
> would complicate the process of landing code and documentation updates
> together so that the docs are always up to date.
> >
>
> As one of the advocates of having the documentation live together with
> the code, so that the experts on the code changes get a chance to add the
> corresponding documentation as well, I'm definitely against option
> #3. :)
>
> Thanks and Best Regards,
> Ildikó
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][tc][infra][security][stable] Proposal for shipping binaries and containers

2017-05-23 Thread Michał Jastrzębski
On 23 May 2017 at 08:13, Doug Hellmann  wrote:
> Excerpts from Davanum Srinivas (dims)'s message of 2017-05-23 10:44:30 -0400:
>> Team,
>>
>> Background:
>> For projects based on Go and Containers we need to ship binaries, for
>
> Can you elaborate on the use of the term "need" here. Is that because
> otherwise the projects can't be consumed? Is it the "norm" for
> projects from those communities? Something else?
>
>> example Kubernetes, etcd both ship binaries and maintain stable
>> branches as well.
>>   https://github.com/kubernetes/kubernetes/releases
>>   https://github.com/coreos/etcd/releases/
>>
>> Kubernetes for example ships container images to public registeries as well:
>>   
>> https://console.cloud.google.com/gcr/images/google-containers/GLOBAL/hyperkube?pli=1
>>   
>> https://github.com/kubernetes/kubernetes/tree/master/cluster/images/hyperkube
>
> What are the support lifetimes for those images? Who maintains them?
>
>> So here's a proposal based on the really long thread:
>> http://lists.openstack.org/pipermail/openstack-dev/2017-May/thread.html#116677
>>
>> The idea is to augment the existing processes for the new deliverables.
>>
>> * Projects define CI jobs for generating binaries and containers (some
>> already do!)
>> * Release team automation will kick builds off when specific versions
>> are released for the binaries and containers (Since Go based projects
>> can do cross-builds, we won't need to run these jobs on multiple
>> architectures which will keep the release process simple)
>
> I see how this would work for Go builds, since we would be tagging the
> thing being built. My understanding is that Kolla images are using the
> Kolla version, not the version of the software inside the image, though.
> How would that work? (Or maybe I misunderstood something from another
> thread and that's not how the images are versioned?)

Currently tagging is not a fully answered question. It depends what
cadence/method for pushing we end up with. But since one image can
have multiple tags, we can apply several at once. We can tag with :pike,
:pike-2 (rev number), and :version-of-main-component, all pointing to the
same image.
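
As a rough illustration (using docker-py, with a made-up image name),
one image ID can carry all three tags at once:

    import docker

    client = docker.from_env()
    image = client.images.get("kolla/centos-binary-nova-api:latest")
    # Release name, revision, and component version all point at the
    # same underlying image ID.
    for tag in ("pike", "pike-2", "15.0.0"):
        image.tag("kolla/centos-binary-nova-api", tag=tag)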

>> * Just like we upload stuff to tarballs.openstack.org, we will upload
>> binaries and containers there as well
>
> I know there's an infra spec for doing some of this, so I assume we
> anticipate having the storage capacity needed?
>
>> * Just like we upload things to pypi, we will upload containers with
>> specific versions to public repos.
>> * Projects can choose from the existing release models to make this
>> process as frequent as they need.
>>
>> Please note that I am deliberately ruling out the following
>> * Daily/Nightly releases that are accessible to end users, especially
>> from stable branches.
>
> The Kolla team did seem to want periodic builds for testing (to avoid
> having to build images in the test pipeline, IIUC). Do we still want to
> build those to tarballs.o.o? Does that even meet the needs of those test
> jobs?
>
>> * Project teams directly responsible for pushing stuff to end users

One thing to consider here is exactly the same issue that was moved to a
different thread, maybe even to a higher degree. Golang binaries will
have their dependencies built into them, so if one of the deps has a CVE,
the whole binary will have it. A higher degree, because while containers
can have a manifest of versions built into them, golang doesn't really
(versioning of deps in golang is actually quite a tricky thing). If we
want to ship these binaries, they will have the same dangers as images
pushed to dockerhub.

>> What do you think?
>>
>> Thanks,
>> Dims
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Active or passive role with our database layer

2017-05-23 Thread Mike Bayer



On 05/23/2017 03:16 PM, Edward Leafe wrote:

On May 23, 2017, at 1:43 PM, Jay Pipes  wrote:


[1] Witness the join constructs in Golang in Kubernetes as they work around 
etcd not being a relational data store:



Maybe it’s just me, but I found that Go code more understandable than some of 
the SQL we are using in the placement engine. :)

I assume that the SQL in a relational engine is faster than the same thing in 
code, but is that difference significant? For extremely large data sets I think 
that the database processing may be rate limiting, but is that the case here? 
Sometimes it seems that we are overly obsessed with optimizing data handling 
when the amount of data is relatively small. A few million records should be 
fast enough using just about anything.


When you write your app fresh and put some data into it, a few hundred 
rows? Not at all. Pull it all into memory and sort/filter all you want; 
SQL is too hard. Push it to production! Works great. Send the 
customer your bill.


6 months later.   Customer has 10K rows.   The tools their contractor 
wrote seem a little sticky.Not sure when that happened?


A year later.  Customer is at 300K rows, nowhere near "a few million" 
records.  Application regularly crashes when asked to search and filter 
results, because the Python interpreter uses a fair amount of memory for a 
result set, multiplied by the overhead of a Python object() / dict() per 
row == 100's / 1000's of megs of memory to have 300K objects in memory 
all at once.  Multiply by dozens of threads / processes handling 
concurrent requests; the Python interpreter rarely returns memory.  Then 
add the latency of fetching 300K rows over the wire and converting them 
to objects.  Concurrent requests pile up because they're slower; == more 
processes, == more memory.


New contractor is called in to rewrite the whole thing in MongoDB.   Now 
it's fast again!   Proceed to chapter 2, "So you decided to use 
MongoDB"   :)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] TC Report 21

2017-05-23 Thread Chris Dent


With the advent of Thierry's weekly status reports[^1] on the proposals
currently under review by the TC and the optionality of the weekly
TC meetings, this report becomes less about meeting minutes and more
about reporting on the things that crossed my TC radar that seemed
important and/or that seemed like they could do with more input.

This week has no TC meeting. The plan is that discussion will occur
either asynchronously in mailing list threads on the "openstack-dev"
list, in gerrit reviews in the governance project[^2], or, for
casual chats, on IRC in the #openstack-dev channel[^3].

[^1]: 
[^2]: 

[^3]: The concept of office hours is being introduced: 


# Pending Stuff

## The need to talk about postgreSQL

There's ongoing discussion about how to deal with the position of
postgreSQL in the attention of the community. There are deployments
that use it and the documentation mentions it, but the attention of
most developers and all tests is not upon it. It is upon MySQL (and
its variants) instead.

There's agreement that this needs to be dealt with, but the degree
of change is debated, if not hotly then at least verbosely. An
initial review was posted proposing we clear up the document and
indicate a path forward that recognized an existing MySQL
orientation:





I felt this was too wordy, too MySQL oriented, and left out an
important step: agitate with the board. It was easier to explain
this in an alternative version resulting in:





Meanwhile discussion had begun (and still continues) in an email
thread:





Observing all this, Monty noticed that there is a philosophical
chasm that must be bridged before we can truly resolve this issue,
so he started yet another thread:





The outcome of that thread and these resolutions is likely to have a
fairly significant impact on how we think about managing dependent
services in OpenStack. There's a lot to digest behind those links
but on the scale of "stuff the TC is doing that will have impact"
this is probably one of them.

## Draft Vision for the TC

The draft vision for the TC[^4] got feedback on the review, via
survey[^5] and at the forum[^6]. Effort is now in progress to
incorporate that feedback and create something that is easier to
comprehend and will make the actual vision more clear. One common
bit of feedback was that the document needs a preamble and other
structural cues so that people get what it is trying to do.
johnthetubaguy, dtroyer and I (cdent) are on the hook for doing this
next phase of work. Feel free to contact one of us (or leave a
comment on the review, or send some email) if you feel like you have
something to add.

[^4]: 
[^5]: 

[^6]:


# Dropped Stuff

_A section with reminders of things that were happening or were
going to happen then either stopped without resolution or never
started in the first place._

## OpenStack moving too fast and too slow

A thread was started on this[^7]. It got huge. While there were many
subtopics, one of the larger ones was the desire for there to be a
long term support release. There were a few different reactions to
this, inaccurately paraphrased as:

* That we have any stable releases at all in the upstream is pretty
  amazing, some global projects don't bother, it's usually a
  downstream problem.
* Great idea, please provide some of the resources required to make
  it happen, the OpenStack community is not an unlimited supply of
  free labor.

Then summit happened, people moved on to other things and there
wasn't much in the way of resolution. Is there anything we could or
should be doing here?

If having LTS is that much of a big deal, then it is something which
the Foundation Board of Directors must be convinced is a priority.
Early in this process I had suggested we at least write a resolution
that repeats (in nicer form) the second bullet point above. We could
do that. There's also a new plan to create a top 5 help wanted
list[^8]. Doing LTS is probably too big for that, but "stable branch
reviews" is not.

[^7]: 
[^8]: 

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent_

Re: [openstack-dev] [tc] revised Postgresql deprecation patch for governance

2017-05-23 Thread Tim Bell
Thanks. It’s more of a question of not leaving people high and dry when they 
have made a reasonable choice in the past based on the choices supported at the 
time.

Tim

On 23.05.17, 21:14, "Sean Dague"  wrote:

On 05/23/2017 02:35 PM, Tim Bell wrote:
> Is there a proposal where deployments who chose Postgres on good faith 
can find migration path to a MySQL based solution?

Yes, a migration tool exploration is action #2 in the current proposal.

Also, to be clear, we're not at the stage of removing anything at this
point. We're mostly just signaling to people where the nice paved road
is, and where the gravel road is. It's like the signs in the spring on
the road where frost heaves are (at least in the North East US).

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Missing answers on Pike release goals

2017-05-23 Thread Doug Hellmann
Excerpts from John Dickinson's message of 2017-05-23 09:12:29 -0700:
> 
> I can sympathize with the "do it tomorrow" turns into 6 weeks later...
>
> Part of the issue for me, personally, is that a governance patch
> does *not* feel simple or lightweight. I assume (in part based on
> experience) that any governance patch I propose will be closely
> examined and I will be forced to justify every corner case and
> comment made. Frankly, writing the patch that will stand up too a
> critical eye will take a long time. I'll do it tomorrow...

Maybe this does point to a need to move that information somewhere else.
It would ultimately be the same people reviewing it, though. I feel
strongly that we need the review step, but if folks think a different
repository would make a difference I'd be happy to set that up.

> Let's take the py3 goal as an example. Note: I am *not* wanting
> to get into a discussion about particular py3 issues or whatever.
> This is a discussion on the goals process, and I'm only using one
> of the current goals as an example of why I haven't proposed a
> governance patch for it.

> Swift does not support Py3. So clearly, there's work to be done
> to meet the goal. I've talked with others in the community about
> some of the blockers and concerns about porting to Py3. Several of
> the concerns are not trivial and will take substantial work to
> overcome[1]. A governance patch will need to list these issues, but
> I don't know if this is a complete list. If I propose a list that's
> incomplete, I feel like I'll be judged on the list I first proposed
> ("you finished the list, why doesn't it work?") instead of being a
> more dynamic process. I need to spend more time understanding what
> the issues are to make sure I have a complete list. I'll propose
> that patch tomorrow...

The patch does not necessarily need to list every detail. The purpose
of having a list of artifacts in the goal document is so that anyone
who wants to understand the state of the implementation can go look
there.  So, for example, if you're using a wiki page or an etherpad
to keep track of the details within the team, the patch only needs
to include a link to that. Some teams have done more, linking to
specs or changes that are already under review. Exactly what type
of artifact counts for a team is really up to that team.

The point is to show that each team is aware of the goal, and that
they've put together information in a place that someone outside
of the team can find it to either help, or at least follow progress.

> The outstanding work to get Py3 support in Swift is very large.
> Yet there are more goals being discussed now, and there's no way I
> can get Py3 support in Swift in Pike. Or Queens. Or probably Rocky
> either. That's not to say it isn't an important goal, but the scope
> combined with the TC deadline means that my governance patch for
> this goal (the tl;dr version is "not gonna happen") has to address
> this in sufficient detail to stand up to review by TC members who
> are on the PSF! I guess I'll start writing that tomorrow...

Some teams have a bit of a head start, but we expect many teams to
find the Python 3 work more than can be completed in a cycle. That's
perfectly OK. At the end of the cycle, we'll see where things stand,
and determine what the next steps are. That retrospective process
will be up to the teams, but I would expect it to factor into the
TC's decisions about what goals are adopted for Queens.

We don't want to have a big pile of unmet goals that all teams are
struggling to make progress on. That's why we have been limiting
ourselves to 1-2 goals per cycle.

> While I know that Py3 support is important, I also have to
> prioritize it against other important things. My employer has
> prioritized certain features because that directly impacts our
> ability to add customers (which directly affects my ability to get
> paid). Other employers in the community are doing the same for their
> employees. In the broader community, as clusters have grown over

There is undoubtedly tension between upstream and downstream needs
in some of these areas. We see that tension a lot with cross-project
initiatives. I don't have a good generic answer to the problem of
balancing community and employer needs, so I think the conversation
will have to happen case-by-case.

If we're finding that all of the contributors to a team are discouraged
from working on technical debt issues or other community goals in,
we'll need to address that. Uncovering that bit of information would
be an important outcome for the goals process, especially if it is
stated as directly as "no team member is being given time by their
employer to work on this community goal." If there is no response
from a team at all, though, we have no idea why that is the case.

If we know a team has issues tracking the goals due to a lack of
resources, then when the Board asks "how can we help," as they do
every time we have a joint meeting.

Re: [openstack-dev] [tc] Active or passive role with our database layer

2017-05-23 Thread Jay Pipes

On 05/23/2017 03:16 PM, Edward Leafe wrote:

On May 23, 2017, at 1:43 PM, Jay Pipes  wrote:


[1] Witness the join constructs in Golang in Kubernetes as they work around 
etcd not being a relational data store:


Maybe it’s just me, but I found that Go code more understandable than some of 
the SQL we are using in the placement engine. :)


Err, apples, oranges.

The Golang code is doing a single JOIN operation. The placement API is 
doing dozens of join operations, aggregate operations, and more.



I assume that the SQL in a relational engine is faster than the same thing in 
code, but is that difference significant? For extremely large data sets I think 
that the database processing may be rate limiting, but is that the case here? 
Sometimes it seems that we are overly obsessed with optimizing data handling 
when the amount of data is relatively small. A few million records should be 
fast enough using just about anything.


You are more than welcome to implement the placement API in etcd or 
Cassandra, Ed. :)


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Active or passive role with our database layer

2017-05-23 Thread Edward Leafe
On May 23, 2017, at 1:43 PM, Jay Pipes  wrote:

> [1] Witness the join constructs in Golang in Kubernetes as they work around 
> etcd not being a relational data store:


Maybe it’s just me, but I found that Go code more understandable than some of 
the SQL we are using in the placement engine. :)

I assume that the SQL in a relational engine is faster than the same thing in 
code, but is that difference significant? For extremely large data sets I think 
that the database processing may be rate limiting, but is that the case here? 
Sometimes it seems that we are overly obsessed with optimizing data handling 
when the amount of data is relatively small. A few million records should be 
fast enough using just about anything.

-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] revised Postgresql deprecation patch for governance

2017-05-23 Thread Sean Dague
On 05/23/2017 02:35 PM, Tim Bell wrote:
> Is there a proposal where deployments who chose Postgres on good faith can 
> find migration path to a MySQL based solution?

Yes, a migration tool exploration is action #2 in the current proposal.

Also, to be clear, we're not at the stage of removing anything at this
point. We're mostly just signaling to people where the nice paved road
is, and where the gravel road is. It's like the signs in the spring on
the road where frost heaves are (at least in the North East US).

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Supporting volume_type when booting from volume

2017-05-23 Thread John Griffith
On Tue, May 23, 2017 at 10:13 AM, Matt Riedemann 
wrote:

> On 5/23/2017 9:56 AM, Duncan Thomas wrote:
>
>> Is it entirely unreasonable to turn the question around and ask why,
>> given it is such a commonly requested feature, the Nova team are so
>> resistant to it?
>>
>
> Because it's technical debt for one thing. Adding more orchestration adds
> complexity, which adds bugs. Also, as noted in the linked devref on this,
> when nova proxies something via the compute API to another service's API,
> if that other service changes their API (like with nova's image proxy API
> to glance v1 for example, and needing to get to glance v2), then we have
> this weird situation with compatibility. Which is more technical debt.
> Microversions should make that less of an issue, but it's still there.
>
> It's also a slippery slope. Once you allow proxies and orchestration into
> part of the API, people use it as grounds for justifying doing more of it
> elsewhere, i.e. if we do this for volumes, when are we going to start
> seeing people asking for passing more detailed information about Neutron
> ports when creating a server?
>
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
I get the concern about adding more orchestration etc.; I'm not totally
convinced, only because it's adding another flag as opposed to new
functionality. But regardless, I get the argument and the slippery slope
after talking through it with Matt and Dan multiple times.

The disappointing part of this for me is that the main reason this comes
up (I believe) is not only because Cinder volumes are AWESOME, but
probably more accurately because all of the non-OpenStack public clouds
behave this way (or the big ones do, at least). Service providers using
OpenStack, as well as users consuming OpenStack, have voiced that they'd
like to have this same functionality/behavior, including selecting what
type of volume.

If it's just too much debt and risk of slippery slope type arguments on the
Nova side (and that's fair, after lengthy conversations with Nova folks I
get it), do we consider just orchestrating this from say OpenStack Client
completely?  The last resort (and it's an awful option) is orchestrate the
whole thing from Cinder.  We can certainly make calls to Nova and pass in
the volume using the semantics that are already accepted and in use.

John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Active or passive role with our database layer

2017-05-23 Thread Chris Dent

On Tue, 23 May 2017, Jay Pipes wrote:

Err, in my experience, having a *completely* dumb persistence layer -- i.e. 
one that tries to assuage the differences between, say, relational and 
non-relational stores -- is a recipe for disaster. The developer just ends up 
writing join constructs in that business layer instead of using a relational 
data store the way it is intended to be used. Same for aggregate operations. 
[1]


Now, if what you're referring to is "don't use vendor-specific extensions in 
your persistence layer", then yes, I agree with you.


If you've committed to doing an RDBMS then, yeah, stick with
relational, but dumb relational. Since that's where we are [3] in
OpenStack, we should go with that.

[3] Of course sometimes I'm sad that we made that commitment and
instead we had an abstract storage interface, an implementation
of which was stupid text files on disk, another which was generic
sqlalchemy, and another which was raw SQL extracted wholesale from
the mind of jaypipes, optimized for Drizzle 8.x. But then I'm often
sad about completely unrealistic things.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [go][all] Settling on gophercloud as the go based API library for all OpenStack projects

2017-05-23 Thread John Griffith
On Tue, May 23, 2017 at 12:48 PM, Davanum Srinivas 
wrote:

> John,
>
> I had heard this a few times at the Boston Summit, so I want to put this to bed :)
>
> -- Dims
>
> On Tue, May 23, 2017 at 2:43 PM, John Griffith 
> wrote:
> >
> >
> > On Tue, May 23, 2017 at 8:54 AM, Davanum Srinivas 
> wrote:
> >>
> >> Folks,
> >>
> >> This has come up several times in various conversations.
> >>
> >> Can we please stop activity on
> >> https://git.openstack.org/cgit/openstack/golang-client/ and just
> >> settle down on https://github.com/gophercloud/gophercloud ?
> >>
> >> This becomes important since new container-y projects like
> >> stackube/fuxi/kuryr etc can just pick one that is already working and
> >> not worry about switching later. This is also a NIH kind of behavior
> >> (at least from a casual observer from outside).
> >>
> >> Thanks,
> >> Dims
> >>
> >> --
> >> Davanum Srinivas :: https://twitter.com/dims
> >>
> >> 
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > Oh, my bad... I'm actually guilty of bringing this up (this morning).  I
> was
> > confused about the direction of this, I've been a happy GopherCloud user
> for
> > a couple years so I'm perfectly happy with this answer.  Thanks and sorry
> > for adding to the confusion.
> >
> > John
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Sleep with the fishes, confusing issue!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][heat] - making Neutron more friendly for orchestration

2017-05-23 Thread Zane Bitter

On 19/05/17 19:53, Kevin Benton wrote:

So making a subnet ID mandatory for a port creation and
a RouterInterface ID mandatory for a Floating IP creation are both
possible in Heat without Neutron changes. Presumably you haven't done
that because it's backwards-incompatible, but you would need to
implement the change anyway if the Neutron API was changed to require it.

It seems like Heat has a backwards-compatibility requirement for
supporting old templates that aren't explicit. That will be the real
blocker to actually making any of these changes, no? i.e. Neutron isn't
preventing Heat from being more strict, it's the legacy Heat modeling
that is preventing it.


We have a translation mechanism for resource properties (much improved 
in Pike - thanks prazumovsky!) that could in theory help us to make such 
a change (with or without a corresponding change in the Neutron API) 
without breaking existing users (although it would probably require a 
bunch of expensive API calls at inopportune times). That would likely be 
just as much of a pain to maintain as the workarounds we have now, so 
tbh we're likely to stick with reflecting the Neutron API directly, 
whatever it does.


I've long since chalked this one up to 'lessons learned'; if I keep 
harping on it, it's because I want to make sure that everyone really 
does learn the lessons.



(a) drop the requirement that the Network has to be connected to the

external network with the FloatingIPs with a RouterInterface prior to
creating the FloatingIP. IIUC only *some* Neutron backends require this.

This can produce difficult to debug situations when multiple routers
attached to different external networks are attached to different
subnets of the same network and the user associates a floating IP to the
wrong fixed IP of the instance. Right now the interface check will
prevent that, but if we remove it the floating IP would just sit in the
DOWN state.

If a backend supports floating IPs without router interfaces entirely,
it's likely making assumptions that prevent it from supporting
multi-router scenarios. A single fixed IP on a port can have multiple
floating IPs associated with it from different external networks. So the
only way to distinguish which floating IP to translate to is which
router the traffic is being directed to by the instance, which requires
router interfaces.
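
For reference, the association being described looks roughly like this
with openstacksdk (a hedged sketch; it assumes an existing Connection
object `conn`, and the UUIDs and address are placeholders):

    fip = conn.network.create_ip(
        floating_network_id='EXT_NET_UUID',  # which external network
        port_id='PORT_UUID',
        fixed_ip_address='10.0.0.5')         # which fixed IP on the port

The same fixed_ip_address could back another floating IP from a
different external network, which is why the router interface is needed
to disambiguate the traffic.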

Cheers

On Fri, May 19, 2017 at 3:29 PM, Zane Bitter  wrote:

On 19/05/17 17:03, Kevin Benton wrote:

I split this conversation off of the "Is the pendulum swinging
on PaaS
layers?" thread [1] to discuss some improvements we can make to
Neutron
to make orchestration easier.

There are some pain points that heat has when working with the
Neutron
API. I would like to get them converted into requests for
enhancements
in Neutron so the wider community is aware of them.

Starting with the port/subnet/network relationship - it's
important to
understand that IP addresses are not required on a port.

So knowing now that a Network is a layer-2 network segment
and a Subnet

is... effectively a glorified DHCP address pool

Yes, a Subnet controls IP address allocation as well as setting up
routing for routers, which is why routers reference subnets
instead of
networks (different routers can route for different subnets on
the same
network). It essentially dictates things related to L3
addressing and
provides information for L3 reachability.

But at the end of the day, I still can't create a Port until
a Subnet exists


This is only true if you want an IP address on the port. This sounds
silly for most use cases, but there are a non-trivial portion of NFV
workloads that do not want IP addresses at all so they create a
network
and just attach ports without creating any subnets.


Fair. A more precise statement of the problem would be that given a
template containing both a Port and a Subnet that it will be
attached to, there is a specific order in which those need to be
created that is _not_ reflected in the data flow between them.

I still don't know what Subnet a Port will be attached to
(unless the

user specifies it explicitly using the --fixed-ip option...
regardless
of whether they actually specify a fixed IP),

So what would you like Neutron to do differently here? Always
force a
user to pick which subnet they want an allocation from


That would work.

if there are
multiple?


Ideally even if there aren't.
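
(Being explicit is already possible today; a minimal openstacksdk
sketch, with placeholder UUIDs, of requesting an allocation from a
specific subnet without naming a fixed IP address:

    import openstack

    conn = openstack.connect(cloud='mycloud')
    port = conn.network.create_port(
        network_id='NET_UUID',
        fixed_ips=[{'subnet_id': 'SUBNET_UUID'}])

The question here is whether Neutron should *require* that choice.)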

If so, can't you just force that explicitness in Heat?


I think the answer here is exactly the same as for Neutron: yes, we
totally could have if we'd realised that it 

Re: [openstack-dev] [go][all] Settling on gophercloud as the go based API library for all OpenStack projects

2017-05-23 Thread Davanum Srinivas
John,

I had heard this a few times at the Boston Summit, so I want to put this to bed :)

-- Dims

On Tue, May 23, 2017 at 2:43 PM, John Griffith  wrote:
>
>
> On Tue, May 23, 2017 at 8:54 AM, Davanum Srinivas  wrote:
>>
>> Folks,
>>
>> This has come up several times in various conversations.
>>
>> Can we please stop activity on
>> https://git.openstack.org/cgit/openstack/golang-client/ and just
>> settle down on https://github.com/gophercloud/gophercloud ?
>>
>> This becomes important since new container-y projects like
>> stackube/fuxi/kuryr etc can just pick one that is already working and
>> not worry about switching later. This is also a NIH kind of behavior
>> (at least from a casual observer from outside).
>>
>> Thanks,
>> Dims
>>
>> --
>> Davanum Srinivas :: https://twitter.com/dims
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> Oh, my bad... I'm actually guilty of bringing this up (this morning).  I was
> confused about the direction of this, I've been a happy GopherCloud user for
> a couple years so I'm perfectly happy with this answer.  Thanks and sorry
> for adding to the confusion.
>
> John
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Active or passive role with our database layer

2017-05-23 Thread Jay Pipes

On 05/23/2017 07:23 AM, Chris Dent wrote:

That "higher dev cost" is one of my objections to the 'active'
approach but it is another implication that worries me more. If we
limit deployer architecture choices at the persistence layer then it
seems very likely that we will be tempted to build more and more
power and control into the persistence layer rather than in the
so-called "business" layer. In my experience this is a recipe for
ossification. The persistence layer needs to be dumb and
replaceable.


Err, in my experience, having a *completely* dumb persistence layer -- 
i.e. one that tries to assuage the differences between, say, relational 
and non-relational stores -- is a recipe for disaster. The developer 
just ends up writing join constructs in that business layer instead of 
using a relational data store the way it is intended to be used. Same 
for aggregate operations. [1]


Now, if what you're referring to is "don't use vendor-specific 
extensions in your persistence layer", then yes, I agree with you.


Best,
-jay

[1] Witness the join constructs in Golang in Kubernetes as they work 
around etcd not being a relational data store:


https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/deployment/deployment_controller.go#L528-L556

Instead of a single SQL statement:

SELECT p.* FROM pods AS p
JOIN deployments AS d
ON p.deployment_id = d.id
WHERE d.name = $name;

the deployments controller code has to read every Pod message from etcd 
and loop through each Pod message, returning a list of Pods that match 
the deployment searched for.
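
In Python for brevity (a hypothetical sketch mirroring the schema of the
SQL above), that client-side "join" has roughly this shape:

    def pods_for_deployment(all_pods, all_deployments, name):
        # Fetch everything, then correlate in application code.
        dep_ids = {d["id"] for d in all_deployments if d["name"] == name}
        return [p for p in all_pods if p["deployment_id"] in dep_ids]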


Similarly, Kubenetes API does not support any aggregate (SUM, GROUP BY, 
etc) functionality. Instead, clients are required to perform these kinds 
of calculations/operations in memory. This is because etcd, being an 
(awesome) key/value store is not designed for aggregate operations (just 
as Cassandra or CockroachDB do not allow most aggregate operations).


My point here is not to denigrate Kubernetes. Far from it. They (to 
date) have a relatively shallow relational schema and doing join and 
index maintenance [2] operations in client-side code has so far been a 
cost that the project has been OK carrying. The point I'm trying to make 
is that the choice of data store semantics (relational or not, columnar 
or not, eventually-consistent or not, etc) *does make a difference* to 
the architecture of a project, its deployment and the amount of code 
that the project needs to keep to properly handle its data schema. 
There's no way -- in my experience -- to make a "persistence layer" that 
papers over these differences and ends up being useful.


[2] In Kubernetes, all services are required to keep all relevant data 
in memory:


https://github.com/kubernetes/community/blob/master/contributors/design-proposals/principles.md

This means that code that maintains a bunch of in-memory indexes of 
various data objects ends up being placed into every component, Here's 
an example of this in the kubelet (the equivalent-ish of the 
nova-compute daemon) pod manager, keeping an index of pods and mirrored 
pods in memory:


https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/pod/pod_manager.go#L104-L114

https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/pod/pod_manager.go#L159-L181

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [go][all] Settling on gophercloud as the go based API library for all OpenStack projects

2017-05-23 Thread John Griffith
On Tue, May 23, 2017 at 8:54 AM, Davanum Srinivas  wrote:

> Folks,
>
> This has come up several times in various conversations.
>
> Can we please stop activity on
> https://git.openstack.org/cgit/openstack/golang-client/ and just
> settle down on https://github.com/gophercloud/gophercloud ?
>
> This becomes important since new container-y projects like
> stackube/fuxi/kuryr etc can just pick one that is already working and
> not worry about switching later. This is also a NIH kind of behavior
> (at least from a casual observer from outside).
>
> Thanks,
> Dims
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Oh, my bad... I'm actually guilty of bringing this up (this morning). I
was confused about the direction of this; I've been a happy GopherCloud
user for a couple of years, so I'm perfectly happy with this answer.
Thanks, and sorry for adding to the confusion.

John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] revised Postgresql deprecation patch for governance

2017-05-23 Thread Tim Bell
Is there a proposal where deployments who chose Postgres on good faith can find 
migration path to a MySQL based solution?

Tim

On 23.05.17, 18:35, "Octave J. Orgeron"  wrote:

As OpenStack has evolved and grown, we are ending up with more and more 
MySQL-isms in the code. I'd love to see OpenStack support every database 
out there, but that is becoming more and more difficult. I've tried to 
get OpenStack to work with other databases like Oracle DB, MongoDB, 
TimesTen, NoSQL, and I can tell you that first hand it's not doable 
without making some significant changes. Some services would be easy to 
make more database agnostic, but most would require a lot of reworking. 
I think the pragmatic thing is to do is focus on supporting the MySQL 
dialect with the different engines and clustering technologies that have 
emerged. oslo_db is a great abstraction layer.  We should continue to 
build upon that and make sure that every OpenStack service uses it 
end-to-end. I've already seen plenty of cases where services like 
Barbican and Murano are not using it. I've also seen plenty of use cases 
where core services are using the older methods of connecting to the 
database and re-inventing the wheel to deal with things like retries. 
The more we use oslo_db and make sure that people are consistent with 
its use and best practices, the better off we'll be in the long run.
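
For what it's worth, a hedged sketch of the oslo_db patterns referred
to here (the shared engine facade plus the built-in retry decorator,
instead of hand-rolled connection/retry code; 'Instance' is a
placeholder model and details vary by release):

    from oslo_db import api as db_api
    from oslo_db.sqlalchemy import enginefacade

    context_manager = enginefacade.transaction_context()
    context_manager.configure(
        connection='mysql+pymysql://user:pass@dbhost/nova')

    @db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
    def instance_get(context, instance_id):
        # Reader/writer split and deadlock retries come from oslo_db,
        # not from per-service boilerplate.
        with context_manager.reader.using(context) as session:
            return session.query(Instance).filter_by(
                id=instance_id).one()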

On the topic of doing live upgrades. I think it's a "nice to have" 
feature, but again we need a consistent framework that all services will 
follow. It's already complicated enough with how different services deal 
with parallelism and locking. So if we are going to go down this path 
across even the core services, we need to have a solid solution and 
framework. Otherwise, we'll end up with a hodgepodge of maturity levels 
between services. The expectation from operators is that if you say you 
can do live upgrades, they will expect that to be the case across all of 
OpenStack and not a buffet style feature. We would also have to take 
into consideration larger shops that have more distributed and 
scaled-out control planes. So we need be careful on this as it will have 
a wide impact on development, testing, and operating.

Octave


On 5/23/2017 6:00 AM, Sean Dague wrote:
> On 05/22/2017 11:26 PM, Matt Riedemann wrote:
>> On 5/22/2017 10:58 AM, Sean Dague wrote:
>>> I think these are actually compatible concerns. The current proposal to
>>> me actually tries to address A1 & B1, with a hint about why A2 is
>>> valuable and we would want to do that.
>>>
>>> It feels like there would be a valuable follow on in which A2 & B2 were
>>> addressed which is basically "progressive enhancements can be allowed to
>>> only work with MySQL based backends". Which is the bit that Monty has
>>> been pushing for in other threads.
>>>
>>> This feels like what a Tier 2 support looks like. A basic SQLA and pray
>>> so that if you live behind SQLA you are probably fine (though not
>>> tested), and then test and advanced feature roll out on a single
>>> platform. Any of that work might port to other platforms over time, but
>>> we don't want to make that table stakes for enhancements.
>> I think this is reasonable and is what I've been hoping for as a result
>> of the feedback on this.
>>
>> I think it's totally fine to say tier 1 backends get shiny new features.
>> I mean, hell, compare the libvirt driver in nova to all other virt
>> drivers in nova. New features are written for the libvirt driver and we
>> have to strong-arm them into other drivers for a compatibility story.
>>
>> I think we should turn on postgresql as a backend in one of the CI jobs,
>> as I've noted in the governance change - it could be the nova-next
>> non-voting job which only runs on nova, but we should have something
>> testing this as long as it's around, especially given how easy it is to
>> turn this on in upstream CI (it's flipping a devstack variable).
> Postgresql support shouldn't be in devstack. If we're taking a tier 2
> approach, someone needs to carve out database plugins from devstack and
> pg would be one (as could be galera, etc).
>
> This historical artifact that pg was maintained in devstack, but much
> more widely used backends were not, is part of the issue.
>
> It would also be a good unit test case as to whether there are pg
> focused folks around out there willing to do this basic devstack plugin
> / job setup work.
>
>   -Sean
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [publiccloud-wg] Tomorrows meeting PublicCloudWorkingGroup

2017-05-23 Thread Tobias Rydberg

Hi everyone,

First of all, really fun to see the interest for the group and the forum 
sessions we moderated in Boston. I hope that we can keep up that spirit 
and looking forward to a lot of participants in the bi-weekly meetings 
for this cycle.


So, reminder for tomorrows meeting for the PublicCloudWorkingGroup.
May 24th - 1400 UTC in IRC channel #openstack-meeting-3

Etherpad: https://etherpad.openstack.org/p/publiccloud-wg

Agenda
1. Recap Boston Summit
2. Goals for Sydney Summit
3. Other

Have a great day and see you all tomorrow!

Tobias
tob...@citynetwork.se




smime.p7s
Description: S/MIME Cryptographic Signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [devstack] [infra] heat api services with uwsgi

2017-05-23 Thread Zane Bitter

On 23/05/17 01:23, Rabi Mishra wrote:

Hi All,

As per the updated community goal[1] for api deployment with wsgi,
we have to transition to uwsgi rather than mod_wsgi at the gate. It
also seems mod_wsgi support will be removed from devstack in Queens.

I've been working on a patch[2] for the transition and encountered a few
issues as below.

1. We encode stack_identifier (along with the path
separator) in heatclient. So, requests with encoded path separators are
dropped by apache (with a 404) if we don't have the 'AllowEncodedSlashes
On' directive in the site/vhost config[3].


We'd probably want 'AllowEncodedSlashes NoDecode'.


Setting this for mod_proxy_uwsgi[4] seems to work on Fedora but not
Ubuntu. From my testing, it seems it has to be set in 000-default.conf
for Ubuntu.

Rather than messing with the devstack plugin code, I went ahead and proposed
a change to not encode the path separators in heatclient[5] (anyway,
they would be decoded by apache with the 'AllowEncodedSlashes On'
directive before being consumed by the service), which seems to have fixed
those 404s.


Pasting my comment from the patch:

One potential problem with this is that you can probably craft a stack 
name in such a way that heatclient ends up calling a real but unexpected 
URL. (I don't think this is a new problem, but it's likely the problem 
that the default value of AllowEncodedSlashes is designed to fix, and 
we're circumventing it here.)


It seems to me the ideal would be to force '/'s to be encoded when they 
occur in the stack and resource names. Clearly they should never have 
been encoded when they're actual path separators (e.g. between the stack 
name and stack ID).
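
(As an illustration, not from the original message: a minimal Python 3 
sketch of that "ideal" encoding, where only the name segments are escaped 
and the real path separators are left alone. The stack name and ID below 
are made-up examples.)

    import urllib.parse

    stack_name = 'my/stack'   # hypothetical name containing a slash
    stack_id = '1234-abcd'    # hypothetical stack ID
    # quote() with safe='' escapes the '/' inside the name segment...
    encoded_name = urllib.parse.quote(stack_name, safe='')
    # ...while the actual path separators stay as literal slashes:
    path = '/stacks/%s/%s' % (encoded_name, stack_id)
    print(path)  # /stacks/my%2Fstack/1234-abcd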


It'd be even better if Apache were set to "AllowEncodedSlashes NoDecode" 
and we could then decode stack/resource names that include slashes after 
splitting at the path separators, so that those would actually work. I 
don't think the routing framework can handle that though.


For that reason I believe we disallow slashes in stack/resource names. 
So with "AllowEncodedSlashes Off" we'd get the right behaviour (which is 
to always 404 when the stack/resource name contains a slash).



Is there a generic way to set the above directive (when using
apache+mod_proxy_uwsgi) in the devstack plugin?

2.  With the above, most of the tests seem to work fine other than the
ones using waitcondition, where we signal back from the vm to the api


Not related to the problem below, but I believe that when signalling 
through the heat-cfn-api we use an arn to identify the stack, and I 
suspect that slashes in the arn are escaped at or near the source. So we 
may have no choice but to find a way to turn on AllowEncodedSlashes. Or 
is it in the query string part anyway?



services. I could see "curl: (7) Failed to connect to 10.0.1.78 port
80: No route to host" in the vm console logs[6].

It could connect to heat api services using ports 8004/8000 without this
patch, but I'm not sure why it can't reach port 80. I tried testing this
locally and didn't see the issue though.

Is this due to some infra settings or something else?


[1] https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html

[2] https://review.openstack.org/#/c/462216/

[3]
https://github.com/openstack/heat/blob/master/devstack/files/apache-heat-api.template#L9

[4]
http://logs.openstack.org/16/462216/6/check/gate-heat-dsvm-functional-convg-mysql-lbaasv2-non-apache-ubuntu-xenial/fbd06d6/logs/apache_config/heat-wsgi-api.conf.txt.gz

[5] https://review.openstack.org/#/c/463510/

[6]
http://logs.openstack.org/16/462216/11/check/gate-heat-dsvm-functional-convg-mysql-lbaasv2-non-apache-ubuntu-xenial/e7d9e90/console.html#_2017-05-20_07_04_30_718021


--
Regards,
Rabi Mishra



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Active or passive role with our database layer

2017-05-23 Thread Mike Bayer



On 05/23/2017 01:10 PM, Octave J. Orgeron wrote:

Comments below..

On 5/21/2017 1:38 PM, Monty Taylor wrote:


For example: An HA strategy using slave promotion and a VIP that 
points at the current write master paired with an application 
incorrectly configured to do such a thing can lead to writes to the 
wrong host after a failover event and an application that seems to be 
running fine until the data turns up weird after a while.


This is definitely a more complicated area that becomes more and more 
specific to the clustering technology being used. Galera vs. MySQL 
Cluster is a good example. Galera has an active/passive architecture 
where the above issues become a concern for sure. 


This is not my understanding; Galera is multi-master, and if you lose a 
node, you don't lose any committed transactions; the writesets are 
validated as acceptable by, and pushed out to, all nodes before your 
commit succeeds. There's an option to make it wait until all those 
writesets are fully written to disk as well, but even with that option 
flipped off, if you COMMIT to one node and then that node explodes, you 
lose nothing: your writesets have been verified as acceptable by all 
the other nodes.


active/active is the second bullet point on the main homepage: 
http://galeracluster.com/products/





In the "active" approach, we still document expectations, but we also 
validate them. If they are not what we expect but can be changed at 
runtime, we change them overriding conflicting environmental config, 
and if we can't, we hard-stop indicating an unsuitable environment. 
Rather than providing helper tools, we perform the steps needed 
ourselves, in the order they need to be performed, ensuring that they 
are done in the manner in which they need to be done.


This might be a trickier situation, especially if the database(s) are in 
a separate or dedicated environment that the OpenStack service processes 
don't have access to. Of course for SQL commands, this isn't a problem. 
But changing the configuration files and restarting the database may be 
a harder thing to expect.


nevertheless the HA setup within tripleo does do this, currently using 
Pacemaker and resource agents. This is within the scope of at least 
parts of OpenStack.
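
(As an illustration, not from the thread: a minimal sketch of the 
"active" validation described above, assuming a MySQL backend reachable 
through SQLAlchemy. The connection URL and the required sql_mode value 
are made-up examples.)

    import sys

    import sqlalchemy

    # Refuse to start if the server's sql_mode lacks a setting the
    # application depends on (hard-stop on an unsuitable environment).
    engine = sqlalchemy.create_engine(
        'mysql+pymysql://nova:secret@dbhost/nova')  # made-up URL
    with engine.connect() as conn:
        row = conn.execute(
            sqlalchemy.text("SHOW VARIABLES LIKE 'sql_mode'")).fetchone()
    if 'STRICT_TRANS_TABLES' not in row[1]:
        sys.exit("unsuitable environment: sql_mode must include "
                 "STRICT_TRANS_TABLES")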






In either approach the OpenStack service has to be able to talk to 
both old and new versions of the schema. And in either approach we 
need to make sure to limit the schema change operations to the set 
that can be accomplished in an online fashion. We also have to be 
careful to not start writing values to new columns until all of the 
nodes have been updated, because the replication stream can't 
replicate the new column value to nodes that don't have the new column.


This is another area where something like MySQL Cluster (NDB) would 
operate differently because it's an active/active architecture. So 
limiting the number of online changes while a table is locked across the 
cluster would be very important. There is also the timeouts for the 
applications to consider, something that could be abstracted again with 
oslo.db.


So the DDL we do on Galera, to confirm but also clarify Monty's point, 
is under the realm of "total order isolation", which means it's going to 
hold up the whole cluster while DDL is applied to all nodes. Monty says 
this disqualifies it as an "online upgrade", because if you emitted DDL 
that had to write default values into a million rows then your whole 
cluster would temporarily have to wait for that to happen; we handle 
that by making sure we don't do migrations with that kind of data 
requirement. And while yes, the DB has to wait for a schema change to 
apply, the waits are at least very short (in theory). For practical 
purposes, it is *mostly* an "online" style of migration, because all the 
services that talk to the database can keep on talking to the database 
without being stopped, upgraded to a new software version, and restarted, 
which IMO is what's really hard about "online" upgrades. It does mean 
that services will just have a little more latency while operations 
proceed. Maybe we need a new term called "quasi-online" or something 
like that.
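
(As an illustration, not from the thread: a minimal Alembic sketch of 
the kind of migration that stays cheap under total order isolation - a 
NULLable column with no server default, so no existing rows have to be 
rewritten. The table and column names are made up.)

    from alembic import op
    import sqlalchemy as sa

    def upgrade():
        # No server_default and nullable=True: this is a metadata-only
        # change, so the cluster-wide DDL pause stays short.
        op.add_column('instances',
                      sa.Column('new_flag', sa.Boolean(), nullable=True))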


Facebook has released a Python version of their "online" schema 
migration tool for MySQL which does the full blown "create a new, blank 
table" approach, e.g. which contains the newer version of the schema, so 
that nothing at all stops or slows down at all.  And then to manage 
between the two tables while everything is running it also makes a 
"change capture" table to keep track of what's going on, and then to 
wire it all together it uses...triggers! 
https://github.com/facebookincubator/OnlineSchemaChange/wiki/How-OSC-works. 
  Crazy Facebook kids.  How we know that "make two more tables and wire 
it all together with new triggers" in fact is more performant than just, 
"add a column to the table", I'm not sure how/when they make that 
determination.   I don't see a

Re: [openstack-dev] [TripleO] A proposal for hackathon to reduce deploy time of TripleO

2017-05-23 Thread Emilien Macchi
On Tue, May 23, 2017 at 9:40 AM, Emilien Macchi  wrote:
> On Tue, May 23, 2017 at 6:47 AM, Sagi Shnaidman  wrote:
>> Hi, all
>>
>> I'd like to propose an idea to hold a one- or two-day hackathon in the TripleO
>> project with the main goal of reducing the deployment time of TripleO.
>>
>> - How could it be arranged?
>>
>> We can arrange a separate IRC channel and Bluejeans video conference session
>> for hackathon in these days to create a "presence" feeling.
>
> +1 for IRC. We already have #openstack-sprint, that we could re-use.
> Also +1 for video conference, to get face to face interactions,
> promptly and unscheduled.
>
>> - How to participate and contribute?
>>
>> We'll have a few responsibility fields like tripleo-quickstart, containers,
>> storage, HA, baremetal, etc - the exact list should be ready before the
>> hackathon so that everybody can assign themselves to one of these "teams". It's good
>> to have somebody on each team be a stakeholder, responsible for organization
>> and tasks.
>
> Before running the sprint, we should first track bugs / blueprints
> related to deployment speed.
> Not everyone in our team understands why some parts of deployments
> take time, so we need to make it visible so everyone can know how they
> can help during the sprint.
>
> Maybe we could create a Launchpad tag "deployment-time" to track bugs
> related to it. We should also make prioritization so we can work on
> the most critical ones first.

Proposed here: https://review.openstack.org/467335
Discussion can happen in Gerrit for this one.

> I like the idea of breaking down the skills into small groups:
>
> - High Availability: deployment & runtime of Pacemaker optimization
> - Puppet: anything related to the steps (a bit more general but only a
> few of us have expertise on it, we could improve it).
> - Heat: work with the Heat team if we have some pending bugs about slowness.
> - Baremetal: ironic / workflows
> - tripleo-quickstart: tasks that can be improved / optimized
>
> This is a proposal ^ feel free to (comment,add|remove) anything.
>
>
>> - What is the goal?
>>
>> The goal of this hackathon is to reduce the deployment time of TripleO as much as
>> possible.
>>
>> For example part of CI team takes a task to reduce quickstart tasks time. It
>> includes statistics collection, profiling and detection of places to
>> optimize. After this tasks are created, patches are tested and submitted.
>>
>> The prizes will be presented to the teams which saved the most time :)
>>
>> What do you think?
>
> Excellent idea, thanks Sagi for proposing it.
>
> Another thought: before doing the sprint, we might want to make sure
> our tripleo-ci is in stable shape (which is not the case right now, we
> have 4 alerts and one of them affects ovb-ha)...
>
>> Thanks
>> --
>> Best regards
>> Sagi Shnaidman
>
>
>
> --
> Emilien Macchi



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible][security] Rename openstack-ansible-security role?

2017-05-23 Thread Major Hayden

On 05/17/2017 12:25 PM, Major Hayden wrote:
> So my questions are:
> 
>   1) Should the openstack-ansible-security role be
>  renamed to alleviate confusion?
> 
>   2) If it should be renamed, what's your suggestion?

Thanks for all of the feedback!  Everyone seems to agree that a rename would be 
helpful to reduce confusion.

Here are the suggested names (in no particular order):

  - ansible-host-security
  - ansible-security
  - ansible-hardening
  - linux-ansible-security
  - ansible-host-hardening
  - ansible-server-security

I'm a sucker for short names, and 'ansible-hardening' is pretty brief. It also 
explains what the role does: Ansible that does hardening.  Also, a quick check 
of Google and GitHub doesn't come up with any matches.

I'll see if we can move forward with 'ansible-hardening' and keep everyone 
updated! :)

--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Active or passive role with our database layer

2017-05-23 Thread Octave J. Orgeron

Comments below..

On 5/21/2017 1:38 PM, Monty Taylor wrote:

Hi all!

As the discussion around PostgreSQL has progressed, it has become clear 
to me that there is a decently deep philosophical question on which we 
do not currently share either definition or agreement. I believe that 
the lack of clarity on this point is one of the things that makes the 
PostgreSQL conversation difficult.


I believe the question is between these two things:

* Should OpenStack assume the existence of an external database 
service that it treats as a black box on the other side of a 
connection string?


* Should OpenStack take an active and/or opinionated role in managing 
the database service?


A potentially obvious question about that (asked by Mike Bayer in a 
different thread) is: "what do you mean by managing?"


What I mean by managing is doing all of the things you can do related 
to database operational controls short of installing the software, 
writing the basic db config files to disk and stopping and starting 
the services. It means being much more prescriptive about what types 
of config we support, validating config settings that cannot be 
overridden at runtime and refusing to operate if they are unworkable.


I think it's helpful and important for us to have automation tooling 
like tripleo, puppet, etc. that can stand up a MySQL database. But we 
also have to realize that as shops mature, they will deploy more 
complicated database topologies, clustered configurations, and 
replication scenarios. So I think we shouldn't go overboard with being 
prescriptive. We also have to realize that in the enterprise space, 
databases are usually deployed and managed by a separate database team, 
which means less control over that layer. So we shouldn't force people 
into this model. We should provide best practice documentation, examples 
(tripleo, puppet, ansible, etc.), and leave it up to the operator.




Why would we want to be 'more active'? When managing and tuning 
databases, there are some things that are driven by the environment 
and some things that are driven by the application.


Things that are driven by the environment include things like the 
amount of RAM actually available, whether or not the machines running 
the database are dedicated or shared, firewall settings, selinux 
settings and what versions of software are available.


This is a good example of an area where we should focus on documenting 
best practices and leave implementation to the operator. Guidelines 
around CPU, memory, security settings, tunables, etc. are what's needed 
here. Today, there isn't really any guidance or best practice on even 
sizing the database(s) for a given deployment size.




Things that are driven by the application are things like character 
set and collation, schema design, data types, schema upgrade and HA 
strategies.


These are things that we can have a bit more control or direction on.



One might argue that HA strategies are an operator concern, but in 
reality the set of workable HA strategies is tightly constrained by 
how the application works, and pairing an application expecting 
one HA strategy with a deployment implementing a different one can 
have negative results ranging from unexpected downtime to data 
corruption.


For example: An HA strategy using slave promotion and a VIP that 
points at the current write master paired with an application 
incorrectly configured to do such a thing can lead to writes to the 
wrong host after a failover event and an application that seems to be 
running fine until the data turns up weird after a while.


This is definitely a more complicated area that becomes more and more 
specific to the clustering technology being used. Galera vs. MySQL 
Cluster is a good example. Galera has an active/passive architecture 
where the above issues become a concern for sure. MySQL Cluster (NDB), 
on the other hand, is an active/active architecture, so losing a node 
only affects uncommitted transactions, which could easily be addressed 
with a retry. These topologies will become more complicated as people 
start looking at cross-regional replication and DR.




For the areas in which the characteristics of the database are tied 
closely to the application behavior, there is a constrained set of 
valid choices at the database level. Sometimes that constrained set 
only has one member.


The approach to those is what I'm talking about when I ask the 
question about "external" or "active".


In the "external" approach, we document the expectations and then 
write the code assuming that the database is set up appropriately. We 
may provide some helper tools, such as 'nova-manage db sync' and 
documentation on the sequence of steps the operator should take.


In the "active" approach, we still document expectations, but we also 
validate them. If they are not what we expect but can be changed at 
runtime, we change them overriding conflicting environmental config, 
and if we can't, 

Re: [openstack-dev] [tc] revised Postgresql deprecation patch for governance

2017-05-23 Thread Octave J. Orgeron
As OpenStack has evolved and grown, we are ending up with more and more 
MySQL-isms in the code. I'd love to see OpenStack support every database 
out there, but that is becoming more and more difficult. I've tried to 
get OpenStack to work with other databases like Oracle DB, MongoDB, 
TimesTen, and NoSQL, and I can tell you first hand that it's not doable 
without making some significant changes. Some services would be easy to 
make more database agnostic, but most would require a lot of reworking. 
I think the pragmatic thing to do is focus on supporting the MySQL 
dialect with the different engines and clustering technologies that have 
emerged.

oslo_db is a great abstraction layer. We should continue to build upon 
it and make sure that every OpenStack service uses it end-to-end. I've 
already seen plenty of cases where services like Barbican and Murano are 
not using it. I've also seen plenty of use cases where core services are 
using the older methods of connecting to the database and re-inventing 
the wheel to deal with things like retries. The more we use oslo_db and 
make sure that people are consistent with its use and best practices, 
the better off we'll be in the long run.
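
(As an illustration, not from the thread: a minimal sketch of what using 
oslo_db end-to-end can look like, letting the library handle deadlock 
retries instead of re-inventing them. The table, column, and context 
usage are made-up examples; check the decorator options against the 
oslo.db version you run.)

    import sqlalchemy as sa
    from oslo_db import api as db_api
    from oslo_db.sqlalchemy import enginefacade

    @db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
    @enginefacade.writer
    def set_flag(context, instance_id, value):
        # The session comes from enginefacade; deadlocks are retried by
        # wrap_db_retry rather than by hand-rolled retry loops.
        context.session.execute(
            sa.text('UPDATE instances SET new_flag = :v WHERE id = :i'),
            {'v': value, 'i': instance_id})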


On the topic of doing live upgrades: I think it's a "nice to have" 
feature, but again we need a consistent framework that all services will 
follow. It's already complicated enough with how different services deal 
with parallelism and locking. So if we are going to go down this path 
across even the core services, we need a solid solution and framework. 
Otherwise, we'll end up with a hodgepodge of maturity levels between 
services. If you say you can do live upgrades, operators will expect 
that to be the case across all of OpenStack, not a buffet-style feature. 
We would also have to take into consideration larger shops that have 
more distributed and scaled-out control planes. So we need to be careful 
here, as it will have a wide impact on development, testing, and 
operation.


Octave


On 5/23/2017 6:00 AM, Sean Dague wrote:

On 05/22/2017 11:26 PM, Matt Riedemann wrote:

On 5/22/2017 10:58 AM, Sean Dague wrote:

I think these are actually compatible concerns. The current proposal to
me actually tries to address A1 & B1, with a hint about why A2 is
valuable and we would want to do that.

It feels like there would be a valuable follow on in which A2 & B2 were
addressed which is basically "progressive enhancements can be allowed to
only work with MySQL based backends". Which is the bit that Monty has
been pushing for in other threads.

This feels like what a Tier 2 support looks like. A basic SQLA and pray
so that if you live behind SQLA you are probably fine (though not
tested), and then test and advanced feature roll out on a single
platform. Any of that work might port to other platforms over time, but
we don't want to make that table stakes for enhancements.

I think this is reasonable and is what I've been hoping for as a result
of the feedback on this.

I think it's totally fine to say tier 1 backends get shiny new features.
I mean, hell, compare the libvirt driver in nova to all other virt
drivers in nova. New features are written for the libvirt driver and we
have to strong-arm them into other drivers for a compatibility story.

I think we should turn on postgresql as a backend in one of the CI jobs,
as I've noted in the governance change - it could be the nova-next
non-voting job which only runs on nova, but we should have something
testing this as long as it's around, especially given how easy it is to
turn this on in upstream CI (it's flipping a devstack variable).

Postgresql support shouldn't be in devstack. If we're taking a tier 2
approach, someone needs to carve out database plugins from devstack and
pg would be one (as could be galera, etc).

This historical artifact that pg was maintained in devstack, but much
more widely used backends were not, is part of the issue.

It would also be a good unit test case as to whether there are pg
focused folks around out there willing to do this basic devstack plugin
/ job setup work.

-Sean



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [go][all] Settling on gophercloud as the go based API library for all OpenStack projects

2017-05-23 Thread Melvin Hillsman
+1, as again it can help us as an entire community (dev/non-dev)
galvanize around one tool

On Tue, May 23, 2017 at 3:32 PM, Sean McGinnis 
wrote:

> On Tue, May 23, 2017 at 10:54:13AM -0400, Davanum Srinivas wrote:
> > Folks,
> >
> > This has come up several times in various conversations.
> >
> > Can we please stop activity on
> > https://git.openstack.org/cgit/openstack/golang-client/ and just
> > settle down on https://github.com/gophercloud/gophercloud ?
> >
>
> +1
>
> I think we are all better off if we can focus our efforts in one place.
>
> > This becomes important since new container-y projects like
> > stackube/fuxi/kuryr etc can just pick one that is already working and
> > not worry about switching later. This is also a NIH kind of behavior
> > (at least from a casual observer from outside).
> >
> > Thanks,
> > Dims
> >
> > --
> > Davanum Srinivas :: https://twitter.com/dims
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
-- 
Kind regards,

Melvin Hillsman
mrhills...@gmail.com
mobile: (832) 264-2646

Learner | Ideation | Belief | Responsibility | Command
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Supporting volume_type when booting from volume

2017-05-23 Thread Matt Riedemann

On 5/23/2017 9:56 AM, Duncan Thomas wrote:
Is it entirely unreasonable to turn the question around and ask why, 
given it is such a commonly requested feature, the Nova team are so 
resistant to it?


Because it's technical debt for one thing. Adding more orchestration 
adds complexity, which adds bugs. Also, as noted in the linked devref on 
this, when nova proxies something via the compute API to another 
service's API, if that other service changes their API (like with nova's 
image proxy API to glance v1 for example, and needing to get to glance 
v2), then we have this weird situation with compatibility. Which is more 
technical debt. Microversions should make that less of an issue, but 
it's still there.


It's also a slippery slope. Once you allow proxies and orchestration 
into part of the API, people use it as grounds for justifying doing more 
of it elsewhere, i.e. if we do this for volumes, when are we going to 
start seeing people asking for passing more detailed information about 
Neutron ports when creating a server?


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Missing answers on Pike release goals

2017-05-23 Thread John Dickinson


On 23 May 2017, at 8:05, Doug Hellmann wrote:

> Excerpts from Sean McGinnis's message of 2017-05-23 08:58:08 -0500:

 - Is it that the reporting process is too heavy? (requiring answers
 from projects that are obviously unaffected)
>>>
>>> I've thought about this, OSC was unaffected by one of the goals but
>>> not the other, so I can't really hide in this bucket.  It really is
>>> not that hard to put up a review saying "not me".
>>>
 - Is it that people ignore the deadlines and missed the reminders?
 (some unaffected project teams also do not do releases, and therefore
 ignore the release countdown emails)
>>>
>>> In my case, not so much "ignore" but "put off until tomorrow" where
>>> tomorrow turned into 6 weeks.  I really don't have a hard reason
>>> other than simply not prioritizing it because I knew one of the goals
>>> was going to take some coordination work
>>>
>>
>> +1 - this has been my case, unfortunately.
>>
>> A patch submission has the feeling of a major thing that goes through
>> a lot of process (at least still in my head). I wonder if we would be
>> better off tracking some of this through a wiki page or even an
>> etherpad, with just the completion of the goal being something
>> submitted to the repo. Then it would be really easy to update at any
>> point with notes like "WIP patch put up but still working on it" along
>> the way.
>
> The review process for this type of governance patch is pretty light
> (they fall under the one-week-no-objections house rule), but I
> decided to use a patch instead of the wiki specifically because it
> allows for feedback. We've had several cases where teams didn't
> provide enough detail or didn't think a goal applied to them when
> it did (deploying with WSGI came up at least once).  Wiki changes
> can be tracked, but if someone has a question they have to go track
> down the author in some other venue to get it answered.
>
> I also didn't want teams to have to keep anything up to date during
> the cycle, because I didn't want this to be yet another "status
> report". Each goal needs at most 2 patches: one at the start of the
> cycle to acknowledge and point to whatever other artifacts are being
> used for tracking the work already, and then one at the end of the
> cycle to indicate how much of the work was completed and what the
> next steps are. We tied the process deadlines to existing deadlines
> when we thought teams would already be thinking of these sorts of
> topics (most teams have spec deadlines around milestone 1 and then
> everyone has the same release date at the end of the cycle).
>

I can sympathize with "do it tomorrow" turning into 6 weeks later...

Part of the issue for me, personally, is that a governance patch does *not* 
feel simple or lightweight. I assume (in part based on experience) that any 
governance patch I propose will be closely examined and I will be forced to 
justify every corner case and comment made. Frankly, writing the patch that 
will stand up too a critical eye will take a long time. I'll do it tomorrow...

Let's take the py3 goal as an example. Note: I am *not* wanting to get into a 
discussion about particular py3 issues or whatever. This is a discussion on the 
goals process, and I'm only using one of the current goals as an example of why 
I haven't proposed a governance patch for it.

Swift does not support Py3. So clearly, there's work to be done to meet the 
goal. I've talked with others in the community about some of the blockers and 
concerns about porting to Py3. Several of the concerns are not trivial and will 
take substantial work to overcome[1]. A governance patch will need to list 
these issues, but I don't know if this is a complete list. If I propose a list 
that's incomplete, I feel like I'll be judged on the list I first proposed 
("you finished the list, why doesn't it work?") instead of being a more dynamic 
process. I need to spend more time understanding what the issues are to make 
sure I have a complete list. I'll propose that patch tomorrow...

The outstanding work to get Py3 support in Swift is very large. Yet there are 
more goals being discussed now, and there's no way I can get Py3 support in 
Swift in Pike. Or Queens. Or probably Rocky either. That's not to say it isn't 
an important goal, but the scope combined with the TC deadline means that my 
governance patch for this goal (the tl;dr version is "not gonna happen") has to 
address this in sufficient detail to stand up to review by TC members who are 
on the PSF! I guess I'll start writing that tomorrow...

While I know that Py3 support is important, I also have to prioritize it 
against other important things. My employer has prioritized certain features 
because that directly impacts our ability to add customers (which directly 
affects my ability to get paid). Other employers in the community are doing the 
same for their employees. In the broader community, as clusters have grown over 
the year

Re: [openstack-dev] [ironic] why is glance image service code so complex?

2017-05-23 Thread Dmitry Tantsur

On 05/23/2017 05:52 PM, Pavlo Shchelokovskyy wrote:

Hi all,

I've started to dig through the part of Ironic code that deals with glance and I 
am confused by some things:


1) Glance image service classes have methods to create, update and delete 
images. What's the use case behind them? Is ironic supposed to actively manage 
images? Besides, these do not seem to be used anywhere else in ironic code.


Yeah, I don't think we upload anything to glance. We may upload stuff to Swift 
though, but that's another story.




2) Some parts of code (and quite a handful of options in [glance] config 
section) AFAIU target a situation when both ironic and glance are deployed 
standalone with possibly multiple glance API services so there is no keystone 
catalog to discover the (load-balanced) glance endpoint from. We even have our 
own round-robin implementation for those multiple glance hosts o_0


3) Glance's direct_url handling - AFAIU this will work iff there is a single 
conductor service and a single glance registry service configured with a simple 
file backend deployed on the same host (with appropriate file access permissions 
between ironic and glance), and glance is configured to actually provide 
direct_url for the image - very much a DevStack setup (though with non-standard 
settings).


Do we actually have to support such narrow deployment scenarios as in 2) and 3)? 
While for 2) we probably should continue to support standalone Glance, keeping 
implementations for our own round-robin load-balancing and retries seems out of 
ironic scope.


Yeah, I'd expect people to deploy HA proxy or something similar for 
load-balancing. Not sure what you mean by retries though.
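
(As an illustration, not from the thread: a minimal keystoneauth sketch 
of discovering the load-balanced glance endpoint from the service catalog 
instead of a hand-rolled host list. All credentials and URLs are made up.)

    from keystoneauth1 import adapter
    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(auth_url='http://keystone:5000/v3',
                       username='ironic', password='secret',
                       project_name='service',
                       user_domain_name='Default',
                       project_domain_name='Default')
    sess = session.Session(auth=auth)
    glance = adapter.Adapter(session=sess, service_type='image',
                             interface='internal')
    # One catalog lookup replaces the round-robin host list.
    print(glance.get_endpoint())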


Number 3, I suspect, is for simple all-in-one deployments. I don't remember the 
whole background, so I can't comment more.




Most of those do seem to be legacy code crust from the nova-baremetal era, but I 
might be missing something. I'm eager to hear your comments.


#1 and #2 probably. I'm fine with getting rid of them.



Cheers,

Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [murano] Meeting time

2017-05-23 Thread Paul Bourke

Hi Felipe / Murano community,

I was wondering how people would feel about revising the time for the 
Murano weekly meeting?


Personally, the current time is difficult for me to attend as it falls at 
the end of a work day. I also have some colleagues who would like to 
attend but can't at the current time.


Given recent low attendance, would another time suit people better?

Thanks,
-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Issues with reno

2017-05-23 Thread Doug Hellmann
Excerpts from Matt Riedemann's message of 2017-05-22 21:48:37 -0500:
> I think Doug and I have talked about this before, but it came up again 
> tonight.
> 
> There seems to be an issue where release notes for the current series 
> don't show up in the published release notes, but unreleased things do.
> 
> For example, the python-novaclient release notes:
> 
> https://docs.openstack.org/releasenotes/python-novaclient/
> 
> Contain Ocata series release notes and the currently unreleased set of 
> changes for Pike, but doesn't include the 8.0.0 release notes, which is 
> important for projects impacted by things we removed in the 8.0.0 
> release (lots of deprecated proxy APIs and CLIs were removed).
> 
> I've noticed the same for things in Nova's release notes where 
> everything between ocata and the p-1 tag is missing.
> 
> Is there already a bug for this?
> 

I don't think there is a bug, but I have it in my notes to look
into it this week based on our earlier conversation. Based purely
on the description, the problem might be related to a similar issue
the Ironic team reported in https://bugs.launchpad.net/reno/+bug/1682147

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] A proposal for hackathon to reduce deploy time of TripleO

2017-05-23 Thread Joe Talerico
On Tue, May 23, 2017 at 6:47 AM, Sagi Shnaidman  wrote:
> Hi, all
>
> I'd like to propose an idea to hold a one- or two-day hackathon in the TripleO
> project with the main goal of reducing the deployment time of TripleO.
>
> - How could it be arranged?
>
> We can arrange a separate IRC channel and Bluejeans video conference session
> for hackathon in these days to create a "presence" feeling.
>
> - How to participate and contribute?
>
> We'll have a few responsibility fields like tripleo-quickstart, containers,
> storage, HA, baremetal, etc - the exact list should be ready before the
> hackathon so that everybody can assign themselves to one of these "teams". It's good
> to have somebody on each team be a stakeholder, responsible for organization
> and tasks.
>
> - What is the goal?
>
> The goal of this hackathon is to reduce the deployment time of TripleO as much as
> possible.
>
> For example, part of the CI team takes a task to reduce quickstart task times. It
> includes statistics collection, profiling, and detection of places to
> optimize. After this, tasks are created, patches are tested and submitted.
>
> The prizes will be presented to the teams which saved the most time :)
>
> What do you think?

Sounds like a great idea! Looking forward to contributing! Let's go
ahead and add this one:
https://bugs.launchpad.net/tripleo/+bug/1671859

;-)

>
> Thanks
> --
> Best regards
> Sagi Shnaidman
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-23 Thread Ildiko Vancsa
Hi Alex,

First of all, thank you for writing up the summary and listing the options with 
their expected impacts.

> 
> 1. We could combine all of the documentation builds, so that each project has 
> a single doc/source directory that includes developer, contributor, and user 
> documentation. This option would reduce the number of build jobs we have to 
> run, and cut down on the number of separate sphinx configurations in each 
> repository. It would completely change the way we publish the results, 
> though, and we would need to set up redirects from all of the existing 
> locations to the new locations and move all of the existing documentation 
> under the new structure.
> 
> 2. We could retain the existing trees for developer and API docs, and add a 
> new one for "user" documentation. The installation guide, configuration 
> guide, and admin guide would move here for all projects. Neutron's user 
> documentation would include the current networking guide as well. This option 
> would add 1 new build to each repository, but would allow us to easily roll 
> out the change with less disruption in the way the site is organized and 
> published, so there would be less work in the short term.

I'm fully in favor of option #1 and/or option #2. From the perspective of 
moving step by step and giving project teams a chance to acclimatize to the 
changes, I think starting with #2 should be sufficient.

Although if we think that option #1 is doable as both a starting point and an 
end goal, you have my support for that too.

> 
> 3. We could do option 2, but use a separate repository for the new 
> user-oriented documentation. This would allow project teams to delegate 
> management of the documentation to a separate review project-sub-team, but 
> would complicate the process of landing code and documentation updates 
> together so that the docs are always up to date. 
> 

As one of the advocates of having the documentation live together with 
the code, so that the experts behind the code changes get a chance to add the 
corresponding documentation as well, I'm definitely against option #3. :)

Thanks and Best Regards,
Ildikó
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][infra][puppet][stable] Re: [Release-job-failures] Release of openstack/puppet-nova failed

2017-05-23 Thread Emilien Macchi
On Mon, May 22, 2017 at 3:43 PM, Doug Hellmann  wrote:
> Excerpts from Jeremy Stanley's message of 2017-05-22 19:16:34 +:
>> On 2017-05-22 12:31:49 -0600 (-0600), Alex Schultz wrote:
>> > On Mon, May 22, 2017 at 10:34 AM, Jeremy Stanley  wrote:
>> > > On 2017-05-22 09:06:26 -0600 (-0600), Alex Schultz wrote:
>> > > [...]
>> > >> We ran into this for the puppet-module-build check job so I created a
>> > >> puppet-agent-install builder.  Perhaps the job needs that added to it
>> > > [...]
>> > >
>> > > Problem here being these repos share the common tarball jobs used
>> > > for generating python sdists, with a little custom logic baked into
>> > > run-tarball.sh[*] for detecting and adjusting when the repo is for a
>> > > Puppet module. I think this highlights the need to create custom
>> > > tarball jobs for Puppet modules, preferably by abstracting this
>> > > custom logic into a new JJB builder.
>> >
>> > I assume you mean a problem if we added this builder to the job
>> > and it fails for some reason thus impacting the python jobs?
>>
>> My concern is more that it increases complexity by further embedding
>> package selection and installation choices into that already complex
>> script. We'd (Infra team) like to get more of the logic out of that
>> random pile of shell scripts and directly into job definitions
>> instead. For one thing, those scripts are only updated when we
>> regenerate our nodepool images (at best once a day) and leads to
>> significant job inconsistencies if we have image upload failures in
>> some providers but not others. In contrast, job configurations are
>> updated nearly instantly (and can even be self-tested in many cases
>> once we're on Zuul v3).
>>
>> > As far as adding to the builder to the job that's not really a
>> > problem and wouldn't change those jobs as they don't reference the
>> > installed puppet executable.
>>
>> It does risk further destabilizing the generic tarball jobs by
>> introducing more outside dependencies which will only be used by a
>> scant handful of the projects running them.
>>
>> > The problem I have with putting this in the .sh is that it becomes
>> > yet another place where we're doing this package installation (we
>> > already do it in puppet openstack in
>> > puppet-openstack-integration). I originally proposed the builder
>> > because it could be reused if a job requires puppet be available.
>> > ie. this case. I'd rather not do what we do in the builder in a
>> > shell script in the job and it seems like this is making it more
>> > complicated than it needs to be when we have to manage this in the
>> > long term.
>>
>> Agreed, I'm saying a builder which installs an unnecessary Puppet
>> toolchain for the generic tarball jobs is not something we'd want,
>> but it would be pretty trivial to make puppet-specific tarball jobs
>> which do use that builder (and has the added benefit that
>> Puppet-specific logic can be moved _out_ of run-tarballs.sh and into
>> your job configuration instead at that point).
>
> That approach makes sense.
>
> When the new job template is set up, let me know so I can add it to the
> release repo validation as a known way to release things.

https://review.openstack.org/467294

Any feedback is welcome,

Thanks!

> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] why is glance image service code so complex?

2017-05-23 Thread Pavlo Shchelokovskyy
Hi all,

I've started to dig through the part of Ironic code that deals with glance
and I am confused by some things:

1) Glance image service classes have methods to create, update and delete
images. What's the use case behind them? Is ironic supposed to actively
manage images? Besides, these do not seem to be used anywhere else in
ironic code.

2) Some parts of code (and quite a handful of options in [glance] config
section) AFAIU target a situation when both ironic and glance are deployed
standalone with possibly multiple glance API services so there is no
keystone catalog to discover the (load-balanced) glance endpoint from. We
even have our own round-robin implementation for those multiple glance
hosts o_0

3) Glance's direct_url handling - AFAIU this will work iff there is a
single conductor service and a single glance registry service configured with
a simple file backend deployed on the same host (with appropriate file access
permissions between ironic and glance), and glance is configured to
actually provide direct_url for the image - very much a DevStack setup (though
with non-standard settings).

Do we actually have to support such narrow deployment scenarios as in 2)
and 3)? While for 2) we probably should continue to support standalone Glance,
keeping implementations for our own round-robin load-balancing and retries
seems out of ironic scope.

Most of those do seem to be legacy code crust from the nova-baremetal era,
but I might be missing something. I'm eager to hear your comments.

Cheers,

Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Save the Date- Queens PTG

2017-05-23 Thread Marcin Juszkiewicz
On 06.04.2017 at 00:05, Erin Disney wrote:

> We will share registration and sponsorship information soon on this
> mailing list. Mark your calendars and we hope to see you in Denver!

Any update? I need to collect info about costs for my trip.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [go][all] Settling on gophercloud as the go based API library for all OpenStack projects

2017-05-23 Thread Sean McGinnis
On Tue, May 23, 2017 at 10:54:13AM -0400, Davanum Srinivas wrote:
> Folks,
> 
> This has come up several times in various conversations.
> 
> Can we please stop activity on
> https://git.openstack.org/cgit/openstack/golang-client/ and just
> settle down on https://github.com/gophercloud/gophercloud ?
> 

+1

I think we are all better off if we can focus our efforts in one place.

> This becomes important since new container-y projects like
> stackube/fuxi/kuryr etc can just pick one that is already working and
> not worry about switching later. This is also a NIH kind of behavior
> (at least from a casual observer from outside).
> 
> Thanks,
> Dims
> 
> -- 
> Davanum Srinivas :: https://twitter.com/dims
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Missing answers on Pike release goals

2017-05-23 Thread Sean McGinnis
> > > 
> > 
> > What if we also require +1 from the "core six" projects on goal proposals?
> > If we at least have buy in from those projects, then we can know that we
> > should be able to get them as a minimum, with other projects more than
> > likely to then follow suit.
> 
> Because we do not want to structure our governance in such a way that
> some projects are more equal than others.
> 
> Everyone in the community has an opportunity to respond to the goals
> through the review process. If we don't trust the TC to take those
> responses into account, then we might as well drop the whole idea of
> community goals.

Yeah, sorry, ignore that. After I sent it I didn't think it was such a
great idea. There really shouldn't be any special emphasis on a subset
of the projects.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Is the pendulum swinging on PaaS layers?

2017-05-23 Thread Zane Bitter

On 22/05/17 22:58, Jay Pipes wrote:

On 05/22/2017 12:01 PM, Zane Bitter wrote:

On 19/05/17 17:59, Matt Riedemann wrote:

I'm not really sure what you're referring to here with 'update' and [1].
Can you expand on that? I know it's a bit of a tangent.


If the user does a stack update that changes the network from 'auto'
to 'none', or vice-versa.


Detour here, apologies...

Why would it matter whether a user changes a stack definition for some
resource from auto-created network to none? Why would you want
*anything* to change about instances that had already been created by
Heat with the previous version of the stack definition?


The short answer is that's just how Heat works. A large part of Heat's 
value is the ability to make changes to your application over time by 
describing it declaratively. (In the past I've compared this to the 
advantage configuration management tools provided over shell scripts 
- e.g. in 
https://www.openstack.org/videos/atlanta-2013/introduction-to-openstack-orchestration).



In other words, why shouldn't the change to the stack simply affect
*new* resources that the stack might create?


Our job is to make the world look like the template the user provides. 
If the user changes something, Heat takes them seriously and does not 
imagine that it knows better than the user what the user wants. If the 
user doesn't want to change anything then they're welcome to not change 
the template.


(We *could* do better on protection against accidental changes... 
there's an update-preview command and ways of marking resources as 
immutable such that updates will fail if they try to change it, but I 
don't know that the workflow/UX is great. There are some technical 
limitations on how much we can even determine in update-preview.)



After all, get-me-a-network
is intended for instance *creation* and nothing else...


So it may be intended for that, but there's any number of legitimate 
reasons why a user might want to change things after the server is created:


* Server was created with network: none, but something went horribly 
wrong and now you need to ssh in to debug it.
* Server was created with network: auto, but it was compromised by an 
attacker and now you want to get it off the network while you conduct a 
post-mortem through the console.
* Server was created with network: auto, but now you need more 
sophisticated networking and you don't want to delete your server and 
all its data to change it.


&c.

That's why it's dangerous, as Matt said in another part of the thread, 
to just do the easy part of the job (create) and forget about how a 
feature will interact with all of the other things that can happen over 
time. At the very least you want a way for users to move from the 'easy' 
way to the 'full control' way without starting over. (Semi-professional 
cameras and digital oscilloscopes are a couple of examples of where this 
is routinely done very well.)


(None of this is to suggest that get-me-a-network is a particularly bad 
offender here - it isn't IMO.)



Why not treat already-provisioned resources of a stack as immutable once
provisioned? That is, after all, one of the primary benefits of a "cloud
native application" -- immutability of application images once deployed
and the clean separation of configuration from data.


I could equally ask why Nova and Neutron allow stuff to be changed after 
it has been provisioned? Heat is only providing an interface to public 
APIs that exist. You can bet that if we told our users that they can't 
use those APIs because we know better than they do, we'd have a long list 
of feature requests and many fewer users.


There are some things that cannot be changed through the underlying APIs 
once a resource is created, and in those cases we mark the property with 
'update_allowed=False' in the resource schema. However, if it _does_ 
change then Heat will create a _new_ resource with the property value 
you want, and delete the original. So we could have done that with the 
get-me-a-network thing, but it wouldn't have been the Right Thing for 
our users.
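
(As an illustration, not from the thread: a minimal sketch of what that 
marking looks like in a resource plugin's properties schema. The property 
name is a made-up example, and the exact Schema arguments may differ 
between Heat releases.)

    from heat.engine import properties

    properties_schema = {
        'flavor': properties.Schema(
            properties.Schema.STRING,
            'Hypothetical example property.',
            # update_allowed=False: changing this in a template update
            # makes Heat replace the resource rather than update it
            # in place.
            update_allowed=False,
        ),
    }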



This is one of the reasons that the (application) container world has it
easy with regards to resource management.


Yes! Everything is much easier if you tell all the users to re-architect 
their applications from scratch :) Which, I mean, if you can... great! 
Meanwhile here on planet Earth, it's 2017 and 95% of payment card 
transactions are still processed using COBOL at some point. (Studies 
show that 79% of statistics are made up, but I actually legit read this 
last week.)


That's one reason I don't buy any of the 'OpenStack is dead' commentary. 
If we respond appropriately to the needs of users who run a *mixture* of 
legacy, cloud-aware, and cloud-native applications then OpenStack will 
be relevant for a very long time indeed.



If you need to change the
sizing of a deployment [1], Kubernetes doesn't need to go through all
the hoops we do in resize/migrate/li

[openstack-dev] [ironic] this week's priorities and subteam reports

2017-05-23 Thread Yeleswarapu, Ramamani
Hi,

We are glad to present this week's priorities and subteam report for Ironic. As 
usual, this is pulled directly from the Ironic whiteboard[0] and formatted.

This Week's Priorities (as of the weekly ironic meeting)

1. rolling upgrades
1.1. the next patch is ready for reviews: 
https://review.openstack.org/#/c/412397/
2. booting from volume:
2.1. the next patch: https://review.openstack.org/#/c/406290
3. review e-tags spec: https://review.openstack.org/#/c/381991/
4. driver composition documentation:
4.1. explaining the defaults: https://review.openstack.org/466741
4.2. ipmi docs update: https://review.openstack.org/466734


Bugs (dtantsur, vdrok, TheJulia)

- Stats (diff between 15 May 2017 and 22 May 2017)
- Ironic: 243 bugs (-9) + 251 wishlist items. 24 new (+3), 191 in progress 
(-9), 0 critical, 26 high and 32 incomplete
- Inspector: 12 bugs + 28 wishlist items. 3 new (+2), 12 in progress (-2), 0 
critical, 1 high (-1) and 3 incomplete
- Nova bugs with Ironic tag: 12 (+1). 2 new, 0 critical, 0 high

Essential Priorities


CI refactoring and missing test coverage

- Standalone CI tests (vsaienk0)
- next patch to be reviewed: https://review.openstack.org/#/c/429770/
- Missing test coverage (all)
- portgroups and attach/detach tempest tests: 
https://review.openstack.org/382476
- local boot with partition images: TODO 
https://bugs.launchpad.net/ironic/+bug/1531149
- adoption: https://review.openstack.org/#/c/344975/
- should probably be changed to use standalone tests

Generic boot-from-volume (TheJulia, dtantsur)
---------------------------------------------
- specs and blueprints:
- 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/volume-connection-information.html
- code: https://review.openstack.org/#/q/topic:bug/1526231
- 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/boot-from-volume-reference-drivers.html
- code: https://review.openstack.org/#/q/topic:bug/1559691
- https://blueprints.launchpad.net/nova/+spec/ironic-boot-from-volume
- code: 
https://review.openstack.org/#/q/topic:bp/ironic-boot-from-volume
- status as of most recent weekly meeting:
- mjturek is working on getting together devstack config updates/script 
changes in order to support this configuration. No updates.
- hshiina is looking in Nova side changes and is attempting to obtain 
clarity on some of the issues that tenant network separation introduced into 
the deployment workflow.
- Patch/note tracking etherpad: https://etherpad.openstack.org/p/Ironic-BFV
Ironic Patches:
https://review.openstack.org/#/c/406290 Wiring in attach/detach 
operations
https://review.openstack.org/#/c/413324 iPXE template
https://review.openstack.org/#/c/454243/ - WIP logic changes for 
deployment process.  Tenant network separation introduced some additional 
complexity, quick conceptual feedback requested.
https://review.openstack.org/#/c/214586/ - Volume Connection 
Information Rest API Change
Additional patches exist, for python-ironicclient and one for nova.  
Links in the patch/note tracking etherpad.

Rolling upgrades and grenade-partial (rloo, jlvillal)
-----------------------------------------------------
- spec approved; code patches: 
https://review.openstack.org/#/q/topic:bug/1526283
- status as of most recent weekly meeting:
- patches ready for reviews. Next one: 'Add version column': 
https://review.openstack.org/#/c/412397/
- Testing work: done? is there anything else needed?

Reference architecture guide (jroll, dtantsur)
--
- no updates, dtantsur plans to start working on some text for the 
install-guide this week

Python 3.5 compatibility (Nisha, Ankit)
---
- Topic: 
https://review.openstack.org/#/q/topic:goal-python35+NOT+project:openstack/governance+NOT+project:openstack/releases
- this include all projects, not only ironic
- please tag all reviews with topic "goal-python35"
- Nisha will be taking over this work(Nisha on leave from May 5 to May 22)
- Status as on May 5.  Raised patches in openstack-infra/project-config for 
adding experimental gates for the ironic governed modules
- https://review.openstack.org/462487  - python-ironicclient
- https://review.openstack.org/462511 - IPA (has one +2)
- https://review.openstack.org/462695 - ironic-inspector
- https://review.openstack.org/462701 - ironic-lib
- https://review.openstack.org/#/c/462706/ - python-ironic-inspector-client
- Not sure, if we want to do the same for ironic-staging-drivers module or 
not, hence not raised
- one day - yes, but it's not a priority at all
- Ankit took over this work
- Status as on May 12.
-

Re: [openstack-dev] [release][tc][infra][security][stable] Proposal for shipping binaries and containers

2017-05-23 Thread Doug Hellmann
Excerpts from Davanum Srinivas (dims)'s message of 2017-05-23 10:44:30 -0400:
> Team,
> 
> Background:
> For projects based on Go and Containers we need to ship binaries, for

Can you elaborate on the use of the term "need" here? Is that because
otherwise the projects can't be consumed? Is it the "norm" for
projects from those communities? Something else?

> example Kubernetes, etcd both ship binaries and maintain stable
> branches as well.
>   https://github.com/kubernetes/kubernetes/releases
>   https://github.com/coreos/etcd/releases/
> 
> Kubernetes for example ships container images to public registeries as well:
>   
> https://console.cloud.google.com/gcr/images/google-containers/GLOBAL/hyperkube?pli=1
>   
> https://github.com/kubernetes/kubernetes/tree/master/cluster/images/hyperkube

What are the support lifetimes for those images? Who maintains them?

> So here's a proposal based on the really long thread:
> http://lists.openstack.org/pipermail/openstack-dev/2017-May/thread.html#116677
> 
> The idea is to augment the existing processes for the new deliverables.
> 
> * Projects define CI jobs for generating binaries and containers (some
> already do!)
> * Release team automation will kick builds off when specific versions
> are released for the binaries and containers (Since Go based projects
> can do cross-builds, we won't need to run these jobs on multiple
> architectures which will keep the release process simple)

I see how this would work for Go builds, since we would be tagging the
thing being built. My understanding is that Kolla images are using the
Kolla version, not the version of the software inside the image, though.
How would that work? (Or maybe I misunderstood something from another
thread and that's not how the images are versioned?)

> * Just like we upload stuff to tarballs.openstack.org, we will upload
> binaries and containers there as well

I know there's an infra spec for doing some of this, so I assume we
anticipate having the storage capacity needed?

> * Just like we upload things to pypi, we will upload containers with
> specific versions to public repos.
> * Projects can choose from the existing release models to make this
> process as frequent as they need.
> 
> Please note that I am deliberately ruling out the following
> * Daily/Nightly releases that are accessible to end users, especially
> from stable branches.

The Kolla team did seem to want periodic builds for testing (to avoid
having to build images in the test pipeline, IIUC). Do we still want to
build those to tarballs.o.o? Does that even meet the needs of those test
jobs?

> * Project teams directly responsible for pushing stuff to end users
> 
> What do you think?
> 
> Thanks,
> Dims
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [murano] Murano meeting cancelled 5/23/2017

2017-05-23 Thread MONTEIRO, FELIPE C
The Murano meeting is cancelled today because I have a company-related event I 
must attend and cannot find anyone to cover the meeting for me. Should anyone 
have any questions, feel free to reach me on IRC (felipemonteiro).

Felipe

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][tc][infra][security][stable] Proposal for shipping binaries and containers

2017-05-23 Thread Flavio Percoco

On 23/05/17 10:44 -0400, Davanum Srinivas wrote:

Team,

Background:
For projects based on Go and Containers we need to ship binaries, for
example Kubernetes, etcd both ship binaries and maintain stable
branches as well.
 https://github.com/kubernetes/kubernetes/releases
 https://github.com/coreos/etcd/releases/

Kubernetes for example ships container images to public registeries as well:
 
https://console.cloud.google.com/gcr/images/google-containers/GLOBAL/hyperkube?pli=1
 https://github.com/kubernetes/kubernetes/tree/master/cluster/images/hyperkube

So here's a proposal based on the really long thread:
http://lists.openstack.org/pipermail/openstack-dev/2017-May/thread.html#116677

The idea is to augment the existing processes for the new deliverables.

* Projects define CI jobs for generating binaries and containers (some
already do!)
* Release team automation will kick builds off when specific versions
are released for the binaries and containers (Since Go based projects
can do cross-builds, we won't need to run these jobs on multiple
architectures which will keep the release process simple)
* Just like we upload stuff to tarballs.openstack.org, we will upload
binaries and containers there as well


If we upload the containers to a registry repo, I'm not sure we need to upload
them here too. This would also take too much space for not much gain since
consumers of these containers won't pull from tarballs.o.o but the registry
itself.


* Just like we upload things to pypi, we will upload containers with
specific versions to public repos.
* Projects can choose from the existing release models to make this
process as frequent as they need.


If releasing binaries is introduced, I think all projects that can produce
binaries (Go, container images) should do it. I'd like this to be consistent.
We generate tarballs for every project, not some of them.


Please note that I am deliberately ruling out the following
* Daily/Nightly releases that are accessible to end users, especially
from stable branches.
* Project teams directly responsible for pushing stuff to end users

What do you think?


Without giving it too much thought and almost at the end of my day, I think I
like it. One thing to consider is that we'll also need a process to define what
kind of binaries we build and/or ship. I don't think we want to build rpms/debs
or other distro packages. Therefore, we need to explicitly list the type of
binaries we build.

As long as the binaries produced don't introduce any kind of bias, I think I'm
good.

Thanks for sending this out,
Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Missing answers on Pike release goals

2017-05-23 Thread Doug Hellmann
Excerpts from Sean McGinnis's message of 2017-05-23 08:58:08 -0500:
> > >
> > > - Is it that the reporting process is too heavy ? (requiring answers
> > > from projects that are obviously unaffected)
> > 
> > I've thought about this, OSC was unaffected by one of the goals but
> > not the other, so I can't really hide in this bucket.  It really is
> > not that hard to put up a review saying "not me".
> > 
> > > - Is it that people ignore the deadlines and missed the reminders ?
> > > (some unaffected project teams also do not do releases, and therefore
> > > ignore the release countdown emails)
> > 
> > In my case, not so much "ignore" but "put off until tomorrow" where
> > tomorrow turned in to 6 weeks.  I really don't have a hard reason
> > other than simply not prioritizing it because I knew one of the goals
> > was going to take some coordination work
> > 
> 
> +1 - this has been my case, unfortunately.
> 
> A patch submission has the feeling of a major thing that goes through
> a lot of process (at least still in my head). I wonder if we would be
> better off tracking some of this through a wiki page or even an
> etherpad, with just the completion of the goal being something
> submitted to the repo. Then it would be really easy to update at any
> point with notes like "WIP patch put up but still working on it" along
> the way.

The review process for this type of governance patch is pretty light
(they fall under the one-week-no-objections house rule), but I
decided to use a patch instead of the wiki specifically because it
allows for feedback. We've had several cases where teams didn't
provide enough detail or didn't think a goal applied to them when
it did (deploying with WSGI came up at least once).  Wiki changes
can be tracked, but if someone has a question they have to go track
down the author in some other venue to get it answered.

I also didn't want teams to have to keep anything up to date during
the cycle, because I didn't want this to be yet another "status
report". Each goal needs at most 2 patches: one at the start of the
cycle to acknowledge and point to whatever other artifacts are being
used for tracking the work already, and then one at the end of the
cycle to indicate how much of the work was completed and what the
next steps are. We tied the process deadlines to existing deadlines
when we thought teams would already be thinking of these sorts of
topics (most teams have spec deadlines around milestone 1 and then
everyone has the same release date at the end of the cycle).

> 
> > > - Is it that in periods of resource constriction, having release-wide
> > > goals is just too ambitious ? (although anecdotal data shows that most
> > > projects have already completed their goals)
> > 
> > While this may certainly be a possibility, I don't think we should
> > give in to the temptation to blame too much on losing people.  OSC was
> > hit by this too, yet the loss of core and contributors did not affect
> > the goals not getting done, that falls squarely on the PTL in this
> > case.
> > 
> > > - Is it that the goals should be more clearly owned by the community
> > > beyond just the TC? (and therefore the goals should be maintained in a
> > > repository with simpler approval rules and a larger approval group)
> > 
> > I do think that at least the perception of the goals being community
> > things should be increased if we can.  We fall in to the problem of
> > the TC proposing something and getting pushback about projects being
> > forced to do more work, yet we hear so much about how the TC needs to
> > take more leadership in technical direction (see TC vision feedback
> > for the latest round of this).
> > 
> > I'm not sure that the actual repo is the issue, are we having problems
> > getting reviews to approve these?  I don't see this but I'm also not
> > tracking the time to takes for them to get approved.
> > 
> > I believe it is just going to have to be a social thing that we need
> > to continue to push forward.
> > 
> 
> What if we also require +1 from the "core six" projects on goal proposals?
> If we at least have buy in from those projects, then we can know that we
> should be able to get them as a minimum, with other projects more than
> likely to then follow suit.

Because we do not want to structure our governance in such a way that
some projects are more equal than others.

Everyone in the community has an opportunity to respond to the goals
through the review process. If we don't trust the TC to take those
responses into account, then we might as well drop the whole idea of
community goals.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] [devstack] [infra] heat api services with uwsgi

2017-05-23 Thread Rabi Mishra
Apologies for the spam. Resending with the earlier missed [openstack-dev]
tag to the subject for greater visibility.

On Tue, May 23, 2017 at 10:53 AM, Rabi Mishra  wrote:

> Hi All,
>
> As per the updated community goal[1]  for api deployment with wsgi, we've
> to transition to use uwsgi rather than mod_wsgi at the gate. It also seems
> mod_wsgi support would be removed from devstack in Queens.
>
> I've been working on a patch[2] for the transition and encountered a few
> issues as below.
>
> 1. We encode the stack_identifier (along with the path
> separator) in heatclient. So requests with encoded path separators are
> dropped by apache (with 404) if we don't have the 'AllowEncodedSlashes On'
> directive in the site/vhost config[3].
>
> Setting this for mod_proxy_uwsgi[4] seems to work on fedora but not
> ubuntu.  From my testing It seems, it has to be set in 000-default.conf for
> ubuntu.
>
> Rather than messing with the devstack plugin code, I went ahead and proposed a
> change to not encode the path separators in heatclient[5] (anyway, they
> would be decoded by apache with the directive 'AllowEncodedSlashes On'
> before being consumed by the service), which seems to have fixed those 404s.
>
> Is there a generic way to set the above directive (when using
> apache+mod_proxy_uwsgi) in the devstack plugin?
>
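
The encoding difference behind [5] is easy to see in isolation; here's a
minimal sketch using plain urllib (not the actual heatclient code, and
the stack name/id below are made up):

    from urllib.parse import quote

    stack_identifier = 'teststack/71c2a6b4'  # made-up name/id

    # safe='' escapes the separator; Apache then 404s on the %2F
    # unless 'AllowEncodedSlashes On' is set in the vhost config.
    print('/v1/stacks/' + quote(stack_identifier, safe=''))
    # -> /v1/stacks/teststack%2F71c2a6b4

    # safe='/' (what the fix in [5] amounts to) leaves it alone.
    print('/v1/stacks/' + quote(stack_identifier, safe='/'))
    # -> /v1/stacks/teststack/71c2a6b4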
> 2.  With the above, most of the tests seem to work fine other than the
> ones using waitcondition, where we signal back from the vm to the api
> services. I could see " curl: (7) Failed to connect to 10.0.1.78 port 80:
> No route to host" in the vm console logs[6].
>
> It could connect to heat api services using ports 8004/8000 without this
> patch, but not sure why not port 80? I tried testing this locally and
> didn't see the issue though.
>
> Is this due to some infra settings or something else?
>
>
> [1] https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html
>
> [2] https://review.openstack.org/#/c/462216/
>
> [3]  https://github.com/openstack/heat/blob/master/devstack/
> files/apache-heat-api.template#L9
>
> [4] http://logs.openstack.org/16/462216/6/check/gate-heat-dsvm-
> functional-convg-mysql-lbaasv2-non-apache-ubuntu-
> xenial/fbd06d6/logs/apache_config/heat-wsgi-api.conf.txt.gz
>
> [5] https://review.openstack.org/#/c/463510/
>
> [6] http://logs.openstack.org/16/462216/11/check/gate-heat-
> dsvm-functional-convg-mysql-lbaasv2-non-apache-ubuntu-
> xenial/e7d9e90/console.html#_2017-05-20_07_04_30_718021
>
>
> --
> Regards,
> Rabi Mishra
>
>


-- 
Regards,
Rabi Mishra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-23 Thread Ildiko Vancsa

> On 2017. May 23., at 15:43, Sean McGinnis  wrote:
> 
> On Mon, May 22, 2017 at 05:50:50PM -0500, Anne Gentle wrote:
>> On Mon, May 22, 2017 at 5:41 PM, Sean McGinnis 
>> wrote:
>> 
>>> 
>>> [snip]
>>> 
>> 
>> Hey Sean, is the "right to merge" the top difficulty you envision with 1 or
>> 2? Or is it finding people to do the writing and reviews? Curious about
>> your thoughts and if you have some experience with specific day-to-day
>> behavior here, I would love your insights.
>> 
>> Anne
> 
> I think it's more about finding people to do the writing and reviews, though
> having incentives like having more say in that area of things could be
> beneficial for finding those people.

I think it is important to note here that by having the documentation (in its 
own, easily identifiable folder) living together with the code in the same 
repository, you have the developer(s) of the feature as first-line candidates 
for adding documentation to their change.

I know that writing good technical documentation is its own profession, but 
having the initial content there, which can be fixed by experienced writers if 
needed, is a huge win compared to anything separate, where you might not have 
any documentation at all.

So the ability to -1 a change because of a lack of documentation might, on one 
hand, be a process change for reviewers, but it gives you docs contributors as 
well.

So to summarize, the changes Alex described do not mean that the core team has 
to write the documentation themselves or find a team of technical writers 
before applying the changes, but rather that reviewers should be conscious 
about whether docs are added along with the code changes.

Thanks,
Ildikó



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Supporting volume_type when booting from volume

2017-05-23 Thread Duncan Thomas
On 23 May 2017 4:51 am, "Matt Riedemann"  wrote:



Is this really something we are going to have to deny at least once per
release? My God how is it that this is the #1 thing everyone for all time
has always wanted Nova to do for them?


Is it entirely unreasonable to turn the question around and ask why, given
it is such a commonly requested feature, the Nova team are so resistant to
it?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [go][all] Settling on gophercloud as the go based API library for all OpenStack projects

2017-05-23 Thread Davanum Srinivas
Folks,

This has come up several times in various conversations.

Can we please stop activity on
https://git.openstack.org/cgit/openstack/golang-client/ and just
settle down on https://github.com/gophercloud/gophercloud ?

This becomes important since new container-y projects like
stackube/fuxi/kuryr etc can just pick one that is already working and
not worry about switching later. This is also a NIH kind of behavior
(at least to a casual observer from outside).

Thanks,
Dims

-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Missing answers on Pike release goals

2017-05-23 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2017-05-23 13:57:53 +:
> On 2017-05-23 05:40:05 -0500 (-0500), Dean Troyer wrote:
> > On Tue, May 23, 2017 at 4:59 AM, Thierry Carrez  
> > wrote:
> [...]
> > > - Is it that the reporting process is too heavy ? (requiring answers
> > > from projects that are obviously unaffected)
> > 
> > I've thought about this, OSC was unaffected by one of the goals but
> > not the other, so I can't really hide in this bucket.  It really is
> > not that hard to put up a review saying "not me".
> 
> While not at all an excuse, that was entirely what I chalk my lapse
> up to this time. I had already commented on the governance reviews
> that I had discussed the proposed goals with the rest of the Infra
> team and we'd come to the conclusion that they were either
> inapplicable or already met for us. It just escaped my memory that I
> needed to go back and reassert that again once the goals were
> officially approved.
> 
> Also, I still agree that it's hard to figure out which teams
> actually are affected without asking them, and that's this step of
> the process: confirmation/denial on record.

Right. The goals process is not about anyone telling anyone else what to
do. It's about communicating with each other about a few central
priorities. Part of that communication requires going through some hoops
even when they seem trivial or unnecessary based on what you know,
because the rest of us are not inside your head and don't automatically
have that knowledge. :-)

Doug

> 
> > > - Is it that people ignore the deadlines and missed the reminders ?
> > > (some unaffected project teams also do not do releases, and therefore
> > > ignore the release countdown emails)
> [...]
> 
> Not so much ignore but because so little of the content is directly
> applicable to Infra I read them in the context of things we should
> be on the lookout for other teams working on, so I'm not in the
> mindset of expecting to actually find an action item in there. This
> is just a matter of retraining myself on what to look for in those
> announcements in the future.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][tc][infra][security][stable] Proposal for shipping binaries and containers

2017-05-23 Thread Davanum Srinivas
Team,

Background:
For projects based on Go and Containers we need to ship binaries, for
example Kubernetes, etcd both ship binaries and maintain stable
branches as well.
  https://github.com/kubernetes/kubernetes/releases
  https://github.com/coreos/etcd/releases/

Kubernetes for example ships container images to public registeries as well:
  
https://console.cloud.google.com/gcr/images/google-containers/GLOBAL/hyperkube?pli=1
  https://github.com/kubernetes/kubernetes/tree/master/cluster/images/hyperkube

So here's a proposal based on the really long thread:
http://lists.openstack.org/pipermail/openstack-dev/2017-May/thread.html#116677

The idea is to augment the existing processes for the new deliverables.

* Projects define CI jobs for generating binaries and containers (some
already do!)
* Release team automation will kick builds off when specific versions
are released for the binaries and containers (Since Go based projects
can do cross-builds, we won't need to run these jobs on multiple
architectures which will keep the release process simple)
* Just like we upload stuff to tarballs.openstack.org, we will upload
binaries and containers there as well
* Just like we upload things to pypi, we will upload containers with
specific versions to public repos.
* Projects can choose from the existing release models to make this
process as frequent as they need.

Please note that I am deliberately ruling out the following
* Daily/Nightly releases that are accessible to end users, especially
from stable branches.
* Project teams directly responsible for pushing stuff to end users

What do you think?

Thanks,
Dims


-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-23 Thread Anne Gentle
On Tue, May 23, 2017 at 8:56 AM, Zane Bitter  wrote:

> On 22/05/17 05:39, Alexandra Settle wrote:
>
>> 1. We could combine all of the documentation builds, so that each
>> project has a single doc/source directory that includes developer,
>> contributor, and user documentation. This option would reduce the number
>> of build jobs we have to run, and cut down on the number of separate
>> sphinx configurations in each repository. It would completely change the
>> way we publish the results, though, and we would need to set up
>> redirects from all of the existing locations to the new locations and
>> move all of the existing documentation under the new structure.
>>
>
> +0 in the short term, +1 for the long term
>
> 2. We could retain the existing trees for developer and API docs, and
>> add a new one for "user" documentation. The installation guide,
>> configuration guide, and admin guide would move here for all projects.
>> Neutron's user documentation would include the current networking guide
>> as well. This option would add 1 new build to each repository, but would
>> allow us to easily roll out the change with less disruption in the way
>> the site is organized and published, so there would be less work in the
>> short term.
>>
>
> +1, at least in the short term
>
> As we've been discussing since the summit, Heat has a bunch of
> documentation (specifically the Template Guide) that is end-user-facing but
> needs to be generated from the Heat repo (because it uses introspection on
> the code). Right now it's buried in the (OpenStack) developer-facing
> documentation, which is not very discoverable for end users. So generating
> the user guide from the project repos would allow us to move the Template
> Guide.


What prevents you from publishing to a different location? The landing page
or the URL or something else I'm not considering? The URL can be changed in
the publish job, I think, so what we really need are the "rules" and
organization. I already discovered at the PTG that we are not consistent
with version number and translation language in our URLs...

It's something we'll have to work on -- the usability of the landing pages
and the directories for the publish jobs and how those translate to URLs.
Thoughts?

Anne


>
>
> 3. We could do option 2, but use a separate repository for the new
>> user-oriented documentation. This would allow project teams to delegate
>> management of the documentation to a separate review project-sub-team,
>> but would complicate the process of landing code and documentation
>> updates together so that the docs are always up to date.
>>
>
> -1 for the reasons above.
>
> cheers,
> Zane.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

Read my blog: justwrite.click 
Subscribe to Docs|Code: docslikecode.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-23 Thread Zane Bitter

On 23/05/17 08:35, Alexandra Settle wrote:

So, I’ve been a docs core for the OpenStack-Ansible project for some time now 
and this works really well within our structure. I do not merge anything unless 
it has a dev +2 before I come along (unless it is a trivial doc-only 
spelling/grammar change). I think there is a lot of community fear that if you 
give a writer core status on a project, they’re just going to run wild and 
pass things they don’t understand.


If it makes you feel better, I don't think this is specific to tech 
writers. There's a lot of (unjustified IMHO) fear in general about 
giving out core review rights to a subset of a repo when we don't have 
ACLs to enforce that.


Personally, I totally agree with you and John here - we already place a 
huge amount of trust in core reviewers. If we can't trust them to not 
randomly +2 stuff they don't understand then we have much, much bigger 
problems.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Supporting volume_type when booting from volume

2017-05-23 Thread Sean McGinnis
On Mon, May 22, 2017 at 10:47:44PM -0500, Matt Riedemann wrote:
> Just wanted to point out that someone else requested this again today:
> 
> https://review.openstack.org/#/c/466595/
> 
> 30 seconds going through launchpad for old blueprints found at least 4
> others:
> 
> https://blueprints.launchpad.net/nova/+spec/vol-type-with-blank-vol
> 
> https://blueprints.launchpad.net/nova/+spec/volume-support-for-multi-hypervisors
> 
> https://blueprints.launchpad.net/nova/+spec/support-boot-instance-set-store-type
> 
> https://blueprints.launchpad.net/nova/+spec/ec2-volume-type
> 
> And I know cburgess and garyk at least had one each of their own.
> 
> Is this really something we are going to have to deny at least once per
> release? My God how is it that this is the #1 thing everyone for all time
> has always wanted Nova to do for them?
> 
> I'm honestly starting to get concerned.

To add to this, I've had this question/ask coming in on the Cinder side
quite often as well. There's a definite desire from users to be able to
do this.
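
For reference, the workaround users are pushed towards today is the
two-step dance below -- a hedged sketch with python-cinderclient and
python-novaclient, where 'cinder' and 'nova' are assumed
pre-authenticated client objects and the IDs are placeholders:

    IMAGE_ID = 'image-uuid'    # placeholder
    FLAVOR_ID = 'flavor-id'    # placeholder

    # Step 1: create the bootable volume with the wanted volume type.
    vol = cinder.volumes.create(size=20,
                                volume_type='fast-ssd',
                                imageRef=IMAGE_ID)
    # (in reality, poll here until vol.status == 'available')

    # Step 2: boot from that volume instead of from an image.
    server = nova.servers.create(
        name='bfv-test',
        image=None,  # booting from the volume, not an image
        flavor=FLAVOR_ID,
        block_device_mapping_v2=[{
            'uuid': vol.id,
            'source_type': 'volume',
            'destination_type': 'volume',
            'boot_index': 0,
            'delete_on_termination': True,
        }])

Being able to pass the volume type straight to the boot call would
collapse this into one step, which is exactly what people keep asking
for.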

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Missing answers on Pike release goals

2017-05-23 Thread Sean McGinnis
> >
> > - Is it that the reporting process is too heavy ? (requiring answers
> > from projects that are obviously unaffected)
> 
> I've thought about this, OSC was unaffected by one of the goals but
> not the other, so I can't really hide in this bucket.  It really is
> not that hard to put up a review saying "not me".
> 
> > - Is it that people ignore the deadlines and missed the reminders ?
> > (some unaffected project teams also do not do releases, and therefore
> > ignore the release countdown emails)
> 
> In my case, not so much "ignore" but "put off until tomorrow" where
> tomorrow turned in to 6 weeks.  I really don't have a hard reason
> other than simply not prioritizing it because I knew one of the goals
> was going to take some coordination work
> 

+1 - this has been my case, unfortunately.

A patch submission has the feeling of a major thing that goes through
a lot of process (at least still in my head). I wonder if we would be
better off tracking some of this through a wiki page or even an
etherpad, with just the completion of the goal being something
submitted to the repo. Then it would be really easy to update at any
point with notes like "WIP patch put up but still working on it" along
the way.

> > - Is it that in periods of resource constriction, having release-wide
> > goals is just too ambitious ? (although anecdotal data shows that most
> > projects have already completed their goals)
> 
> While this may certainly be a possibility, I don't think we should
> give in to the temptation to blame too much on losing people.  OSC was
> hit by this too, yet the loss of core and contributors did not affect
> the goals not getting done, that falls squarely on the PTL in this
> case.
> 
> > - Is it that the goals should be more clearly owned by the community
> > beyond just the TC? (and therefore the goals should be maintained in a
> > repository with simpler approval rules and a larger approval group)
> 
> I do think that at least the perception of the goals being community
> things should be increased if we can.  We fall in to the problem of
> the TC proposing something and getting pushback about projects being
> forced to do more work, yet we hear so much about how the TC needs to
> take more leadership in technical direction (see TC vision feedback
> for the latest round of this).
> 
> I'm not sure that the actual repo is the issue, are we having problems
> getting reviews to approve these?  I don't see this but I'm also not
> tracking the time to takes for them to get approved.
> 
> I believe it is just going to have to be a social thing that we need
> to continue to push forward.
> 

What if we also require +1 from the "core six" projects on goal proposals?
If we at least have buy in from those projects, then we can know that we
should be able to get them as a minimum, with other projects more than
likely to then follow suit.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Missing answers on Pike release goals

2017-05-23 Thread Jeremy Stanley
On 2017-05-23 05:40:05 -0500 (-0500), Dean Troyer wrote:
> On Tue, May 23, 2017 at 4:59 AM, Thierry Carrez  wrote:
[...]
> > - Is it that the reporting process is too heavy ? (requiring answers
> > from projects that are obviously unaffected)
> 
> I've thought about this, OSC was unaffected by one of the goals but
> not the other, so I can't really hide in this bucket.  It really is
> not that hard to put up a review saying "not me".

While not at all an excuse, that was entirely what I chalk my lapse
up to this time. I had already commented on the governance reviews
that I had discussed the proposed goals with the rest of the Infra
team and we'd come to the conclusion that they were either
inapplicable or already met for us. It just escaped my memory that I
needed to go back and reassert that again once the goals were
officially approved.

Also, I still agree that it's hard to figure out which teams
actually are affected without asking them, and that's this step of
the process: confirmation/denial on record.

> > - Is it that people ignore the deadlines and missed the reminders ?
> > (some unaffected project teams also do not do releases, and therefore
> > ignore the release countdown emails)
[...]

Not so much ignore but because so little of the content is directly
applicable to Infra I read them in the context of things we should
be on the lookout for other teams working on, so I'm not in the
mindset of expecting to actually find an action item in there. This
is just a matter of retraining myself on what to look for in those
announcements in the future.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-23 Thread Zane Bitter

On 22/05/17 05:39, Alexandra Settle wrote:

1. We could combine all of the documentation builds, so that each
project has a single doc/source directory that includes developer,
contributor, and user documentation. This option would reduce the number
of build jobs we have to run, and cut down on the number of separate
sphinx configurations in each repository. It would completely change the
way we publish the results, though, and we would need to set up
redirects from all of the existing locations to the new locations and
move all of the existing documentation under the new structure.


+0 in the short term, +1 for the long term


2. We could retain the existing trees for developer and API docs, and
add a new one for "user" documentation. The installation guide,
configuration guide, and admin guide would move here for all projects.
Neutron's user documentation would include the current networking guide
as well. This option would add 1 new build to each repository, but would
allow us to easily roll out the change with less disruption in the way
the site is organized and published, so there would be less work in the
short term.


+1, at least in the short term

As we've been discussing since the summit, Heat has a bunch of 
documentation (specifically the Template Guide) that is end-user-facing 
but needs to be generated from the Heat repo (because it uses 
introspection on the code). Right now it's buried in the (OpenStack) 
developer-facing documentation, which is not very discoverable for end 
users. So generating the user guide from the project repos would allow 
us to move the Template Guide.



3. We could do option 2, but use a separate repository for the new
user-oriented documentation. This would allow project teams to delegate
management of the documentation to a separate review project-sub-team,
but would complicate the process of landing code and documentation
updates together so that the docs are always up to date.


-1 for the reasons above.

cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-23 Thread Sean McGinnis
On Mon, May 22, 2017 at 05:50:50PM -0500, Anne Gentle wrote:
> On Mon, May 22, 2017 at 5:41 PM, Sean McGinnis 
> wrote:
> 
> >
> > [snip]
> >
> 
> Hey Sean, is the "right to merge" the top difficulty you envision with 1 or
> 2? Or is it finding people to do the writing and reviews? Curious about
> your thoughts and if you have some experience with specific day-to-day
> behavior here, I would love your insights.
> 
> Anne

I think it's more about finding people to do the writing and reviews, though
having incentives like having more say in that area of things could be
beneficial for finding those people.

No specific experience to back this up, just the thought that someone coming
in could see a narrowly scoped repo and think "oh, that looks easy. I can help
with that." versus someone coming in to the whole project repo and getting
scared away because there's a bunch of things they don't understand and are
not sure where they can most easily jump in and contribute.

To be fair, I think all three options are good and could work.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] A proposal for hackathon to reduce deploy time of TripleO

2017-05-23 Thread Emilien Macchi
On Tue, May 23, 2017 at 6:47 AM, Sagi Shnaidman  wrote:
> Hi, all
>
> I'd like to propose an idea to make one or two days hackathon in TripleO
> project with main goal - to reduce deployment time of TripleO.
>
> - How could it be arranged?
>
> We can arrange a separate IRC channel and Bluejeans video conference session
> for hackathon in these days to create a "presence" feeling.

+1 for IRC. We already have #openstack-sprint, which we could re-use.
Also +1 for video conference, to get face to face interactions,
promptly and unscheduled.

> - How to participate and contribute?
>
> We'll have a few responsibility fields like tripleo-quickstart, containers,
> storage, HA, baremetal, etc - the exact list should be ready before the
> hackathon so that everybody could assign to one of these "teams". It's good
> to have somebody in team to be stakeholder and responsible for organization
> and tasks.

Before running the sprint, we should first track bugs / blueprints
related to deployment speed.
Not everyone in our team understands why some parts of deployments
take time, so we need to make it visible so everyone can know how they
can help during the sprint.

Maybe we could create a Launchpad tag "deployment-time" to track bugs
related to it. We should also make prioritization so we can work on
the most critical ones first.

I like the idea of breaking down the skills into small groups:

- High Availability: deployment & runtime of Pacemaker optimization
- Puppet: anything related to the steps (a bit more general but only a
few of us have expertise on it, we could improve it).
- Heat: work with the Heat team if we have some pending bugs about slowness.
- Baremetal: ironic / workflows
- tripleo-quickstart: tasks that can be improved / optimized

This is a proposal ^ feel free to (comment,add|remove) anything.


> - What is the goal?
>
> The goal of this hackathon to reduce deployment time of TripleO as much as
> possible.
>
> For example part of CI team takes a task to reduce quickstart tasks time. It
> includes statistics collection, profiling and detection of places to
> optimize. After this tasks are created, patches are tested and submitted.
>
> The prizes will be presented to teams which saved most of time :)
>
> What do you think?

Excellent idea, thanks Sagi for proposing it.

Another thought: before doing the sprint, we might want to make sure
our tripleo-ci is in stable shape (which is not the case right now, we
have 4 alerts and one of them affects ovb-ha)...

> Thanks
> --
> Best regards
> Sagi Shnaidman



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack and OVN integration is failing on multi-node physical machines.(probably a bug)

2017-05-23 Thread pranab boruah
Hi,
We are building a multi-node physical set-up of OpenStack Newton. The
goal is to finally integrate the set-up with OVN.
Lab details:
1 Controller, 2 computes

CentOS-7.3, OpenStack Newton, separate network for mgmt and tunnel
OVS version: 2.6.1

I followed the following guide to deploy OpenStack Newton using the
PackStack utility:

http://networkop.co.uk/blog/2016/11/27/ovn-part1/

Before I started integrating with OVN, I made sure that the set-up(ML2
and OVS) was working by launching VMs. VMs on cross compute node were
able to ping each other.

Now, I followed the official guide for OVN integration:

http://docs.openstack.org/developer/networking-ovn/install.html

Error details :
Neutron Server log shows :

 ERROR networking_ovn.ovsdb.impl_idl_ovn [-] OVS database connection
to OVN_Northbound failed with error: '{u'error': u'unknown database',
u'details': u'get_schema request specifies unknown database
OVN_Northbound', u'syntax': u'["OVN_Northbound"]'}'. Verify that the
OVS and OVN services are available and that the 'ovn_nb_connection'
and 'ovn_sb_connection' configuration options are correct.

The issue is that ovsdb-server on the controller binds to port
6641 instead of 6640.

#  netstat -putna | grep 6641

tcp0  0 192.168.10.10:6641  0.0.0.0:*
LISTEN  809/ovsdb-server

# netstat -putna | grep 6640 (shows no output)

Now, OVN NB DB tries to listen on port 6641, but since it is used by
the ovsdb-server, it's unable to. PID of ovsdb-server is 809, while
the pid of OVN NB DB is 4217.

OVN NB DB logs shows this:

2017-05-23T12:58:09.444Z|01421|ovsdb_jsonrpc_server|ERR|ptcp:6641:0.0.0.0:
listen failed: Address already in use
2017-05-23T12:58:11.946Z|01422|socket_util|ERR|6641:0.0.0.0: bind:
Address already in use
2017-05-23T12:58:14.448Z|01423|socket_util|ERR|6641:0.0.0.0: bind:
Address already in use

Solutions I tried:
1) Completely fresh-installing everything.
2) Tried with OVS 2.6.0 and 2.7; same issue on all.
3) Checked and verified: SB and NB configuration options in
plugin.ini are exactly correct.
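
For what it's worth, a quick hedged check that shows the conflict
without parsing netstat output (address and ports taken from the
report above):

    import socket

    # connect_ex() returns 0 when something is already listening.
    for port in (6640, 6641):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1)
            busy = s.connect_ex(('192.168.10.10', port)) == 0
            print(port, 'in use' if busy else 'free')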

Please help. Let me know if additional details are required.

Thanks,
Pranab

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] uWSGI help for Congress

2017-05-23 Thread Chris Dent

On Mon, 22 May 2017, Eric K wrote:


If someone out there knows uWSGI and has a couple spare cycles to help
Congress project, we'd super appreciate it.

The regular contributors to Congress don't have experience with uWSGI and
could definitely use some help getting started with this goal. Thanks a ton!


Is the issue that you need get WSGI working at all (that is, need to
create a WSGI app for running the api service), or existing WSGI
tooling, made to work with mod_wsgi, needs to be adapted to work
with uwsgi? In either case, if you're able to point me at existing
api service code I might be able to provide some pointers.

In the meantime some potentially useful links:

* some notes I took on switching nova and devstack over to uwsg:

https://etherpad.openstack.org/p/devstack-uwsgi

* devstack code for nova+uwsgi

https://review.openstack.org/#/c/457715/

* rewrite of nova's wsgi application to start up properly

https://review.openstack.org/#/c/457283/

This last one might be most useful as it looks like congress is
using an api startup model (for the non-WSGI case) similar to
nova's.
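
If it helps to see the target shape: uwsgi ultimately just needs a
module that exposes an 'application' callable. A deliberately generic
sketch (not Congress's real module layout; config loading and routing
omitted):

    # wsgi_app.py -- generic placeholder, *not* Congress's actual code.
    # uwsgi imports this module and serves the 'application' callable; a
    # real entry point would parse oslo.config and build the API router
    # the way nova's wsgi application (linked above) does.
    def application(environ, start_response):
        body = b'api service placeholder\n'
        start_response('200 OK',
                       [('Content-Type', 'text/plain'),
                        ('Content-Length', str(len(body)))])
        return [body]

You can sanity-check it with something like
'uwsgi --http :9090 --wsgi-file wsgi_app.py' before wiring in the real
app.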


--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-23 Thread Alexandra Settle
 
> I prefer option 1, which should be obvious from Anne's reference to my 
exiting work to enable that. Option 2 seems yucky (to me) because it adds yet 
another docs tree and sphinx config to projects, and thus is counter to my hope 
that we'll have one single docs tree per repo.
> 
> I disagree with option 3. It seems to be a way to organize the content 
simply to wall-off access to parts of it; e.g. docs people can't land stuff in 
the code part and potentially some code people can't land stuff in the docs 
part. However, docs should always land with the code that changed them. 
Separating the docs into a separate repo removes the ability to land docs with 
code.
> 
> I really like the plan Alex has described about docs team representatives 
participating more directly with the projects. If those 

+1 for option #1. I strongly believe the best way to keep all a
project's docs up to date with ongoing code changes is to make those
changes to contain in-repo docs updates as well. And here developers
should use the chance and benefit from rich experience of docs team
representatives, as no one else knows more about writing technical
documentation best practices!

I must admit, I’m quite surprised by everyone’s preference for option 1. 
Although not disappointed. I’m interested to see where and how this goes!

Pros:
* Code review shall cover docs changes and code changes at once, which
is great

+1000 to this. This is a lot of what I’m pushing for. Teams that have already 
implemented our project-specific installation guides say this as their #1 
feedback.
I’m hoping we can get more positive responses for this too.

* Docs team contributors may choose to start acting as representatives,
which is become mentors and/or "docs guarding sentries" rather than
technical writers. This offloads writing to projects' devs and perhaps
resolves the issue as mentoring/reviewing requires less time, or more haha.
* Developers shall become technical docs writers as well, that's a
really exciting perspective to advance and know more about your
projects! And, who knows, this as well may end up bringing more man
power to the docs team, after all.

Cons:
there are none, let's be optimistic! Developers love document changes
for code, we all know that.

Haha amazing! No cons! I guess my only concern is that this is going to be a *lot* 
of work, and it can’t just fall on the doc team. We will have to move all the 
documentation to the appropriate repos, and build this infrastructure. As 
previously noted, there has been a dip in contributions to dev and doc, and 
it’s been hard to get people to work as CPLs to the doc team.

Do we think it is possible to make this a goal for the cycle across the board, 
and ensure we have this completed by $RELEASE?

representatives should be able to add a +2 or -2 to project patches,
then make those representatives core reviewers for the respective
project. Like every other core reviewer, they should be trusted to use
good judgement for choosing what to review and what score to give it.

So, I’ve been a docs core for the OpenStack-Ansible project for some time now 
and this works really well within our structure. I do not merge anything unless 
it has a dev +2 before I come along (unless it is a trivial doc-only 
spelling/grammar change). I think there is a lot of community fear that if you 
give a writer core status on a project, they’re just going to run wild and 
pass things they don’t understand. I can’t speak for everyone, but I can say 
that this has been working really well in the OSA community. We now have 3 doc 
cores. 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Active or passive role with our database layer

2017-05-23 Thread Chris Dent

On Tue, 23 May 2017, Sean Dague wrote:


Do you have an example of an Open Source project that (after it was
widely deployed) replaced their core storage engine for their existing
users?


That's not the point here. The point is that new deployments may
choose to use a different one and old ones can choose to change if
they like (but don't have to) if storage is abstracted.

The notion of a "core storage engine" is not something that I see as
currently existing in OpenStack. It is clear it is something that at
least you and likely several other people would like to see.

But it is most definitely not something we have now and as I
responded to Monty, getting there from where we are now would be a
huge undertaking with as yet unproven value [1].


I do get that when building more targeted things, this might be a value,
but I don't see that as a useful design constraint for OpenStack.


Completely the opposite from my point of view. When something is as
frameworky as OpenStack is (perhaps accidently and probably
unfortunately) then _of course_ replaceable DBs are the norm,
expected, useful and potentially required to satisfy more use cases.

Adding specialization (tier 1?) is probably something we want and
want to encourage but it is not something we should build into the
"core" of the "product".

But there's that philosophical disagreement again. I'm not sure we
can resolve that. What I'm hoping is that by starting the ball
rolling other people will join in and people like you and me can
step out of the way.

[1] Of the issues described elsewhere in the thread the only one
which seems to be a bit sticking point is the trigger thing, and
there's significant disagreement on that being "okay".

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Missing answers on Pike release goals

2017-05-23 Thread Dmitry Tantsur

Not an offender, apparently, but lemme throw some less optimistic views here.

On 05/23/2017 12:40 PM, Dean Troyer wrote:

OK, I'll bite, being one of the until-last-week offenders...

On Tue, May 23, 2017 at 4:59 AM, Thierry Carrez  wrote:

As part of release management we remind projects of the release cycle
deadlines, including the ones regarding the release goals process.

According to [1], "each PTL is responsible for adding their planning
artifact links to the goal document before the first milestone
deadline", and "if the goal does not apply to a project or the project
has already met the goal, the PTL should explain why that is the case,
instead of linking to planning artifacts".

However, for Pike goals we are 6 weeks past the pike-1 milestone, and we
still have about half the project teams that haven't provided answers
(despite two reminders posted in the release countdown emails). Such a
large share goes beyond the usual occasional misses, and points to a
more systemic issue, that we might want to address before the Queens
campaign starts.

A few questions to bootstrap the discussion:

- Is it that the reporting process is too heavy ? (requiring answers
from projects that are obviously unaffected)


I feel like the answer is somewhere here. Maybe filling in a field in a 
spreadsheet could be easier for folks?




I've thought about this, OSC was unaffected by one of the goals but
not the other, so I can't really hide in this bucket.  It really is
not that hard to put up a review saying "not me".


- Is it that people ignore the deadlines and missed the reminders ?
(some unaffected project teams also do not do releases, and therefore
ignore the release countdown emails)


In my case, not so much "ignore" but "put off until tomorrow" where
tomorrow turned in to 6 weeks.  I really don't have a hard reason
other than simply not prioritizing it because I knew one of the goals
was going to take some coordination work


- Is it that in periods of resource constriction, having release-wide
goals is just too ambitious ? (although anecdotal data shows that most
projects have already completed their goals)


While this may certainly be a possibility, I don't think we should
give in to the temptation to blame too much on losing people.  OSC was
hit by this too, yet the loss of core and contributors did not affect
the goals not getting done, that falls squarely on the PTL in this
case.


How do you define "too much" here? We've lost all people who committed to work 
on one of the goals. Does it count?


Also, I'm sorry, but OSC is a bad example here. The WSGI goal did not apply to 
you at all, and I suspect you were already more or less (or fully) Python 3 
compatible.





- Is it that the goals should be more clearly owned by the community
beyond just the TC? (and therefore the goals should be maintained in a
repository with simpler approval rules and a larger approval group)


I do think that at least the perception of the goals being community
things should be increased if we can.  We fall in to the problem of
the TC proposing something and getting pushback about projects being
forced to do more work, yet we hear so much about how the TC needs to
take more leadership in technical direction (see TC vision feedback
for the latest round of this).


I won't be surprised to learn that these are different people :) Or at least 
that some people do not understand "provide leadership" as "ask to do more work" 
(not meaning anything negative here, especially since I believe that the goals 
we have are important indeed).


But I do agree that there doesn't seem to be enough buy-in from the community on 
the goals. Probably the reason is well-known: contributors are paid by the 
companies backing them, so they have to prove that working on the goals 
benefits their employers.




I'm not sure that the actual repo is the issue, are we having problems
getting reviews to approve these?  I don't see this but I'm also not
tracking the time to takes for them to get approved.

I believe it is just going to have to be a social thing that we need
to continue to push forward.

dt




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] revised Postgresql deprecation patch for governance

2017-05-23 Thread Sean Dague
On 05/22/2017 11:26 PM, Matt Riedemann wrote:
> On 5/22/2017 10:58 AM, Sean Dague wrote:
>> I think these are actually compatible concerns. The current proposal to
>> me actually tries to address A1 & B1, with a hint about why A2 is
>> valuable and we would want to do that.
>>
>> It feels like there would be a valuable follow on in which A2 & B2 were
>> addressed which is basically "progressive enhancements can be allowed to
>> only work with MySQL based backends". Which is the bit that Monty has
>> been pushing for in other threads.
>>
>> This feels like what Tier 2 support looks like. A basic SQLA and pray
>> so that if you live behind SQLA you are probably fine (though not
>> tested), and then test and advanced feature roll out on a single
>> platform. Any of that work might port to other platforms over time, but
>> we don't want to make that table stakes for enhancements.
> 
> I think this is reasonable and is what I've been hoping for as a result
> of the feedback on this.
> 
> I think it's totally fine to say tier 1 backends get shiny new features.
> I mean, hell, compare the libvirt driver in nova to all other virt
> drivers in nova. New features are written for the libvirt driver and we
> have to strong-arm them into other drivers for a compatibility story.
> 
> I think we should turn on postgresql as a backend in one of the CI jobs,
> as I've noted in the governance change - it could be the nova-next
> non-voting job which only runs on nova, but we should have something
> testing this as long as it's around, especially given how easy it is to
> turn this on in upstream CI (it's flipping a devstack variable).

Postgresql support shouldn't be in devstack. If we're taking a tier 2
approach, someone needs to carve out database plugins from devstack and
pg would be one (as could galera, etc.).

The historical artifact that pg was maintained in devstack while much
more widely used backends were not is part of the issue.

It would also be a good test case as to whether there are pg-focused
folks out there willing to do this basic devstack plugin / job setup
work.
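
A first cut could be pretty small - something like this rough,
untested sketch (the repo name and hook details are hypothetical):

    # devstack-plugin-postgresql/devstack/settings
    disable_service mysql
    enable_service postgresql

    # devstack-plugin-postgresql/devstack/plugin.sh
    if [[ "$1" == "stack" && "$2" == "install" ]]; then
        # reuse the database helpers devstack already ships
        install_database
        configure_database
    fi

Jobs that want pg would then just enable the plugin, instead of
devstack carrying the support in-tree.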

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Active or passive role with our database layer

2017-05-23 Thread Sean Dague
On 05/23/2017 07:23 AM, Chris Dent wrote:

>> Some operations have one and only one "right" way to be done. For
>> those operations if we take an 'active' approach, we can implement
>> them once and not make all of our deployers and distributors each
>> implement and run them. However, there is a cost to that. Automatic
>> and prescriptive behavior has a higher dev cost that is proportional
>> to the number of supported architectures. This then implies a need to
>> limit deployer architecture choices.
> 
> That "higher dev cost" is one of my objections to the 'active'
> approach but it is another implication that worries me more. If we
> limit deployer architecture choices at the persistence layer then it
> seems very likely that we will be tempted to build more and more
> power and control into the persistence layer rather than in the
> so-called "business" layer. In my experience this is a recipe for
> ossification. The persistence layer needs to be dumb and
> replaceable.

Why?

Do you have an example of an Open Source project that (after it was
widely deployed) replaced their core storage engine for their existing
users?

I do get that when building more targeted things, this might be
valuable, but I don't see that as a useful design constraint for
OpenStack.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] revised Postgresql deprecation patch for governance

2017-05-23 Thread Chris Dent

On Mon, 22 May 2017, Sean Dague wrote:


> This feels like what Tier 2 support looks like. A basic SQLA and pray
> so that if you live behind SQLA you are probably fine (though not
> tested), and then test and advanced feature roll out on a single
> platform. Any of that work might port to other platforms over time,
> but we don't want to make that table stakes for enhancements.


I've often wondered why what's being called "Tier 1" (advanced
features) here isn't something done downstream of "generic"
OpenStack.

Which is not to say it would have to be closed source or vendor
oriented. Simply not here. It may be we've got enough to deal with
here.

The 'external' model described by Monty makes things that are not
here easier to manage (but, to be fair, not necessarily easier to
make).

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Active or passive role with our database layer

2017-05-23 Thread Chris Dent

On Sun, 21 May 2017, Monty Taylor wrote:

> As the discussion around PostgreSQL has progressed, it has become
> clear to me that there is a decently deep philosophical question on
> which we do not currently share either definition or agreement. I
> believe that the lack of clarity on this point is one of the things
> that makes the PostgreSQL conversation difficult.


Good analysis. I think this does hit at least some of the core
differences, maybe even most. And as with so many other things we do
in OpenStack, because we have landed somewhere in the middle between
the two positions, we find ourselves in a pickle (see, for example,
the different needs for and attitudes to orchestration underlying
this thread [1]).

You're right to say we need to pick one and move in that direction
but our standard struggles with reaching agreement across the entire
community, especially on an opinionated position, will need to be
overcome. Writing about it to make it visible is a good start.

In the "external" approach, we document the expectations and then write the 
code assuming that the database is set up appropriately. We may provide some 
helper tools, such as 'nova-manage db sync' and documentation on the sequence 
of steps the operator should take.


In the "active" approach, we still document expectations, but we also 
validate them. If they are not what we expect but can be changed at runtime, 
we change them overriding conflicting environmental config, and if we can't, 
we hard-stop indicating an unsuitable environment. Rather than providing 
helper tools, we perform the steps needed ourselves, in the order they need 
to be performed, ensuring that they are done in the manner in which they need 
to be done.


I think there's a middle ground here, "externalize but validate",
which is:

* document expectations
* validate them
* do _not_ change at runtime, but tell people what's wrong
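
In shell terms, the sort of thing I mean is (purely illustrative; the
check and the expected value are made up):

    # hypothetical pre-flight check, run before the service starts
    charset=$(mysql -N -s -e 'SELECT @@character_set_database' nova)
    if [ "$charset" != "utf8" ]; then
        echo "nova database charset is '$charset', expected 'utf8';" >&2
        echo "refusing to start - fix the database and try again" >&2
        exit 1
    fi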

> Some operations have one and only one "right" way to be done. For
> those operations if we take an 'active' approach, we can implement
> them once and not make all of our deployers and distributors each
> implement and run them. However, there is a cost to that. Automatic
> and prescriptive behavior has a higher dev cost that is proportional
> to the number of supported architectures. This then implies a need to
> limit deployer architecture choices.


That "higher dev cost" is one of my objections to the 'active'
approach but it is another implication that worries me more. If we
limit deployer architecture choices at the persistence layer then it
seems very likely that we will be tempted to build more and more
power and control into the persistence layer rather than in the
so-called "business" layer. In my experience this is a recipe for
ossification. The persistence layer needs to be dumb and
replaceable.

> On the other hand, taking an 'external' approach allows us to
> federate the work of supporting the different architectures to the
> deployers. This means more work on the deployer's part, but also
> potentially a greater amount of freedom on their part to deploy
> supporting services the way they want. It means that some of the
> things that have been requested of us - such as easier operation and
> an increase in the number of things that can be upgraded with
> no-downtime - might become prohibitively costly for us to implement.


That's not necessarily the case. Consider that in an external
approach, where the persistence layer is opaque to the application,
third parties (downstream consumers, the market, the invisible hand,
etc) have the option to do all kinds of wacky stuff.
Probably avec containers™.

In that model, the core functionality is simple and adequate but not
deluxe. Deluxe is an after-market add on.

> BUT - without a decision as to what our long-term philosophical
> intent in this space is that is clear and understandable to everyone,
> we cannot have successful discussions about the impact of
> implementation choices, since we will not have a shared understanding
> of the problem space or the solutions we're talking about.


Yes.

> For my part - I hear complaints that OpenStack is 'difficult' to
> operate and requests for us to make it easier. This is why I have
> been advocating some actions that are clearly rooted in an 'active'
> worldview.


If OpenStack were more of a monolith instead of a system with three to
many different databases, along with an assortment of other ways to do
other kinds of (short-term) persistence, I would find the 'active'
model a good option. If we were to start over I'd say let's do that.

But as it stands, implementing actually useful 'active' management of
the database feels like a very large amount of work that will take so
long that by the time we complete it, it will not just be out of date
but will also limit us.

External but validate feels much more viable. What we really want is
that people can get reasonably good results without trying that hard
and great (but also various) results w

[openstack-dev] [L2-Gateway] Query on redundant configuration of OpenStack's L2 gateway

2017-05-23 Thread Ran Xiao
Hi All,

  I have a query on usage of the L2GW NB API.
  I have to integrate L2GW with ODL, and there are two L2GW nodes
named l2gw1 and l2gw2. The OVS HW VTEP emulator is running on each
node.
  Does the following command work for configuring these two nodes as
an L2GW HA cluster?
  
  neutron l2-gateway-create gw_name --device name=l2gw1,interface_names=eth2 \
--device name=l2gw2,interface_names=eth2
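
  If that works, I plan to then attach the gateway to a tenant network
with the following (syntax taken from the networking-l2gw examples,
not yet tested on my setup):

  neutron l2-gateway-connection-create gw_name net_name \
--default-segmentation-id 100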

  Version : stable/ocata

  Thanks in advance.

BR,
Ran Xiao



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] A proposal for hackathon to reduce deploy time of TripleO

2017-05-23 Thread Sagi Shnaidman
Hi, all

I'd like to propose a one- or two-day hackathon for the TripleO
project, with the main goal of reducing the deployment time of TripleO.

- How could it be arranged?

We can arrange a separate IRC channel and a Bluejeans video conference
session for those days to create a feeling of "presence".

- How to participate and contribute?

We'll have a few areas of responsibility, like tripleo-quickstart,
containers, storage, HA, baremetal, etc. The exact list should be
ready before the hackathon so that everybody can sign up with one of
these "teams". It's good for each team to have a stakeholder who is
responsible for organization and tasks.

- What is the goal?

The goal of this hackathon is to reduce the deployment time of TripleO
as much as possible.

For example, part of the CI team could take on reducing the time spent
in quickstart tasks. That includes collecting statistics, profiling,
and finding the places to optimize. After that, tasks are created and
patches are tested and submitted.
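
For the quickstart part, even Ansible's profile_tasks callback gives a
first data point - roughly this, assuming a standard quickstart
checkout:

    # print per-task timings at the end of a quickstart run
    export ANSIBLE_CALLBACK_WHITELIST=profile_tasks
    bash quickstart.sh --tags all $VIRTHOST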

Prizes will be presented to the teams that save the most time :)

What do you think?

Thanks
-- 
Best regards
Sagi Shnaidman
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Missing answers on Pike release goals

2017-05-23 Thread Dean Troyer
OK, I'll bite, being one of the until-last-week offenders...

On Tue, May 23, 2017 at 4:59 AM, Thierry Carrez  wrote:
> As part of release management we remind projects of the release cycle
> deadlines, including the ones regarding the release goals process.
>
> According to [1], "each PTL is responsible for adding their planning
> artifact links to the goal document before the first milestone
> deadline", and "if the goal does not apply to a project or the project
> has already met the goal, the PTL should explain why that is the case,
> instead of linking to planning artifacts".
>
> However, for Pike goals we are 6 weeks past the pike-1 milestone, and we
> still have about half the project teams that haven't provided answers
> (despite two reminders posted in the release countdown emails). Such a
> large share goes beyond the usual occasional misses, and points to a
> more systemic issue, that we might want to address before the Queens
> campaign starts.
>
> A few questions to bootstrap the discussion:
>
> - Is it that the reporting process is too heavy ? (requiring answers
> from projects that are obviously unaffected)

I've thought about this, OSC was unaffected by one of the goals but
not the other, so I can't really hide in this bucket.  It really is
not that hard to put up a review saying "not me".
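
The whole exercise is roughly this (assuming the Pike goal files are
where I remember them):

    git clone https://git.openstack.org/openstack/governance
    cd governance
    # add a short "not applicable" note under your team's section in
    # goals/pike/python35.rst and/or goals/pike/deploy-api-in-wsgi.rst
    git commit -a && git review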

> - Is it that people ignore the deadlines and missed the reminders ?
> (some unaffected project teams also do not do releases, and therefore
> ignore the release countdown emails)

In my case, not so much "ignore" but "put off until tomorrow" where
tomorrow turned into 6 weeks.  I really don't have a hard reason
other than simply not prioritizing it because I knew one of the goals
was going to take some coordination work.

> - Is it that in periods of resource constriction, having release-wide
> goals is just too ambitious ? (although anecdotal data shows that most
> projects have already completed their goals)

While this may certainly be a possibility, I don't think we should
give in to the temptation to blame too much on losing people.  OSC was
hit by this too, yet the loss of core and contributors did not affect
the goals not getting done, that falls squarely on the PTL in this
case.

> - Is it that the goals should be more clearly owned by the community
> beyond just the TC? (and therefore the goals should be maintained in a
> repository with simpler approval rules and a larger approval group)

I do think that at least the perception of the goals being community
things should be increased if we can.  We fall into the problem of
the TC proposing something and getting pushback about projects being
forced to do more work, yet we hear so much about how the TC needs to
take more leadership in technical direction (see TC vision feedback
for the latest round of this).

I'm not sure that the actual repo is the issue; are we having problems
getting these reviews approved?  I don't see it, but I'm also not
tracking the time it takes for them to get approved.

I believe it is just going to have to be a social thing that we need
to continue to push forward.

dt

-- 

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] Vitrage attendance in the PTG in September

2017-05-23 Thread Afek, Ifat (Nokia - IL/Kfar Sava)
Hi,

The next PTG (Project Team Gathering) will be held in Denver in September. We 
need to decide whether we would like to reserve a room for Vitrage, where we 
could hold design sessions for the next release.
What do you say? Are you interested in attending the PTG?

Thanks,
Ifat.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

