Re: [openstack-dev] [vitrage] scenario evaluator not enabled by default

2016-08-14 Thread Yujun Zhang
Thanks for the explanation, Elisha. I understand the design now.

But I could not find the statement that enables the evaluator after the
initial phase.

Could you help to point it out?
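
To be concrete, the flow I understood from your description is roughly the
following (hedged pseudocode, not actual Vitrage code; all names are
illustrative). The line I could not locate is the one that flips the
evaluator on:

    def start(entity_graph, datasources, evaluator):
        # Initial "consistency" round: pull everything from the datasources
        # and insert it into the entity graph.
        for datasource in datasources:
            for entity in datasource.get_all():
                entity_graph.add_or_update(entity)

        # Only after the initial collection phase is the evaluator enabled
        # and run over the whole graph against its templates.
        evaluator.enabled = True  # <-- the statement I could not find
        evaluator.run(entity_graph)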
--
Yujun

On Thu, Aug 11, 2016 at 11:42 PM Rosensweig, Elisha (Nokia - IL) <
elisha.rosensw...@nokia.com> wrote:

> This is on purpose.
>
> When Vitrage is started, it first runs a "consistency" round where it gets
> all the resources from its datasources and inserts them into the entity
> graph. Once this initial phase is over, the evaluator is run over the entire
> entity graph to check for meaningful patterns based on its templates.
>
> The reason for this process is to avoid too much churn during the initial
> phase when Vitrage comes up. With so many changes done to the entity graph,
> it's best to wait for the initial collection phase to finish and then to do
> the analysis.
>
> Elisha
>
> > From: Yujun Zhang [mailto:zhangyujun+...@gmail.com]
> > Sent: Thursday, August 11, 2016 5:49 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [vitrage] scenario evaluator not enabled by
> default
> >
> > Sorry for having put a URL from a forked repo. It should be
> > https://github.com/openstack/vitrage/commit/bdba10cb71b2fa3744e4178494fa860303ae0bbe#diff-6f1a277a2f6e9a567b38d646f19728bcL36
>
> > But the content is the same
> > --
> > Yujun
>
> > On Thu, Aug 11, 2016 at 10:43 PM Yujun Zhang wrote:
> > It seems the scenario evaluator is not enabled when Vitrage is started
> by the devstack installer.
> >
> > I dug a bit into the history; it seems the default value for the evaluator
> was changed from True to False in an earlier commit [1].
> >
> > Does this break the starting of the evaluator, or have I missed some steps
> to enable it explicitly?
> >
> > - [1]
> > https://github.com/openzero-zte/vitrage/commit/bdba10cb71b2fa3744e4178494fa860303ae0bbe#diff-6f1a277a2f6e9a567b38d646f19728bcL36
>
> > --
> > Yujun
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Porting security-related utils, Context and dependencies to Mistral-Lib

2016-08-14 Thread Renat Akhmerov
Hi Ryan,

Keeping in mind that 'mistral-lib' must not depend on 'mistral', below are my
comments:

- I think porting the keystone utils over to mistral-lib is OK; I don't see
  any other options ('mistral' will depend on 'mistral-lib' but not the other
  way around).
- Porting the entire mistral.context is OK too, for the same reason.
- Porting the entire exceptions.py module is OK. But all general exceptions
  not related to actions should not be under 'actions/api', because this
  package should contain only the stuff needed for implementing new actions.
  I would suggest we move all the exceptions into 'mistral_lib/exceptions.py'
  but keep ActionException (and any other exceptions inherited from it) in
  'mistral_lib/actions/api.py'. That way the design stays clean. As a rule of
  thumb: we should keep as little as possible under 'api', only the stuff that
  is really supposed to be stable and hence can be treated as API.
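
To make the proposal concrete, here is a minimal sketch of the layout I have
in mind (class names other than ActionException are illustrative):

    # mistral_lib/exceptions.py: general, non-action-specific exceptions
    class MistralError(Exception):
        """Base class for all mistral-lib exceptions (name illustrative)."""


    # mistral_lib/actions/api.py: only what action authors need to import
    class ActionException(MistralError):
        """Base class for errors raised from custom actions."""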

What do you think?

Any other comments are very welcome.

Renat Akhmerov
@Nokia

> On 8 Aug 2016, at 21:33, Ryan Brady wrote:
> 
> In accordance with the spec[1], I started a patch[2] to port security-related 
> items from mistral to mistral-lib.  This may not be the right way to approach 
> this task and I'm hoping the patch provides a means to illustrate the problem 
> and starts a discussion on the right solution.
> 
> A custom action that creates a client that requires keystone auth will need 
> to get an endpoint for a given project to create a client object, so I ported 
> over the utility class[3] that deals with keystone.  That utility class 
> requires the mistral.context.
> 
> I started looking at the context requirements from two separate points of 
> view:
>  - create a security context in mistral lib that could be an attribute in the 
> mistral context
>  - port entire mistral context over
> 
> When I looked at the attributes[4] currently in the mistral.context, most if 
> not all of them seem to be security related anyway.  I chose to port the 
> entire context over, but that also required dependencies on 4 threading 
> utility methods[5] and mistral.exceptions[6], so those were also ported over.
> 
> I would appreciate any feedback or discussion on the current proposed design.
> 
> Thanks,
> 
> Ryan
> 
> 
> [1] 
> https://specs.openstack.org/openstack/mistral-specs/specs/newton/approved/mistral-custom-actions-api.html
>  
> 
> 
> [2] https://review.openstack.org/#/c/352435/ 
> 
> 
> [3] 
> https://github.com/openstack/mistral/blob/master/mistral/utils/openstack/keystone.py
>  
> 
> 
> [4] 
> https://github.com/openstack/mistral/blob/master/mistral/context.py#L76-L87 
> 
> 
> [5] 
> https://github.com/openstack/mistral/blob/master/mistral/utils/__init__.py#L49-L94
>  
> 
> 
> [6] https://github.com/openstack/mistral/blob/master/mistral/exceptions.py 
> 
> 
> -- 
> Ryan Brady
> Cloud Engineering
> rbr...@redhat.com  
> 919.890.8925
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] os-capabilities library created

2016-08-14 Thread Yingxin Cheng
Hi,

I'm concerned with the dependencies between "os-capabilities" library and
all the other OpenStack services such as Nova, Placement, Ironic, etc.

Rather than embedding the universal "os-capabilities" library in the Nova,
Cinder, Glance, and Ironic services, which would introduce complexities if the
library versions differ, I'd prefer to hide this library behind the placement
service and expose consistent interfaces, as well as caps, to all the other
services. But the drawback here is also obvious: for example, when Nova wants
to support a new capability, the development will require os-capabilities
updates and related lib version bumps, which is inconvenient and seems
unnecessary.

So IMHO, the possible solution is:
* Let each service (Nova, Ironic, ...) manage its own capabilities under a
proper namespace such as "compute", "ironic", or "storage";
* Let os-capabilities define as few caps as possible, covering only the
cross-project ones;
* And also use os-capabilities to convert service-defined or user-defined
caps to a standardized and distinct form that the placement engine can
understand.


My two cents,
Yingxin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] RPC call not appearing to retry

2016-08-14 Thread Eric K
Hi all, I'm running into an issue with an oslo.messaging RPC call not
appearing to retry. If I do oslo_messaging.RPCClient(transport, target,
timeout=5, retry=10).call(self.context, method, **kwargs) using a topic
with no listeners, I consistently get the MessagingTimeout exception in 5
seconds, with no apparent retry attempt. Any tips on whether this is a
user error or a bug or a feature? Thanks so much!

It happens with both drivers rabbit and kombu+memory (oslo.messaging:
5.7.0; rabbitmq-server: 3.2.4-1).
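
For completeness, the call is essentially the following (a trimmed-down
sketch; the topic and method names are placeholders):

    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_transport(cfg.CONF)
    target = oslo_messaging.Target(topic='topic-with-no-listeners')
    client = oslo_messaging.RPCClient(transport, target, timeout=5, retry=10)

    # Raises MessagingTimeout after ~5 seconds, with no visible retry attempt.
    client.call({}, 'some_method', arg='value')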




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [daisycloud-core] Daisy installer demo is out

2016-08-14 Thread hu . zhijiang
Hi All,

The Daisycloud-core team is pleased to announce the first release of the
Daisy OpenStack Installer. You can download the demo from
http://www.daisycloud.org/static/files/installdaisy_el7_noarch.bin, and
the corresponding document is here:
http://www.daisycloud.org/static/files/demo_how_to.docx

In this phase, the Daisy OpenStack Installer is just a friendly web UI for
deploying OpenStack Mitaka by using Kolla. It will support bare-metal
deployment by using Ironic (and possibly Bifrost). To sum up, the Daisy
installer tries to make the most use of the upstream projects for deploying
OpenStack and to make them easy to use.

Thanks!


B.R.,
Zhijiang


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs] [Magnum] Using common tooling for API docs

2016-08-14 Thread Anne Gentle
Hi Hongbin,

Thanks for asking. I'd like teams to look for ways to innovate, and
integrating with the navigation is a good entry point on the way to OpenAPI
becoming a standard for OpenStack to use. That said, we have to move forward
and make progress.

Is Lars or anyone on the Magnum team interested in the web development work
to integrate with the sidebar? See the work at
https://review.openstack.org/#/c/329508 and my comments on
https://review.openstack.org/#/c/351800/ saying that I would like teams to
integrate first to provide the best web experience for people consuming the
docs.

Anne

On Fri, Aug 12, 2016 at 4:43 PM, Hongbin Lu  wrote:

> Hi team,
>
>
>
> As mentioned in the email below, Magnum is not using the common tooling for
> generating API docs, so we are excluded from the common navigation of the
> OpenStack APIs. I think we need to prioritize the work to fix that. BTW, I
> notice there is a WIP patch [1] for generating API docs by using Swagger.
> However, I am not sure whether Swagger counts as “common tooling” (docs team,
> please confirm).
>
>
>
> [1] https://review.openstack.org/#/c/317368/
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Anne Gentle [mailto:annegen...@justwriteclick.com]
> *Sent:* August-10-16 3:50 PM
> *To:* OpenStack Development Mailing List; openstack-docs@lists.openstack.org
> *Subject:* [openstack-dev] [api] [doc] API status report
>
>
>
> Hi all,
>
> I wanted to report on status and answer any questions you all have about
> the API reference and guide publishing process.
>
>
>
> The expectation is that we provide all OpenStack API information on
> developer.openstack.org. In order to meet that goal, it's simplest for
> now to have all projects use the RST+YAML+openstackdocstheme+os-api-ref
> extension tooling so that users see available OpenStack APIs in a sidebar
> navigation drop-down list.
>
>
>
> --Migration--
>
> The current status for migration is that all WADL content is migrated
> except for trove. There is a patch in progress and I'm in contact with the
> team to assist in any way. https://review.openstack.org/#/c/316381/
>
>
>
> --Theme, extension, release requirements--
>
> The current status for the theme, navigation, and Sphinx extension tooling
> is contained in the latest post from Graham proposing a solution for the
> release number switchover and offers to help teams as needed:
> http://lists.openstack.org/pipermail/openstack-dev/2016-August/101112.html
> I hope to meet the requirements deadline to get those changes landed.
> Requirements freeze is Aug 29.
>
>
>
> --Project coverage--
>
> The current status for project coverage is that these projects are now
> using the RST+YAML in-tree workflow and tools and publishing to
> http://developer.openstack.org/api-ref/ so they will be
> included in the upcoming API navigation sidebar intended to span all
> OpenStack APIs:
>
>
>
> designate http://developer.openstack.org/api-ref/dns/
>
> glance http://developer.openstack.org/api-ref/image/
> heat http://developer.openstack.org/api-ref/orchestration/
> ironic http://developer.openstack.org/api-ref/baremetal/
> keystone http://developer.openstack.org/api-ref/identity/
> manila http://developer.openstack.org/api-ref/shared-file-systems/
> neutron-lib http://developer.openstack.org/api-ref/networking/
> nova http://developer.openstack.org/api-ref/compute/
> sahara http://developer.openstack.org/api-ref/data-processing/
> senlin http://developer.openstack.org/api-ref/clustering/
> swift http://developer.openstack.org/api-ref/object-storage/
> zaqar http://developer.openstack.org/api-ref/messaging/
>
>
>
> These projects are using the in-tree workflow and common tools, but do not
> have a publish job in project-config in the jenkins/jobs/projects.yaml file.
>
>
>
> ceilometer
>
>
>
> --Projects not using common tooling--
>
> These projects have API docs but are not yet using the common tooling, as
> far as I can tell. Because of the user experience, I'm making a judgement
> call that these cannot be included in the common navigation. I have patched
> the projects.yaml file in the governance repo with the URLs I could
> screen-scrape, but if I'm incorrect please do patch the projects.yaml in
> the governance repo.
>
>
>
> astara
>
> cloudkitty
>
> congress
>
> magnum
>
> mistral
>
> monasca
>
> solum
>
> tacker
>
> trove
>
>
>
> Please reach out if you have questions or need assistance getting started
> with the new common tooling, documented here:
> http://docs.openstack.org/contributor-guide/api-guides.html
>
>
>
> For searchlight, looking at http://developer.openstack.org/api-ref/search/
> they have the build job, but the info is not complete yet.
>
>
>
> One additional project I'm not sure what to do with is networking-nfc,
> since I'm not sure it is considered a neutron API. Can I get help to sort
> that question out?
>
>
> --Redirects from old pages--
>
> We have been adding .htaccess redirects from the old
> api-ref-servicename.html on 

Re: [openstack-dev] [cinder] [nova] locking concern with os-brick

2016-08-14 Thread Patrick East
In case folks are not following comments on all the various forums for this
discussion: we've got changes up to address the concerns raised so far on
the immediate problem:

Devstack (changing default config option to be shared):
https://review.openstack.org/341744
Cinder (release note):  https://review.openstack.org/354501
Nova (release note):  https://review.openstack.org/354502
oslo.concurrency (updated config description):
https://review.openstack.org/355269

For addressing concerns like

I have enough experience to know that the notes will not be read.


Any suggestions on where we should document it? I am also not entirely
convinced a release note is the best place, especially since it isn't really
new for this release, and may be a thing for future ones too. Theoretically
this has been a problem since Cinder and Nova split off way back when; any
of the volume attach/detach operations has been at risk for this when run
on the same host. With os-brick in liberty and its named locks we got a
mechanism to prevent it, but this wasn't really documented anywhere (or
known to be an issue, as it turns out). I'm not sure what the normal
strategy is to give a heads up to folks using older releases that there
might be issues like this. Suggestions welcome.
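
For anyone trying to picture the failure mode: os-brick serializes
attach/detach with named external file locks along these lines (a hedged
sketch, not the exact os-brick code), and the exclusion only works if Nova
and Cinder resolve the lock file to the same lock_path:

    from oslo_concurrency import lockutils

    # The lock file is created under the configured lock_path. If
    # nova-compute and cinder-volume point at different lock_path
    # directories, each process takes its own file lock and they never
    # actually serialize against each other.
    @lockutils.synchronized('connect_volume', 'os-brick-', external=True)
    def connect_volume(connection_properties):
        pass  # scan for and attach the device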

Clint Byrum wrote:
>
>> Excerpts from Joshua Harlow's message of 2016-08-13 20:04:13 -0700:
>>
>>> The larger issue here IMHO is that there is now a  API
>>> around locking that might be better suited targeting an actual lock
>>> management system (say redis or zookeeper or etcd or ...).
>>>
>>
>> The more I look at this, the more I think this is just evidence that
>> the compute node itself needs to be an API unto itself. Whether it's
>> Neutron agents, cinder volumes, or what, nova-compute has a bunch of
>> under-the-covers interactions with things like this. It would make more
>> sense to put that into its own implementation behind a real public API
>> than what we have now: processes that just magically expect to be run
>> together with shared filesystems, lock dirs, network interfaces, etc.
>>
>> That would also go a long way to being able to treat the other components
>> more like microservices.
>>
>>
> I very much agree, the amount of interactions 'under-the-covers' makes it
> really hard to do many things (including understanding what those
> interactions even are). For example, how does someone even install
> 'os-brick' at this point, if it requires as a prerequisite that cinder and
> nova-compute be pre-setup with the ? Sucks I guess for
> people/operators/anyone using both components, that are already running
> those with different lock directories...
>
> IMHO the amount of time done 'hacking in solutions' like a shared lock
> directory (or moving both projects to share the same configuration somehow)
> would be better spent on an actual locking solution/service and thinking
> about microservices and ... but meh, what can u do...


I like the sound of a more unified way to interact with compute node
services. Having a standardized approach for inter-service synchronization
for controlling system resources would be sweet (even if it is just a more
sane way of using local file locks). Anyone know if there is existing work
in this area we can build off of? Or is the path forward a new
cross-project spec to try and lock down some requirements, use-cases, etc.?

As far as spending time to hack together solutions via the config settings
for this... well, it's pretty minimal in terms of effort compared to solving
the larger issue. Don't get me wrong though, I'm a fan of doing both in
parallel. Even if we have resources jump on board immediately I'm not
convinced we have a great chance to "fix" this for N in a more elegant
fashion, much less any of the older releases affected by this. That leads
me to believe we still need the shared config setting for at least a little
while in Devstack, and documentation for existing deployments or ones going
up with N.

-Patrick
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] locking concern with os-brick

2016-08-14 Thread Sean McGinnis
On Sat, Aug 13, 2016 at 08:04:13PM -0700, Joshua Harlow wrote:
> Sean McGinnis wrote:
> >On Fri, Aug 12, 2016 at 05:55:47AM -0400, Sean Dague wrote:
> >>A devstack patch was pushed earlier this cycle around os-brick -
> >>https://review.openstack.org/341744
> >>
> >>Apparently there are some os-brick operations that are only safe if the
> >>nova and cinder lock paths are set to be the same thing. Though that
> >>hasn't yet hit release notes or other documentation yet that I can see.
> >
> >Patrick East submitted a patch to add a release note on the Cinder side
> >last night: https://review.openstack.org/#/c/354501/
> >
> >>Is this a thing that everyone is aware of at this point? Are project
> >>teams ok with this new requirement? Given that lock_path has no default,
> >>this means we're potentially shipping corruption by default to users.
> >>The other way forward would be to revisit that lock_path by default
> >>concern, and have a global default. Or have some way that users are
> >>warned if we think they aren't in a compliant state.
> >
> >This is a very good point that we are shipping corruption by default. I
> >would actually be in favor of having a global default. Other than
> >requiring tooz for default global locking (with a lot of extra overhead
> >for small deployments), I don't see a better way of making sure the
> >defaults are safe for those not aware of the issue.
> 
> What is this 'lot of extra overhead' you might be talking about here?
> 
> You're free when using tooz to pick (or recommend) the backend that
> is the best for the API that you're trying to develop:
> 
> http://docs.openstack.org/developer/tooz/drivers.html
> 
> http://docs.openstack.org/developer/tooz/drivers.html#file is
> similar to the one that oslo.concurrency provides (they both share
> the same underlying lock impl via
> https://pypi.python.org/pypi/fasteners).
> 
> The larger issue here IMHO is that there is now a 
> API around locking that might be better suited targeting an actual
> lock management system (say redis or zookeeper or etcd or ...).

This is what I'm referring to by overhead. I agree, tooz with the file
driver is very low overhead. But if the projects are still using files
separately in lock_dir/nova and lock_dir/cinder, then it doesn't matter.

What I was (very poorly) trying to point out was this last paragraph.
There would be overhead in requiring zk to be set up to get distributed
locking, even for a single node deployment.

So basically, I agree with you completely. ;)
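
(For reference, the tooz file driver usage is about this small; the backend
URL and lock name below are just illustrative:)

    from tooz import coordination

    # Both nova-compute and cinder-volume would point at the same backend
    # and take the same named lock, which is what gives the cross-service
    # exclusion.
    coordinator = coordination.get_coordinator(
        'file:///var/lib/openstack/locks', b'cinder-volume-host1')
    coordinator.start()

    with coordinator.get_lock(b'openstack/shared/connect_volume'):
        pass  # do the attach/detach work

    coordinator.stop()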

> 
> For example we could have the following lock hierarchy convention:
> 
> openstack/
> ├── cinder
> ├── glance
> ├── neutron
> ├── nova
> └── shared
> 
> The *shared* 'folder' there (not really a folder in some of the
> backends) would be where shared locks (ideally with sub-folders
> defining categories that provide useful context/names describing
> what is being shared) would go, with project-specific locks using
> their respective folders (and so on).
> 
> Using http://docs.openstack.org/developer/tooz/drivers.html#file u
> could even create the above directory structure as is (right now);
> oslo.concurrency doesn't provide the right ability to do this since
> it has only one configuration option 'lock_path' (and IMHO although
> we could tweak oslo.concurrency more and more to do something like
> that it starts to enter the territory of 'if all you have is a
> hammer, everything looks like a nail').
> 
> That's my 3 cents :-P
> 
> -Josh
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][cinder][oslo] Lets move forward on making oslo.privsep logging more sane

2016-08-14 Thread Matt Riedemann
If you hadn't noticed yet, the n-cpu logs are super chatty due to 
privsep logging everything from debug=True at warning level:


http://logs.openstack.org/15/355215/1/check/gate-tempest-dsvm-neutron-src-python-novaclient/b92bced/logs/screen-n-cpu.txt.gz?level=TRACE

I opened a bug for this awhile back:

https://bugs.launchpad.net/os-brick/+bug/1593743

There are some competing patches between Angus and Walter:

https://review.openstack.org/#/c/350415/

https://review.openstack.org/#/c/339275/

Both turn down the noise, so we need to get one of them merged for Newton 
before the non-client library release freeze because I don't want to 
ship Newton with noise in the logs like this.


I think we could approve Angus' and maybe iterate/improve on it later 
for what Walter wants, which is making oslo.privsep logging more 
configurable like processutils so the caller can get the captured 
stdout/stderr and decide what to do with it.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Thoughts on testing novaclient functional with neutron

2016-08-14 Thread Matt Riedemann

On 8/12/2016 1:03 PM, Dean Troyer wrote:

On Fri, Aug 12, 2016 at 10:13 AM, Matt Riedemann wrote:

Another idea is the base functional test that sets up the client
just checks the keystone service catalog for a 'network' service
entry, somewhere in here:


This is exactly the route OSC takes for those CLI commands that work
against both nova-network and neutron.  It's only been released since
earlier this year but appears to be working well in the field.  It boils
down to:

  if 'network' in service_catalog.get_endpoints():
  # neutron
  else:
  # nova-net

(service_catalog is from KSA's AccessInfo class)

dt

--

Dean Troyer
dtro...@gmail.com 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Thanks, I got something working with keystoneclient, which we already 
had in the functional tests:


https://review.openstack.org/#/c/355215/
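
The check itself boils down to something like this (a rough sketch using
keystoneauth1; the actual patch uses the keystoneclient plumbing already in
the functional tests, and the credential values are placeholders):

    from keystoneauth1.identity import v3
    from keystoneauth1 import session as ksa_session

    auth = v3.Password(auth_url='http://keystone:5000/v3',
                       username='demo', password='secret',
                       project_name='demo', user_domain_id='default',
                       project_domain_id='default')
    sess = ksa_session.Session(auth=auth)

    # AccessInfo.service_catalog tells us which services are registered.
    endpoints = sess.auth.get_access(sess).service_catalog.get_endpoints()
    if 'network' in endpoints:
        pass  # neutron is available
    else:
        pass  # fall back to nova-network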

Now it's just a matter of resolving whatever is making 12 tests fail:

http://paste.openstack.org/show/557065/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] os-capabilities library created

2016-08-14 Thread Jay Pipes
Word up, Travis :) A few comments inline, but overall I'm looking 
forward to collaborating with you, Steve, and all the other Searchlight 
contributors on os-capabilities (or os-caps as Sean wants to rename it ;)


On 08/11/2016 08:00 PM, Tripp, Travis S wrote:

[Graffiti] was originally co-sponsored by Intel
to help expose all the CPU capabilities in Nova. The constants in
the metadef catalog all came from combing through the code in Nova, which
was a complete maze, and were not available at the time from
Nova (or Cinder or Glance or ...). See the overview here [2]:

 [2] https://wiki.openstack.org/wiki/Graffiti


Yep, I'm thoroughly familiar with the maze-ishness of the code in Nova 
and os-capabilities was borne out of an attempt to curate/catalog a 
number of collections of string metadata, constants spread over modules, 
and various hardcoded feature flags/strings in the virt drivers and 
elsewhere.



2) It uses a custom JSON format instead of JSONSchema, so we now need to
figure out the schema for these metadef documents and keep up to date
with that schema as it changes.


It uses JSON schema, but surrounds it with a very lightweight envelope.
The envelope is called a namespace and is simply a container of JSON
schema, allowing us to manage it as a programmatic unit and as a way
for cloud deployers to share the capabilities across clouds very easily.

We did place a limitation on it that it cannot support nested objects. This
was primarily due to the extreme difficulty of representing that construct
to users in an easy to understand way:

http://docs.openstack.org/developer/glance/metadefs-concepts.html#catalog-terminology


I actually do not think there is a need for a JSON schema for any of the 
capability strings in os-caps. I wasn't planning on supporting anything 
more than simple strings with a string prefix for "namespaces" and a 
common delimiter (I chose ':'). I'd like to keep things as simple as 
possible.
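
To illustrate what I mean (the names below are made up for the example, not
the actual os-capabilities symbols):

    # Illustrative only; not the real os-capabilities constants.
    HW_CPU_X86_AVX2 = 'hw:cpu:x86:avx2'
    STORAGE_DISK_SSD = 'storage:disk:ssd'


    def namespace(capability):
        """Return everything up to the last ':' delimiter."""
        return capability.rsplit(':', 1)[0]


    assert namespace(HW_CPU_X86_AVX2) == 'hw:cpu:x86'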



3) It mixes qualitative things -- CPU model, features, etc -- with
quantitative things -- amount of cores, threads, etc. These two things
are precisely what we are trying to decouple from each other in the next
generation of Nova's "flavors".


I noticed you didn't respond to this part of my email (from a year ago). 
It's actually a really important point. The mixing of quantitative and 
qualitative things in the Nova flavor extra specs as well as the Glance 
metadefs stuff is a real problem we're trying to fix with the new 
placement API.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] locking concern with os-brick

2016-08-14 Thread Joshua Harlow

Clint Byrum wrote:

Excerpts from Joshua Harlow's message of 2016-08-13 20:04:13 -0700:

The larger issue here IMHO is that there is now a  API
around locking that might be better suited targeting an actual lock
management system (say redis or zookeeper or etcd or ...).


The more I look at this, the more I think this is just evidence that
the compute node itself needs to be an API unto itself. Whether it's
Neutron agents, cinder volumes, or what, nova-compute has a bunch of
under-the-covers interactions with things like this. It would make more
sense to put that into its own implementation behind a real public API
than what we have now: processes that just magically expect to be run
together with shared filesystems, lock dirs, network interfaces, etc.

That would also go a long way to being able to treat the other components
more like microservices.



I very much agree, the amount of interactions 'under-the-covers' makes 
it really hard to do many things (including understanding what those 
interactions even are). For example, how does someone even install 
'os-brick' at this point, if it requires as a prerequisite that cinder 
and nova-compute be pre-setup with the ? Sucks I guess 
for people/operators/anyone using both components, that are already 
running those with different lock directories...


IMHO the amount of time done 'hacking in solutions' like a shared lock 
directory (or moving both projects to share the same configuration 
somehow) would be better spent on an actual locking solution/service and 
thinking about microservices and ... but meh, what can u do...


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][drivers] Backend and volume health reporting

2016-08-14 Thread John Griffith
On Sun, Aug 14, 2016 at 2:11 AM, Avishay Traeger 
wrote:

> Hi all,
> I would like to propose working on a new feature for Ocata to provide
> health information for Cinder backends and volumes.  Currently, a volume's
> status basically reflects the last management operation performed on it -
> it will be in error state only as a result of a failed management
> operation.  There is no indication as to whether or not a backend or volume
> is "healthy" - i.e., the data exists and is accessible.
>
> The basic idea would be to add a "health" property for both backends and
> volumes.
>
> For backends, this may be something like:
> - "healthy"
> - "warning" (something is wrong and the admin should check the storage)
> - "management unavailable" (there is no management connectivity)
> - "data unavailable" (there is no data path connectivity)
>
> For volumes:
> - "healthy"
> - "degraded" (i.e., not at full redundancy)
> - "error" (in case of a data loss event)
> - "management unavailable" (there is no management connectivity)
> - "data unavailable" (there is no data path connectivity)
>
> Before I start working on a spec, I wanted to get some feedback,
> especially from driver owners:
> 1. What useful information can you provide at the backend level?
> 2. And at the volume level?
> 3. How would you obtain this information?  Querying the storage (poll)?
> Registering for events?  Something else?
> 4. Other feedback?
>
> Thank you,
> Avishay
>
> --
> *Avishay Traeger, PhD*
> *System Architect*
>
> Mobile: +972 54 447 1475
> E-mail: avis...@stratoscale.com
>
>
>
> Web | Blog | Twitter | Google+ | Linkedin
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
I'd like to get a more detailed use case and an example of a problem you
want to solve with this.  I have a number of concerns, including those I
raised in your "list manageable volumes" proposal.  Most importantly,
there's really no clear definition of what these fields mean and how they
should be interpreted.

For backends, I'm not sure what you want to solve that can't be handled
already by the scheduler and report-capabilities periodic job?  You can
already report back from your backend to the scheduler that you shouldn't
be used for any scheduling activities going forward.  More detailed info
than that might be useful, but I'm not sure it wouldn't fall into an
already existing OpenStack monitoring project like Monasca?
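
(For reference, backends already push periodic stats and capabilities to the
scheduler through a hook roughly like this; the health-ish key at the end is
illustrative, not an existing standard field:)

    class ExampleDriver(object):
        """Hedged sketch of a Cinder driver's stats-reporting hook."""

        _stats = None

        def get_volume_stats(self, refresh=False):
            if refresh or not self._stats:
                self._stats = {
                    'volume_backend_name': 'example_backend',
                    'vendor_name': 'Example',
                    'driver_version': '1.0.0',
                    'storage_protocol': 'iSCSI',
                    'total_capacity_gb': 1000,
                    'free_capacity_gb': 400,
                    # A coarse backend health flag could surface here and be
                    # filtered on by the scheduler, without inventing a new
                    # per-volume state machine.
                    'backend_health': 'healthy',
                }
            return self._stats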

As far as volumes, I personally don't think volumes should have more than a
few states.  They're either "ok" and available for an operation or they're
not.  The list you have seems ok to me, but I don't see a ton of value in
fault prediction or going to great lengths to avoid something failing. The
current model we have of a volume being "ok" until it's "not" seems
perfectly reasonable to me.  Typically my experience is that trying to be
clever and polling/monitoring to try and preemptively change the status of
a volume does little more than result in complexity, confusion and false
status changes of resources.  I'm pretty strongly opposed to having that
level of granularity at the volume level here.  At least for now, I'd rather
see what you have in mind for the backend and nail that down to something
that's solid and basically bullet proof before trying to tackle thousands of
volumes, which have transient states.  And of course the biggest question I
still have is: what problem do you hope to solve here?

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-08-14 Thread Jay Bryant
I agree with Duncan here. Driver removal has been an important tool to keep
only maintained drivers in the tree and a way to get attention to ignored
drivers. I hate to see the tag removed but think we need to stay true to
the approach we have been taking.

Re-addressing the meaning of the tag with regard to drivers will be
necessary so that we can communicate that Cinder Core is compliant even
though we can't guarantee driver compliance.

Jay

On Thu, Aug 11, 2016 at 9:48 AM Duncan Thomas 
wrote:

> Given the options, I'd agree with Sean and John that removing the tag is a
> far lesser evil than changing our policy.
>
> If we leave broken drivers in the tree, the end user (operator) is no
> better off - the thing they evaluated won't work - but it will be harder to
> tell why. The storage vendor won't suffer the pressure that comes from
> driver removal, so will have less incentive to fix their driver (there are
> enough examples of the threat of driver removal causing the immediate fix
> of things that had remained broken for months that we know, for certain,
> that the policy works).
>
> I'd prefer to make the meaning of the tag sane WRT third party drivers,
> which I think would help other projects to be able to police their drivers
> and CI better too, without risking losing / not gaining the tag, which is
> likely to hurt a newer project far more than it will cinder.
>
>
>
> On 11 August 2016 at 17:29, John Griffith 
> wrote:
>
>>
>>
>> On Thu, Aug 11, 2016 at 7:14 AM, Erno Kuvaja  wrote:
>>
>>> On Thu, Aug 11, 2016 at 2:47 PM, Sean McGinnis 
>>> wrote:
>>> >> >>
>>> >> >> As follow up on the mailing list discussion [0], gerrit activity
>>> >> >> [1][2] and cinder 3rd party CI policy [3] I'd like to initiate
>>> >> >> discussion how Cinder follows, or rather does not follow, the
>>> standard
>>> >> >> deprecation policy [4] as the project has been tagged on the assert
>>> >> >> page [5].
>>> >> >>
>>> > 
>>> >> >>
>>> >> >> [0]
>>> http://lists.openstack.org/pipermail/openstack-dev/2016-August/100717.html
>>> >> >> [1] https://review.openstack.org/#/c/348032/
>>> >> >> [2] https://review.openstack.org/#/c/348042/
>>> >> >> [3] https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers
>>> >> >> [4]
>>> https://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html#requirements
>>> >> >> [5]
>>> https://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html#application-to-current-projects
>>> >> >>
>>> >> >
>>> >> > Can you be more specific about what you mean? Are you saying that
>>> >> > the policy isn't being followed because the drivers were removed
>>> >> > without a deprecation period, or is there something else to it?
>>> >> >
>>> >> > Doug
>>> >> >
>>> >>
>>> >> Yes, that's how I see it. Cinder's own policy is that the drivers can
>>> >> be removed without any warning to the consumers while the standard
>>> >> deprecation policy defines quite strict lines about informing the
>>> >> consumer of the functionality deprecation before it gets removed.
>>> >>
>>> >> - Erno
>>> >
>>> > It is a good point. I think it highlights a common thread though with
>>> > the other discussion that, at least so far, third party drivers are
>>> > treated differently than the rest of the code.
>>> >
>>> > For any other functionality we certainly follow the deprecation policy.
>>> > Even in existing drivers we try to enforce that any driver renames,
>>> > config setting changes, and similar non-backwards compatible changes go
>>> > through the normal deprecation cycle before being removed.
>>> >
>>> > Ideally I would love it if we could comply with the deprecation policy
>>> > with regards to driver removal. But the reality is, if we don't see
>>> that
>>> > a driver is being supported and maintained by its vendor, then that
>>> > burden can't fall on the wider OpenStack and Cinder community that has
>>> > no way of validating against physical hardware.
>>> >
>>> > I think third party drivers need to be treated differently when it
>>> comes
>>> > to the deprecation policy. If that is not acceptable, then I suppose we
>>> > do need to remove that tag. Tag removal would be the lesser of the two
>>> > versus keeping around drivers that we know aren't really being
>>> > maintained.
>>> >
>>> > If it came to that, I would also consider creating a new cinder-drivers
>>> > project under the Cinder umbrella and move all of the drivers not
>>> tested
>>> > by Jenkins over to that. That wouldn't be a trivial undertaking, so I
>>> > would try to avoid that if possible. But it would at least allow us to
>>> > still get code reviews and all of the benefits of being in tree. Just
>>> > some thoughts.
>>> >
>>> > Sean
>>> >
>>>
>>> Sean,
>>>
>>> As said on my initial opening, I do understand and agree with the
>>> reasoning/treatment of the 3rd party drivers. My request for that tag
>>> 

[openstack-dev] [neutron] Newton Midcycle - Tuesday dinner

2016-08-14 Thread John Schwarz
Hi guys,

For those of us who'll arrive in Cork on Tuesday, Martin Hickey has
arranged a dinner at "Gourmet Burger Bistro" [1], 8 Bridge Street, at
19:30. Last I heard, the reservation was for 15 people, so this should
accommodate everyone who indicated in [2] that they will arrive on
Tuesday.

[1]: http://www.gourmetburgerbistro.ie/
[2]: https://etherpad.openstack.org/p/newton-neutron-midcycle

See you then,

-- 
John Schwarz,
Red Hat.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][drivers] Backend and volume health reporting

2016-08-14 Thread Avishay Traeger
Hi all,
I would like to propose working on a new feature for Ocata to provide
health information for Cinder backends and volumes.  Currently, a volume's
status basically reflects the last management operation performed on it -
it will be in error state only as a result of a failed management
operation.  There is no indication as to whether or not a backend or volume
is "healthy" - i.e., the data exists and is accessible.

The basic idea would be to add a "health" property for both backends and
volumes.

For backends, this may be something like:
- "healthy"
- "warning" (something is wrong and the admin should check the storage)
- "management unavailable" (there is no management connectivity)
- "data unavailable" (there is no data path connectivity)

For volumes:
- "healthy"
- "degraded" (i.e., not at full redundancy)
- "error" (in case of a data loss event)
- "management unavailable" (there is no management connectivity)
- "data unavailable" (there is no data path connectivity)

Before I start working on a spec, I wanted to get some feedback, especially
from driver owners:
1. What useful information can you provide at the backend level?
2. And at the volume level?
3. How would you obtain this information?  Querying the storage (poll)?
Registering for events?  Something else?
4. Other feedback?

Thank you,
Avishay

-- 
*Avishay Traeger, PhD*
*System Architect*

Mobile: +972 54 447 1475
E-mail: avis...@stratoscale.com



Web | Blog | Twitter | Google+ | Linkedin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev