[openstack-dev] Ops Midcycle

2017-07-18 Thread Melvin Hillsman
Hey everyone,

I am sending this to the dev ML in the hope that some projects will add
questions that may be beneficial for folks attending the midcycle. The
etherpad for sessions can be found at https://etherpad.openstack.org/p/MEX-ops-meetup;
feel free to add new sessions or add to existing ones.

-- 
Kind regards,

Melvin Hillsman
mrhills...@gmail.com
mobile: (832) 264-2646

Learner | Ideation | Belief | Responsibility | Command
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [os-vif] 1.6.1 release for pike.

2017-07-18 Thread Matt Riedemann

On 7/18/2017 12:07 PM, Mooney, Sean K wrote:

Resending with correct subject line


The real correct subject line tag would be [nova] or [nova][neutron]. :P

--

Thanks,

Matt



Re: [openstack-dev] [tripleo] weekly meetings on #tripleo

2017-07-18 Thread Emilien Macchi
On Mon, Jul 17, 2017 at 12:10 PM, Emilien Macchi  wrote:
> Since we have mixed feelings but generally agree that we should give
> it a try, let's give it a try and see how it goes, at least one time,
> tomorrow.

So we tried it and held the meeting on #tripleo.
I noticed more people participated and were present at the meeting, and
I didn't notice any interruptions.

Please bring any feedback (positive or negative) so we can decide
whether or not to continue that way.

> On Mon, Jul 10, 2017 at 10:01 AM, Michele Baldessari  
> wrote:
>> On Mon, Jul 10, 2017 at 11:36:03AM -0230, Brent Eagles wrote:
>>> +1 for giving it a try.
>>
>> Agreed.
>>
>>>
>>> On Wed, Jul 5, 2017 at 2:26 PM, Emilien Macchi  wrote:
>>>
>>> > After reading http://lists.openstack.org/pipermail/openstack-dev/2017-
>>> > June/118899.html
>>> > - we might want to collect TripleO's community feedback on doing
>>> > weekly meetings on #tripleo instead of #openstack-meeting-alt.
>>> >
>>> > I see some direct benefits:
>>> > - if you join a meeting late, you can easily read the backlog in
>>> > #tripleo
>>> > - newcomers unaware of the meeting channel wouldn't have to search
>>> > for it
>>> > - the meeting might get more activity, and we would expose the
>>> > information more broadly
>>> >
>>> > Any feedback on this proposal is welcome before we make any change (or
>>> > not).
>>> >
>>> > Thanks,
>>> > --
>>> > Emilien Macchi
>>> >
>>
>>
>> --
>> Michele Baldessari
>> C2A5 9DA3 9961 4FFB E01B  D0BC DDD4 DCCB 7515 5C6D
>>
>
>
>
> --
> Emilien Macchi



-- 
Emilien Macchi



Re: [openstack-dev] [all] Proposal to change integrated neutron grenade gate job to multi-node

2017-07-18 Thread Jeremy Stanley
On 2017-07-18 12:05:24 -0700 (-0700), Ihar Hrachyshka wrote:
> I don't see any issue with it, but it would be great to hear from
> infra. They may have their reservations because the change may
> somewhat raise pressure on the number of nodes (that being said, there
> are some savings too for projects that currently run both versions of
> the job at the same time - one via integrated gate and another
> project-specific).

As an Infra contributor (not speaking for the rest of the team) I
don't see any problem with the suggestion as long as the job is
sufficiently stable for all projects involved. For pressure on our
quota, I don't think there's a good way to know what the impact will
be without going ahead and trying.
-- 
Jeremy Stanley




[openstack-dev] [Manila] Feature proposal freeze exception request for change

2017-07-18 Thread Ravi, Goutham
Hello Manila reviewers,

It has been a few days past the feature proposal freeze, but I would like to 
request an extension for an enhancement to the NetApp driver in Manila. [1] 
implements a low-impact blueprint [2] that was approved for the Pike release. 
The code change is contained within the driver and would be a worthwhile 
addition for users of this driver in Manila/Pike.

[1] https://review.openstack.org/#/c/484933/
[2] https://blueprints.launchpad.net/openstack/?searchtext=netapp-cdot-qos

Thanks,
Goutham


Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-07-18 Thread Amrith Kumar
Doug,

I agree, VM/baremetal, shared VM, object for backup storage or block device
for backup storage, those are implementation choices. We should not have
quota/billing depend on those; Trove should expose counters and
space-time-products of the things that Trove users should be billed for. At
issue is the old habit of taking compute out of the tenants compute quota
(or not), storage out of the tenants storage quota (or not). We did it one
way, and in retrospect I think it was the wrong way. Trove should consume
resources it needs and users should be billed for databasey things (not
compute, block and object storage, network traffic but database cluster
time, backups, data, queries, etc.,).

Thanks,

-amrith


On Wed, Jul 12, 2017 at 9:57 AM, Doug Hellmann 
wrote:

> Excerpts from Amrith Kumar's message of 2017-07-12 06:14:28 -0500:
> > All:
> >
> > First, let me thank all of you who responded and provided feedback
> > on what I wrote. I've summarized what I heard below and am posting
> > it as one consolidated response rather than responding to each
> > of your messages and making this thread even deeper.
> >
> > As I say at the end of this email, I will be setting up a session at
> > the Denver PTG to specifically continue this conversation and hope
> > you will all be able to attend. As soon as time slots for PTG are
> > announced, I will try and pick this slot and request that you please
> > attend.
> >
> > 
> >
> > Thierry: naming issue; call it Hoard if it does not have a migration
> > path.
> >
> > 
> >
> > Kevin: use a container approach with k8s as the orchestration
> > mechanism, addresses multiple issues including performance. Trove to
> > provide containers for multiple components which cooperate to provide
> > a single instance of a database or cluster. Don't put all components
> > (agent, monitoring, database) in a single VM, decoupling makes
> > migration and upgrades easier and allows Trove to reuse database
> > vendor-supplied containers. Performance of databases in VMs is poor
> > compared to databases on bare metal.
> >
> > 
> >
> > Doug Hellmann:
> >
> > > Does "service VM" need to be a first-class thing?  Akanda creates
> > > them, using a service user. The VMs are tied to a "router" which is
> > > the billable resource that the user understands and interacts with
> > > through the API.
> >
> > Amrith: Doug, yes because we're looking not just for service VM's but all
> > resources provisioned by a service. So, to Matt's comment about a
> > blackbox DBaaS, the VM's, storage, snapshots, ... they should all be
> > owned by the service, charged to a users quota but not visible to the
> > user directly.
>
> I still don't understand. If you have entities that represent the
> DBaaS "host" or "database" or "database backup" or whatever, then
> you put a quota on those entities and you bill for them. If the
> database actually runs in a VM or the backup is a snapshot, those
> are implementation details. You don't want to have to rewrite your
> quota management or billing integration if those details change.
>
> Doug
>
> >
> > 
> >
> > Jay:
> >
> > > Frankly, I believe all of these types of services should be built
> > > as applications that run on OpenStack (or other)
> > > infrastructure. In other words, they should not be part of the
> > > infrastructure itself.
> > >
> > > There's really no need for a user of a DBaaS to have access to the
> > > host or hosts the DB is running on. If the user really wanted
> > > that, they would just spin up a VM/baremetal server and install
> > > the thing themselves.
> >
> > and subsequently in follow-up with Zane:
> >
> > > Think only in terms of what a user of a DBaaS really wants. At the
> > > end of the day, all they want is an address in the cloud where they
> > > can point their application to write and read data from.
> > > ...
> > > At the end of the day, I think Trove is best implemented as a hosted
> > > application that exposes an API to its users that is entirely
> > > separate from the underlying infrastructure APIs like
> > > Cinder/Nova/Neutron.
> >
> > Amrith: Yes, I agree, +1000
> >
> > 
> >
> > Clint (in response to Jay's proposal regarding the service making all
> > resources multi-tenant) raised a concern about having multi-tenant
> > shared resources. The issue is with ensuring separation between
> > tenants (don't want to use the word isolation because this is database
> > related).
> >
> > Amrith: yes, definitely a concern and one that we don't have today
> > because each DB is a VM of its own. Personally, I'd rather stick with
> > that construct, one DB per VM/container/baremetal and leave that be
> > the separation boundary.
> >
> > 
> >
> > Zane: Discomfort over throwing out working code, grass is greener on
> > the other side, is there anything to salvage?
> >
> > Amrith: Yes, there is certainly a 'grass is greener with a rewrite'
> > fallacy. But, there is stuff that can be salvaged. 

[openstack-dev] [infra][python3][congress] locally successful devstack setup fails in check-job

2017-07-18 Thread Eric K
Hi all, looking for some hints/tips. Thanks so much in advance.

My local python3 devstack setup [2] succeeds, but in the check job a similarly
configured devstack setup [1] fails because the congress client is not installed.

./stack.sh:1439:check_libs_from_git
/opt/stack/new/devstack/inc/python:401:die
[ERROR] /opt/stack/new/devstack/inc/python:401 The following LIBS_FROM_GIT
were not installed correct: python-congressclient


It seems that the devstack setup in the check job never attempted to install
the congress client. Comparing the log [4] from my local run to the log from
the check job [3], all of these steps in my local log are absent from the
check-job log:
++/opt/stack/congress/devstack/settings:source:9
CONGRESSCLIENT_DIR=/opt/stack/python-congressclient

++/opt/stack/congress/devstack/settings:source:52
CONGRESSCLIENT_REPO=git://git.openstack.org/openstack/python-congressclient.git

Cloning into '/opt/stack/python-congressclient'…

Check python version for : /opt/stack/python-congressclient
Automatically using 3.5 version to install
/opt/stack/python-congressclient based on classifiers


Installing collected packages: python-congressclient
  Running setup.py develop for python-congressclient
Successfully installed python-congressclient


[1] Check-job config:
https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/congress.yaml#L65
https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/congress.yaml#L111

[2] Local devstack local.conf:
https://pastebin.com/qzuYTyAE   

[3] Check-job devstack log:
http://logs.openstack.org/49/484049/1/check/gate-congress-dsvm-py35-api-mysql-ubuntu-xenial-nv/7ae2814/logs/devstacklog.txt.gz

[4] Local devstack log:
https://ufile.io/c9jhm
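For comparison, the relevant local.conf lines would look something like the
sketch below (illustrative only; the exact plugin line and settings in the
check job live in the project-config files linked in [1], and the git URL
shown here is an assumption):

```ini
[[local|localrc]]
USE_PYTHON3=True
enable_plugin congress https://git.openstack.org/openstack/congress

# devstack's check_libs_from_git step (the failure above) dies if a
# library listed here was not actually installed from its git checkout.
LIBS_FROM_GIT=python-congressclient
```

If the check job defines LIBS_FROM_GIT but never sources the plugin settings
that clone python-congressclient, check_libs_from_git will fail exactly as in
log [3].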





[openstack-dev] [tripleo] Logging in containerized services

2017-07-18 Thread Lars Kellogg-Stedman
Our current model for logging in a containerized deployment has pretty much
everything logging to files in a directory that has been bind-mounted from
the host.  This has some advantages: primarily, it makes it easy for an
operator on the local system to find logs, particularly if they have had
some previous exposure to non-containerized deployments.

There is strong demand for a centralized logging solution.  We've got one
potential solution right now in the form of the fluentd service introduced
in Newton, but this requires explicit registration of log files for every
service.  I don't think it's an ideal solution, and I would like to explore
some alternatives.

Logging via syslog
==

For the purposes of the following, I'm going to assume that we're deploying
on an EL variant (RHEL/CentOS/etc.), which means (a) journald owns /dev/log
and (b) we're running rsyslog on the host and using the imjournal plugin to
read messages from journald.

If we bind mount /dev/log into containers and configure openstack services
to log via syslog rather than via files, we get the following for free:

- We get message-based rather than line-based logging.  This means that
multiline tracebacks are handled correctly.

- A single point of collection for logs.  If your host has been configured
to ship logs to a centralized collector, logs from all of your services
will be sent there without any additional configuration.

- We get per-service message rate limiting from journald.

- Log messages are annotated by journald with a variety of useful metadata,
including the container id and a high resolution timestamp.

- We can configure the syslog service on the host to continue to write
files into legacy locations, so an operator looking to run grep against
local log files will still have that ability.

- Rsyslog itself can send structured messages directly to an Elasticsearch
instance, which means that in many deployments we would not require
fluentd and its dependencies.

- This plays well in environments where some services are running in
containers and others are running on the host, because everything simply
logs to /dev/log.

Logging via stdout/stderr
==

A common pattern in the container world is to log everything to
stdout/stderr.  This has some of the advantages of the above:

- We can configure the container orchestration service to send logs to the
journal or to another collector.

- We get a different set of annotations on log messages.

- This solution may play better with frameworks like Kubernetes that tend
to isolate containers from the host a little more than using Docker or
similar tools straight out of the box.

But there are some disadvantages:

- Some services only know how to log via syslog (e.g., swift and haproxy)

- We're back to line-based vs. message-based logging.

- It ends up being more difficult to expose logs at legacy locations.

- The container orchestration layer may not implement the same message rate
limiting we get with fluentd.

Based on the above, I would like to suggest exploring a syslog-based
logging model moving forward. What do people think about this idea? I've
started putting together a spec at https://review.openstack.org/#/c/484922/
and I would welcome your input.

Cheers,

-- 
Lars Kellogg-Stedman 


Re: [openstack-dev] [all] Proposal to change integrated neutron grenade gate job to multi-node

2017-07-18 Thread Ihar Hrachyshka
I don't see any issue with it, but it would be great to hear from
infra. They may have their reservations because the change may
somewhat raise pressure on the number of nodes (that being said, there
are some savings too for projects that currently run both versions of
the job at the same time - one via integrated gate and another
project-specific).

Ihar

On Fri, Jul 14, 2017 at 2:05 PM, Brian Haley  wrote:
> Hi,
>
> While looking at ways to reduce the number of jobs we run in the Neutron
> gate, I found we ran two very similar jobs for some projects:
>
> gate-grenade-dsvm-neutron-ubuntu-xenial (single-node job)
> gate-grenade-dsvm-neutron-multinode-ubuntu-xenial (2-node job)
>
> We talked about this in the Neutron CI meeting this week [1] and felt it
> best to remove the single-node job and just run the multi-node job, mostly
> because it more mimics a "real" Neutron deployment where there are separate
> controller and compute nodes.  Looking at the Neutron grafana dashboard [2]
> the two jobs have about the same failure rate in the gate (~0), so I don't
> think there will be any problems with the switch.
>
> This has an impact on the integrated gate since it currently runs the
> single-node job, so I wanted to get thoughts on any issues they'd have with
> this change [3].
>
> Thanks,
>
> -Brian
>
> [1]
> http://eavesdrop.openstack.org/meetings/neutron_ci/2017/neutron_ci.2017-07-11-16.00.log.html#l-112
> [2] http://grafana.openstack.org/dashboard/db/neutron-failure-rate
> [3] https://review.openstack.org/#/c/483600/
>


Re: [openstack-dev] [oslo] PTL nominations

2017-07-18 Thread Joshua Harlow

Thanks much for the service (and time and effort and more) gcb!!! :)

ChangBo Guo wrote:

Hi oslo folks,

The PTL nomination week is fast approaching [0], and as you might have
guessed from the subject of this email, I am not planning to run for
Queens. I'm still on the team and will give some guidance about the oslo
PTL's daily work, as previous PTLs have done before.
It has been my honor to be oslo PTL; I learned a lot and grew quickly. It's
time to give someone else the opportunity to grow in the amazing role of
oslo PTL.

[0]https://review.openstack.org/#/c/481768/4/configuration.yaml

--
ChangBo Guo(gcb)



Re: [openstack-dev] [all][stable][ptls] Tagging mitaka as EOL

2017-07-18 Thread Lance Bragstad


On 07/18/2017 08:21 AM, Andy McCrae wrote:
>
>
>
> The branches have now been retired, thanks to Joshua Hesketh!
>
>
> Thanks Josh, Andreas, Tony, and the rest of the Infra crew for sorting
> this out.

++ thanks all!

>
> Andy
>
>





Re: [openstack-dev] [oslo.config] how to deprecate a name but still have it as conf.

2017-07-18 Thread Doug Hellmann
Excerpts from Michael Bayer's message of 2017-07-18 13:21:30 -0400:
> On Tue, Jul 18, 2017 at 1:02 PM, Doug Hellmann  wrote:
> 
> > Option renaming was originally meant as an operator-facing feature
> > to handle renames for values coming from the config file, but not
> > as they are used in code.  mtreinish added
> > https://review.openstack.org/#/c/357987/ to address this for Tempest,
> > so it's possible there's a bug in the logic in oslo.config somewhere
> > (or that oslo.db's case is a new one).
> 
> OK, patch set 5 at
> https://review.openstack.org/#/c/334182/5/oslo_db/options.py shows
> what I'm trying to do to make this work; however, the test case added
> in test_options still fails. If this is supposed to "just work" then
> I hope someone can confirm that.

You found a bug related to the fact that these options are registered
using an OptGroup() not just a simple string name.
https://review.openstack.org/484897 should fix it.

> 
> Alternatively, a simple flag in DeprecatedOpt  "alias_on_conf=True"
> would be super easy here so that specific names in our DeprecatedOpt
> could be mirrored because we know projects are consuming them on conf.
> 
> >
> > That said, the options defined by a library are *NOT* part of its
> > API, and should never be used by code outside of the library. The
> > whole point of isolating options like that is to give operators a
> > way to change the way an app uses a library (drivers, credentials,
> > etc.) without the app having to know the details.  Ideally the nova
> > tests that access oslo.db configuration options directly would
> > instead use an API in oslo.db to do the same thing (that API may
> > need to be written, if it doesn't already exist).
> 
> OK, that is I suppose an option, but clearly a long and arduous one at
> this point (add new API to oslo.db, find all projects looking at
> conf., submit gerrits, somehow make sure projects never talk to
> conf. directly?   how would we ensure that?  shouldn't
> oslo.config allow the library that defines the options to plug in its
> own "private" namespace so that consuming projects don't make this
> mistake?)
> 
> >
> > At that point, only oslo.db code would refer to the option, and you
> > could use the deprecated_name and deprecated_group settings to
> > describe the move and change the references to oslo.db within the
> > library using a single patch to oslo.db.
> >
> > Doug
> >
> 
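From the operator's side, the DeprecatedOpt list above reduces the rename to
a config-file change. A hypothetical consuming service's configuration during
the deprecation window (group name varies per project) might look like:

```ini
[database]
# New name going forward:
connection_recycle_time = 3600

# Old name, still honored via the DeprecatedOpt aliases (typically with
# a deprecation warning logged) until it is removed:
# idle_timeout = 3600
```

Code-level access to the option, as discussed in this thread, is a separate
question from this operator-facing aliasing.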



Re: [openstack-dev] [oslo.config] how to deprecate a name but still have it as conf.

2017-07-18 Thread Michael Bayer
On Tue, Jul 18, 2017 at 1:02 PM, Doug Hellmann  wrote:

> Option renaming was originally meant as an operator-facing feature
> to handle renames for values coming from the config file, but not
> as they are used in code.  mtreinish added
> https://review.openstack.org/#/c/357987/ to address this for Tempest,
> so it's possible there's a bug in the logic in oslo.config somewhere
> (or that oslo.db's case is a new one).

OK, patch set 5 at
https://review.openstack.org/#/c/334182/5/oslo_db/options.py shows
what I'm trying to do to make this work; however, the test case added
in test_options still fails. If this is supposed to "just work" then
I hope someone can confirm that.

Alternatively, a simple flag in DeprecatedOpt  "alias_on_conf=True"
would be super easy here so that specific names in our DeprecatedOpt
could be mirrored because we know projects are consuming them on conf.


>
> That said, the options defined by a library are *NOT* part of its
> API, and should never be used by code outside of the library. The
> whole point of isolating options like that is to give operators a
> way to change the way an app uses a library (drivers, credentials,
> etc.) without the app having to know the details.  Ideally the nova
> tests that access oslo.db configuration options directly would
> instead use an API in oslo.db to do the same thing (that API may
> need to be written, if it doesn't already exist).

OK, that is I suppose an option, but clearly a long and arduous one at
this point (add new API to oslo.db, find all projects looking at
conf., submit gerrits, somehow make sure projects never talk to
conf. directly?   how would we ensure that?  shouldn't
oslo.config allow the library that defines the options to plug in its
own "private" namespace so that consuming projects don't make this
mistake?)



>
> At that point, only oslo.db code would refer to the option, and you
> could use the deprecated_name and deprecated_group settings to
> describe the move and change the references to oslo.db within the
> library using a single patch to oslo.db.
>
> Doug
>


Re: [openstack-dev] [Release-job-failures][telemetry][aodh] Tag of openstack/aodh failed

2017-07-18 Thread Andreas Jaeger
On 2017-07-18 19:06, Doug Hellmann wrote:
> The release notes failed to build for aodh's release with the error
> "Translations exist and locale_dirs missing in source/conf.py".
> 
> Unfortunately, I don't know what that means.
> 

releasenotes/source/conf.py misses:

# -- Options for Internationalization output --
locale_dirs = ['locale/']

That should have failed the post jobs - or any other check job - for
releasenotes already.

But this one *does* exist in master of aodh...

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




[openstack-dev] [os-vif] 1.6.1 release for pike.

2017-07-18 Thread Mooney, Sean K
Resending with correct subject line

From: Mooney, Sean K
Sent: Tuesday, July 18, 2017 4:54 PM
To: openstack-dev@lists.openstack.org
Cc: Jay Pipes ; 'Stephen Finucane' ; 
Moshe Levi ; 'rbry...@redhat.com' ; 
'maxime.le...@6wind.com' ; sahid 
; 'jan.gut...@netronome.com' 

Subject: os-vif 1.6.1 release for pike.


Hi

We are approaching the non-client library freeze on Thursday.
Below is a list of pending patches that I think would be good to review for 
inclusion in Pike.

Should have:
Improve OVS Representor Lookup  https://review.openstack.org/#/c/484051/
Add support for VIFPortProfileOVSRepresentor 
https://review.openstack.org/#/c/483921/
unplug_vf_passthrough: don't try to delete representor netdev 
https://review.openstack.org/#/c/478820/


Nice to have:
Add memoize function using oslo.cache https://review.openstack.org/#/c/472773/
Read datapath_type from VIF object https://review.openstack.org/#/c/474914/
doc: Remove cruft from releasenotes conf.py 
https://review.openstack.org/#/c/480092/2

Queens:
add host port profile info class https://review.openstack.org/#/c/441590/
Add abstract OVSDB API https://review.openstack.org/#/c/476612/
Add native implementation OVSDB API https://review.openstack.org/#/c/482226/
*Migration from 'ip' commands to pyroute2 
https://review.openstack.org/#/c/484386/
*Convert all 'ip-link set' commands to pyroute2 
https://review.openstack.org/#/c/451433/
Add Virtual Ethernet device pair  https://review.openstack.org/#/c/484726/
objects: Add 'dns_domain' attribute to 'Network' 
https://review.openstack.org/#/c/480630/
Add Constraints support https://review.openstack.org/#/c/413325/


*These do the same thing.
The items in the should-have list are required to complete the Netronome and 
Mellanox hardware-accelerated OVS integration.

The nice-to-have items are small cleanups that are not vital but would be nice 
to merge sooner rather than later. The remaining items, while I would like to 
see them merged, I think need more work, so I would suggest moving them to 
Queens.

If people have time, it would be good to review these items today, and I will 
submit a patch to 
https://github.com/openstack/releases/blob/master/deliverables/pike/os-vif.yaml 
to introduce version 1.6.1 tomorrow.

Regards
Sean.




Re: [openstack-dev] [Release-job-failures][telemetry][aodh] Tag of openstack/aodh failed

2017-07-18 Thread Doug Hellmann
Excerpts from jenkins's message of 2017-07-17 12:09:25 +:
> Build failed.
> 
> - aodh-releasenotes 
> http://logs.openstack.org/bb/bbc8065ae2db109f415ff1c03f40015a75e5afe7/tag/aodh-releasenotes/1f93729/
>  : FAILURE in 2m 58s
> 

The release notes failed to build for aodh's release with the error
"Translations exist and locale_dirs missing in source/conf.py".

Unfortunately, I don't know what that means.

Fortunately, I think only the release notes build failed, but the actual
release is fine.

Doug



Re: [openstack-dev] [oslo.config] how to deprecate a name but still have it as conf.

2017-07-18 Thread Doug Hellmann
Excerpts from Michael Bayer's message of 2017-07-18 12:34:31 -0400:
> In oslo.db, I'd like to rename the option "idle_timeout" to
> "connection_recycle_time".
> 
> Following the pattern of using DeprecatedOpt, we get this:
> 
> cfg.IntOpt('connection_recycle_time',
>default=3600,
>deprecated_opts=[cfg.DeprecatedOpt('idle_timeout',
>   group="DATABASE"),
> cfg.DeprecatedOpt('idle_timeout',
>   group="database"),
> cfg.DeprecatedOpt('sql_idle_timeout',
>   group='DEFAULT'),
> cfg.DeprecatedOpt('sql_idle_timeout',
>   group='DATABASE'),
> cfg.DeprecatedOpt('idle_timeout',
>   group='sql')],
> 
> 
> However, Nova is accessing "conf.idle_timeout" directly in
> nova/db/sqlalcemy/api.py -> _get_db_conf.  Tempest run fails.
> 
> Per the oslo.config documentation, the "deprecated_name" flag would
> create an alias on the conf. namespace.  However, this has no effect,
> even if I remove the other deprecated parameters completely:
> 
> cfg.IntOpt('connection_recycle_time',
>default=3600,
>deprecated_name='idle_timeout',
> 
> a simple unit test fails to see a value for
> conf.connection_recycle_time, including if I add
> "deprecated_group='DATABASE'" which is the group that's in this
> specific test (however this would not be a solution anyway because
> projects use different group names).
> 
> From this, it would appear that oslo.config has made it impossible to
> deprecate the name of an option because DeprecatedOpt() has no means
> of providing the value as an alias on the conf. object.   There's not
> even a way I could have projects like nova make a forwards-compatible
> change here.
> 
> Is this a bug in oslo.config or in oslo.db's usage of oslo.config?
> 

Option renaming was originally meant as an operator-facing feature
to handle renames for values coming from the config file, but not
as they are used in code.  mtreinish added
https://review.openstack.org/#/c/357987/ to address this for Tempest,
so it's possible there's a bug in the logic in oslo.config somewhere
(or that oslo.db's case is a new one).

That said, the options defined by a library are *NOT* part of its
API, and should never be used by code outside of the library. The
whole point of isolating options like that is to give operators a
way to change the way an app uses a library (drivers, credentials,
etc.) without the app having to know the details.  Ideally the nova
tests that access oslo.db configuration options directly would
instead use an API in oslo.db to do the same thing (that API may
need to be written, if it doesn't already exist).

At that point, only oslo.db code would refer to the option, and you
could use the deprecated_name and deprecated_group settings to
describe the move and change the references to oslo.db within the
library using a single patch to oslo.db.
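A rough sketch of that accessor pattern (the class and function names here are purely illustrative, not real oslo.db API):

```python
# Illustrative only: neither DatabaseOptions nor get_connection_recycle_time
# is real oslo.db API. The point is that consumers call a library function
# instead of reading the option attribute directly, so a rename stays a
# single-patch change inside the library.

class DatabaseOptions:
    """Minimal stand-in for a registered oslo.config option group."""

    def __init__(self, values):
        self._values = values

    def __getattr__(self, name):
        # Mimic attribute-style option access (conf.connection_recycle_time).
        try:
            return self._values[name]
        except KeyError:
            raise AttributeError(name)


def get_connection_recycle_time(conf):
    """Public library accessor: the option name stays internal to the library."""
    return conf.connection_recycle_time


conf = DatabaseOptions({"connection_recycle_time": 3600})
print(get_connection_recycle_time(conf))  # -> 3600
```

With this shape, nova would call the accessor rather than conf.idle_timeout, and a later rename would touch only the library.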

Doug



Re: [openstack-dev] [nova]notification update week 29

2017-07-18 Thread Matt Riedemann

On 7/18/2017 7:22 AM, Balazs Gibizer wrote:
Unfortunately the assignee left OpenStack during Pike so that BP did not 
progress. We definitely cannot make this to Pike. However I don't even 
know if there is somebody who will have time to work with it in Queens. 
Can we move this to the backlog somehow?


I've deferred the blueprint to Queens. We can re-assess at the PTG.

--

Thanks,

Matt



Re: [openstack-dev] [oslo] PTL nominations

2017-07-18 Thread Doug Hellmann
Excerpts from ChangBo Guo's message of 2017-07-17 23:14:54 +0800:
> Hi oslo folks,
> 
> The PTL nomination week is fast approaching [0], and as you might have
> guessed by the subject of this email, I am not planning to run for Queens.
> I'm still on the team and will give some guidance about the oslo PTL's daily
> work, as previous PTLs did before.
> It's been my honor to be oslo PTL; I learned a lot and grew quickly. It's time
> to give someone else the opportunity to grow in the amazing role of oslo PTL.
> 
> [0]https://review.openstack.org/#/c/481768/4/configuration.yaml
> 

Thank you for leading the team, gcb! I'm glad to know that although
you won't be serving as PTL, you are not stepping away from the
team entirely.

Doug



[openstack-dev] [oslo.config] how to deprecate a name but still have it as conf.

2017-07-18 Thread Michael Bayer
In oslo.db, I'd like to rename the option "idle_timeout" to
"connection_recycle_time".

Following the pattern of using DeprecatedOpt, we get this:

cfg.IntOpt('connection_recycle_time',
   default=3600,
   deprecated_opts=[cfg.DeprecatedOpt('idle_timeout',
  group="DATABASE"),
cfg.DeprecatedOpt('idle_timeout',
  group="database"),
cfg.DeprecatedOpt('sql_idle_timeout',
  group='DEFAULT'),
cfg.DeprecatedOpt('sql_idle_timeout',
  group='DATABASE'),
cfg.DeprecatedOpt('idle_timeout',
  group='sql')],


However, Nova is accessing "conf.idle_timeout" directly in
nova/db/sqlalchemy/api.py -> _get_db_conf.  Tempest run fails.

Per the oslo.config documentation, the "deprecated_name" flag would
create an alias on the conf. namespace.  However, this has no effect,
even if I remove the other deprecated parameters completely:

cfg.IntOpt('connection_recycle_time',
   default=3600,
   deprecated_name='idle_timeout',

a simple unit test fails to see a value for
conf.connection_recycle_time, including if I add
"deprecated_group='DATABASE'" which is the group that's in this
specific test (however this would not be a solution anyway because
projects use different group names).

From this, it would appear that oslo.config has made it impossible to
deprecate the name of an option because DeprecatedOpt() has no means
of providing the value as an alias on the conf. object.   There's not
even a way I could have projects like nova make a forwards-compatible
change here.

Is this a bug in oslo.config or in oslo.db's usage of oslo.config?



[openstack-dev] os-vif 1.6.1 release for pike.

2017-07-18 Thread Mooney, Sean K

Hi

We are approaching the non-client library freeze on Thursday.
Below is a list of pending patches that I think would be good to review for 
inclusion in Pike.

Should have:
Improve OVS Representor Lookup  https://review.openstack.org/#/c/484051/
Add support for VIFPortProfileOVSRepresentor 
https://review.openstack.org/#/c/483921/
unplug_vf_passthrough: don't try to delete representor netdev 
https://review.openstack.org/#/c/478820/


Nice to have:
Add memoize function using oslo.cache https://review.openstack.org/#/c/472773/
Read datapath_type from VIF object https://review.openstack.org/#/c/474914/
doc: Remove cruft from releasenotes conf.py 
https://review.openstack.org/#/c/480092/2

Queens:
add host port profile info class https://review.openstack.org/#/c/441590/
Add abstract OVSDB API https://review.openstack.org/#/c/476612/
Add native implementation OVSDB API https://review.openstack.org/#/c/482226/
*Migration from 'ip' commands to pyroute2 
https://review.openstack.org/#/c/484386/
*Convert all 'ip-link set' commands to pyroute2 
https://review.openstack.org/#/c/451433/
Add Virtual Ethernet device pair  https://review.openstack.org/#/c/484726/
objects: Add 'dns_domain' attribute to 'Network' 
https://review.openstack.org/#/c/480630/
Add Constraints support https://review.openstack.org/#/c/413325/


*These do the same thing.

The items in the should-have list are required to complete the Netronome and 
Mellanox hardware-accelerated OVS integration.

The nice-to-have items are small cleanups that are not vital but would be nice 
to merge sooner rather than later. The remaining items, while I would like to 
see them merged, I think need more work, so I would suggest moving them to 
Queens.

If people have time it would be good to review these items today, and I will 
submit a patch to
https://github.com/openstack/releases/blob/master/deliverables/pike/os-vif.yaml 
to introduce version 1.6.1 tomorrow.

Regards
Sean.




Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-07-18 Thread Zane Bitter

On 18/07/17 10:55, Lance Bragstad wrote:


Would Keystone folks be happy to allow persistent credentials once
we have a way to hand out only the minimum required privileges?


If I'm understanding correctly, this would make application 
credentials dependent on several cycles of policy work. Right?


I think having the ability to communicate deprecations through 
oslo.policy would help here. We could use it to move towards better 
default roles, which requires being able to set minimum privileges.


Using the current workflow requires operators to define the minimum 
privileges for whatever is using the application credential, and work 
that into their policy. Is that the intended workflow that we want to 
put on the users and operators of application credentials?


The plan is to add an authorisation mechanism that is user-controlled 
and independent of the (operator-controlled) policy. The beginnings of 
this were included in earlier drafts of the spec, but were removed in 
patch set 19 in favour of leaving them for a future spec:


https://review.openstack.org/#/c/450415/18..19/specs/keystone/pike/application-credentials.rst

- ZB



Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-07-18 Thread Zane Bitter

On 17/07/17 23:12, Lance Bragstad wrote:

Would Keystone folks be happy to allow persistent credentials once
we have a way to hand out only the minimum required privileges?


If I'm understanding correctly, this would make application credentials 
dependent on several cycles of policy work. Right?


My thought here was that if this were the case (i.e. persistent 
credentials are OK provided the user can lock down the privileges) then 
you could make a case that the current spec is on the right track. For 
now we implement the application credentials as non-persistent, people 
who know about it use at their own risk, and for people who don't 
there's no exposure. Later on we add the authorisation stuff and relax 
the non-persistence requirement.


On further reflection, I'm not convinced by this - if we care about 
protecting people who don't intentionally use/know about the feature 
now, then we should probably still care once the tools are in place for 
the people who are using it intentionally to lock it down tightly.


So I'm increasingly convinced that we need to do one of two things. Either:

* Agree with Colleen (elsewhere in the thread) that persistent 
application credentials are still better than the status quo and 
reinstate the project-scoped lifecycle in accordance with original 
intent of the spec; or


* Agree that the concerns raised by Morgan & Adam will always apply, and 
look for a solution that gives us automatic key rotation - which might 
be quite different in shape (I can elaborate if necessary).


(That said, I chatted about this briefly with Monty yesterday and he 
said that his recollection was that there is a long-term solution that 
will keep everyone happy. He'll try to remember what it is once he's 
finished on the version discovery stuff he's currently working on.)



I'm trying to avoid taking a side here because everyone is right. 
Currently anybody who wants to do anything remotely 'cloudy' (i.e. have 
the application talk to OpenStack APIs) has to either share their 
personal password with the application (and by extension their whole 
team) or to do the thing that is the polar opposite of cloud: file a 
ticket with IT to get a service user account added, and share that 
password instead. And this really is a disaster for 
OpenStack. On the other hand, allowing the creation of persistent 
application credentials in the absence of regular automatic rotation 
does create risk for those folks who are not aggressively auditing them 
(perhaps because they have no legitimate use for them) and the result is 
likely to be lots of clouds disabling them by policy, keeping their 
users in the dark age of IT-ticket-filing and frustrating 
our interoperability goals.


It is possible in theory to satisfy both via the 'instance users' 
concept, but the Nova team's response to this has consistently been 
"prove to us that this has to be in Nova". Well, here's your answer.


cheers,
Zane.



Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-07-18 Thread Lance Bragstad


On 07/17/2017 10:12 PM, Lance Bragstad wrote:
>
>
> On Mon, Jul 17, 2017 at 6:39 PM, Zane Bitter  > wrote:
>
> So the application credentials spec has merged - huge thanks to
> Monty and the Keystone team for getting this done:
>
> https://review.openstack.org/#/c/450415/
> 
> 
> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/pike/application-credentials.html
> 
> 
>
> However, it appears that there was a disconnect in how two groups
> of folks were reading the spec that only became apparent towards
> the end of the process. Specifically, at this exact moment:
>
> 
> http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2017-06-09.log.html#t2017-06-09T17:43:59
> 
> 
>
> To summarise, Keystone folks are uncomfortable with the idea of
> application credentials that share the lifecycle of the project
> (rather than the user that created them), because a consumer could
> surreptitiously create an application credential and continue to
> use that to access the OpenStack APIs even after their User
> account is deleted. The agreed solution was to delete the
> application credentials when the User that created them is
> deleted, thus tying the lifecycle to that of the User.
>
> This means that teams using this feature will need to audit all of
> their applications for credential usage and rotate any credentials
> created by a soon-to-be-former team member *before* removing said
> team member's User account, or risk breakage. Basically we're
> relying on users to do the Right Thing (bad), but when they don't
> we're defaulting to breaking [some of] their apps over leaving
> them insecure (all things being equal, good).
>
> Unfortunately, if we do regard this as a serious problem, I don't
> think this solution is sufficient. Assuming that application
> credentials are stored on VMs in the project for use by the
> applications running on them, then anyone with access to those
> servers can obtain the credentials and continue to use them even
> if their own account is deleted. The solution to this is to rotate
> *all* application keys when a user is deleted. So really we're
> relying on users to do the Right Thing (bad), but when they don't
> we're defaulting to breaking [some of] their apps *and*
> [potentially] leaving them insecure (worst possible combination).
>
> (We're also being inconsistent, because according to the spec if
> you revoke a role from a User then any application credentials
> they've created that rely on that role continue to work. It's only
> if you delete the User that they're revoked.)
>
>
> As far as I can see, there are only two solutions to the
> fundamental problem:
>
> 1) Fine-grained user-defined access control. We can minimise the
> set of things that the application credentials are authorised to
> do. That's out of scope for this spec, but something we're already
> planning as a future enhancement.
> 2) Automated regular rotation of credentials. We can make sure
> that whatever a departing team member does manage to hang onto
> quickly becomes useless.
>
> By way of comparison, AWS does both. There's fine-grained defined
> access control in the form of IAM Roles, and these Roles can be
> associated with EC2 servers. The servers have an account with
> rotating keys provided through the metadata server. I can't find
> the exact period of rotation documented, but it's on the order of
> magnitude of 1 hour.
>
> There's plenty not to like about this design. Specifically, it's
> 2017 not 2007 and the idea that there's no point offering to
> segment permissions at a finer grained level than that of a VM no
> longer holds water IMHO, thanks to SELinux and containers. It'd be
> nice to be able to provide multiple sets of credentials to
> different services running on a VM, and it's probably essential to
> our survival that we find a way to provide individual credentials
> to containers. Nevertheless, what they have does solve the problem.
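The metadata-server pattern described above could be sketched roughly like this. This is pure illustration: fetch_credentials() stands in for a cloud's instance-metadata endpoint, and the one-hour period is the order of magnitude mentioned, not a documented value.

```python
import time

ROTATION_PERIOD = 3600  # seconds; the order of magnitude mentioned above

def fetch_credentials(now):
    # Stand-in for the instance-metadata endpoint a real cloud exposes.
    # Each rotation period yields a fresh short-lived key.
    generation = int(now // ROTATION_PERIOD)
    return {"key_id": "key-%d" % generation,
            "expires_at": (generation + 1) * ROTATION_PERIOD}

class RotatingCredentials:
    """The app holds no long-lived secret: it re-fetches short-lived
    credentials and refreshes them shortly before they expire."""

    def __init__(self, clock=time.time):
        self._clock = clock
        self._creds = None

    def current(self):
        now = self._clock()
        # Refresh if we have nothing cached, or expiry is within 60s.
        if self._creds is None or now >= self._creds["expires_at"] - 60:
            self._creds = fetch_credentials(now)
        return self._creds

creds = RotatingCredentials(clock=lambda: 0.0)
print(creds.current()["key_id"])  # -> key-0
```

The security property is the one argued for above: whatever a departing team member manages to copy becomes useless within one rotation period.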
>
> Note that there's pretty much no sane way for the user to automate
> credential rotation themselves, because it's turtles all the way
> down. e.g. it's easy in principle to set up a Heat template with a
> Mistral workflow that will rotate the credentials for you, but
> they'll do so using trusts that are, in turn, tied back to the
> consumer who created the stack. (It suddenly occurs to me that
> this is a problem that 

Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the Vitrage Graph

2017-07-18 Thread Mytnyk, VolodymyrX
Hi Ifat,

The temporary fix works. Alarm is cleared when ovs port back to 
up.

Thank you!

Regards,
Volodymyr

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.a...@nokia.com]
Sent: Monday, July 17, 2017 12:18 PM
To: OpenStack Development Mailing List (not for usage questions) 

Cc: Tahhan, Maryam 
Subject: Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the 
Vitrage Graph

Hi Volodymyr,

Thinking about this again, I understand that Vitrage is not supposed to use the 
Collectd message as part of the alarm unique key. We currently have a bug with 
clearing Collectd alarms, as a result of the Vitrage ID refactoring that 
happened in Pike. Until we fix it, you can use this patch [1] for your demo.

[1] https://review.openstack.org/#/c/484300

Let me know if it helped,
Ifat.


From: "Afek, Ifat (Nokia - IL/Kfar Sava)" 
>
Date: Sunday, 16 July 2017 at 12:28
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Cc: "Tahhan, Maryam" >
Subject: Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the 
Vitrage Graph

Hi Volodymyr,

According to the vitrage-collector.log, when the alarm is cleared it has a 
different message:

Raise alarm:
{'vitrage_datasource_action': 'update', 'resource_name': u'qvo818dd156-be', 
u'severity': u'WARNING', u'plugin': u'ovs_events', 'vitrage_entity_type': 
'collectd', u'id': u'd211725834f26fa268016d8b23adf7d7', 'vitrage_sample_date': 
'2017-07-14 07:31:21.405670+00:00', u'host': u'silpixa00399503', u'time': 
1500017481.363748, u'collectd_type': u'gauge', u'plugin_instance': 
u'qvo818dd156-be', u'type_instance': u'link_status', 'vitrage_event_type': 
u'collectd.alarm.warning', u'message': u'link state of "qvo818dd156-be" 
interface has been changed to "DOWN"', 'resource_type': u'neutron.port'}

Clear alarm:
{'vitrage_datasource_action': 'update', 'resource_name': u'qvo818dd156-be', 
u'severity': u'OK', u'plugin': u'ovs_events', 'vitrage_entity_type': 
'collectd', u'id': u'd211725834f26fa268016d8b23adf7d7', 'vitrage_sample_date': 
'2017-07-14 07:31:35.851112+00:00', u'host': u'silpixa00399503', u'time': 
1500017495.841522, u'collectd_type': u'gauge', u'plugin_instance': 
u'qvo818dd156-be', u'type_instance': u'link_status', 'vitrage_event_type': 
u'collectd.alarm.ok', u'message': u'link state of "qvo818dd156-be" interface 
has been changed to "UP"', 'resource_type': u'neutron.port'}

The ‘message’ is converted to the name of the alarm, which is considered part 
of its unique key. If the message is changed from “DOWN” to “UP”, we don’t 
recognize that it’s the same alarm.
Any idea how this can be solved? Can you modify the message so it will be the 
same in both cases? Or is there another field that can uniquely identify the 
alarm?
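One possible direction, sketched below: derive the alarm's unique key only from fields that are identical in the raise and clear notifications quoted above, and treat 'message' as display text. The choice of fields is an assumption for illustration, not Vitrage's actual key logic.

```python
# Field names come from the collectd notifications quoted above; which of
# them should form the key is an assumption, not Vitrage behaviour.
STABLE_FIELDS = ("host", "plugin", "plugin_instance", "type_instance",
                 "resource_type", "resource_name")

def alarm_key(notification):
    """Build a key from fields that stay constant across raise/clear."""
    return tuple(notification.get(f) for f in STABLE_FIELDS)

raise_event = {"host": "silpixa00399503", "plugin": "ovs_events",
               "plugin_instance": "qvo818dd156-be",
               "type_instance": "link_status",
               "resource_type": "neutron.port",
               "resource_name": "qvo818dd156-be",
               "severity": "WARNING",
               "message": 'link state ... changed to "DOWN"'}
clear_event = dict(raise_event, severity="OK",
                   message='link state ... changed to "UP"')

# Both notifications now map to the same alarm, despite different messages.
assert alarm_key(raise_event) == alarm_key(clear_event)
```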

Thanks,
Ifat.


From: "Mytnyk, VolodymyrX" 
>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Friday, 14 July 2017 at 10:56
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Cc: "Tahhan, Maryam" >
Subject: Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the 
Vitrage Graph

Hi Ifat,

Thank you for fixing the issue. The patch works and I’m able to 
map the alarm to port now. Also, as a workaround, I was able to fix/resolve the 
issue by creating the static datasource (attached static_port.yaml) and 
disabling the neutron port datasource in the vitrage.conf.

Another issue that I still observe is the deletion of the alarm from the graph 
when an OK collectd notification is sent (the port becomes up). Currently, it is 
not removed from the entity graph. Is it an issue in Vitrage too? Attaching 
all logs (collected using the fix provided by you).

The 3rd issue is the Vitrage-Mistral integration, but I will describe this as a 
separate mail thread.

Thanks and Regards,
Volodymyr

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.a...@nokia.com]
Sent: Thursday, July 13, 2017 5:47 PM
To: OpenStack Development Mailing List (not for usage questions) 
>
Cc: Tahhan, Maryam >
Subject: Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the 
Vitrage Graph

Hi Volodymyr,

I believe that this change[1] will fix your problem.

[1] https://review.openstack.org/#/c/482212/

Best Regards,
Ifat.

From: "Mytnyk, VolodymyrX" 

[openstack-dev] [RFC v2] [OVS/NOVA] Vhost-user backends cross-version migration support

2017-07-18 Thread Maxime Coquelin

This is a revival of a thread I initiated earlier this year [0], which
I had to postpone due to other priorities.

First, I'd like to thank the reviewers of my first proposal; this new
version tries to address the comments made:
 1. It is Nova's role, and not Libvirt's, to query hosts' supported
compatibility modes and to select one, since Nova adds the vhost-user
ports and has visibility on other hosts. Hence I removed the libvirt ML
and added the OpenStack one to the recipient list.
 2. By default, the compatibility version selected is the most recent
one, except if the admin selects an older compat version.

The goal of this thread is to draft a solution based on the outcomes
of discussions with contributors of the different parties (DPDK/OVS
/Nova/...).

I'm really interested on feedback from OVS & Nova contributors,
as my experience with these projects is rather limited.

Problem statement:
==

When migrating a VM from one host to another, the interfaces exposed by
QEMU must stay unchanged in order to guarantee a successful migration.
In the case of vhost-user interface, parameters like supported Virtio
feature set, max number of queues, max vring sizes,... must remain
compatible. Indeed, since the frontend is not re-initialized, no
re-negotiation happens at migration time.

For example, we have a VM that runs on host A, which has its vhost-user
backend advertising the VIRTIO_F_RING_INDIRECT_DESC feature. Since the Guest
also supports this feature, it is successfully negotiated, and the guest
transmits packets using indirect descriptor tables, which the backend
knows how to handle.

At some point, the VM is being migrated to host B, which runs an older
version of the backend not supporting this VIRTIO_F_RING_INDIRECT_DESC
feature. The migration would break, because the Guest still have the
VIRTIO_F_RING_INDIRECT_DESC bit set, and the virtqueue contains some
descriptors pointing to indirect tables that backend B doesn't know how
to handle.
This is just an example about Virtio features compatibility, but other
backend implementation details could cause other failures. (e.g.
configurable queue sizes)

What we need is to be able to query the destination host's backend to
ensure migration is possible before it is initiated.

The below proposal has been drafted based on how Qemu manages machine types:

Proposal


The idea is to have a table of supported version strings in OVS,
associated with key/value pairs. Nova or any other management tool could
query OVS for the list of supported version strings for each host.
By default, the latest compatibility version will be selected, but the
admin can manually select an older compatibility mode in order to ensure
successful migration to an older destination host.

Then, Nova would add OVS's vhost-user port, passing the selected
version (compatibility mode) as an extra parameter.

Before starting the VM migration, Nova will ensure both source and
destination hosts' vhost-user interfaces run in the same compatibility
modes, and will prevent it if this is not the case.

For example host A runs OVS-2.7, and host B OVS-2.6.
Host A's OVS-2.7 has an OVS-2.6 compatibility mode (e.g. with indirect
descriptors disabled), which should be selected at vhost-user port add
time to ensure migration will succeed to host B.

The advantage of doing so is that Nova does not need any update if new keys
are introduced (i.e. it does not need to know how the new keys have to
be handled); all these checks remain in OVS's vhost-user implementation.

Ideally, we would support per vhost-user interface compatibility mode,
which may have an impact also on DPDK API, as the Virtio feature update
API is global, and not per port.
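The selection step Nova would perform could look roughly like this. It is a sketch under the assumption that version strings sort sensibly; real strings like "upstream.ovs-dpdk.v2.6" would need a proper comparison rule.

```python
# Sketch of the compat-mode selection described above. The management
# layer picks the newest mode both hosts support, unless the admin has
# pinned an older one. Version-string ordering is simplified here.

def select_compat_mode(src_modes, dst_modes, admin_choice=None):
    common = set(src_modes) & set(dst_modes)
    if not common:
        # No shared mode: migration must be refused before it starts.
        raise RuntimeError("no common vhost-user compatibility mode")
    if admin_choice is not None:
        if admin_choice not in common:
            raise RuntimeError("%s not supported on both hosts" % admin_choice)
        return admin_choice
    # Newest common mode, assuming lexicographically sortable strings.
    return max(common)

host_a = ["upstream.ovs-dpdk.v2.6", "upstream.ovs-dpdk.v2.7"]
host_b = ["upstream.ovs-dpdk.v2.6"]
print(select_compat_mode(host_a, host_b))  # -> upstream.ovs-dpdk.v2.6
```

Before initiating the migration, the same check confirms that the port on each side was actually added with that mode.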

- Implementation:
-

The goal here is just to illustrate this proposal; I'm sure you will have
good suggestions to improve it.
In OVS vhost-user library, we would introduce a new structure, for
example (neither compiled nor tested):

struct vhostuser_compat {
  char *version;
  uint64_t virtio_features;
  uint32_t max_rx_queue_sz;
  uint32_t max_nr_queues;
};

*version* field is the compatibility version string. It could be
something like: "upstream.ovs-dpdk.v2.6". In case for example Fedora
adds some more patches to its package that would break migration to
upstream version, it could have a dedicated compatibility string:
"fc26.ovs-dpdk.v2.6". In case OVS-v2.7 does not break compatibility with
previous OVS-v2.6 version, then no need to create a new entry, just keep
v2.6 one.

*virtio_features* field is the Virtio features set for a given
compatibility version. When an OVS tag is to be created, it would be
associated with a DPDK version. The Virtio features for that version
would be stored in this field. It would allow upgrading the DPDK
package, for example from v16.07 to v16.11, without breaking migration.
In case the distribution wants to benefit from the latest Virtio
features, it would have to create a new entry to ensure migration
won't be broken.

*max_rx_queue_sz*

Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-07-18 Thread Colleen Murphy
On Tue, Jul 18, 2017 at 1:39 AM, Zane Bitter  wrote:

> So the application credentials spec has merged - huge thanks to Monty and
> the Keystone team for getting this done:
>
> https://review.openstack.org/#/c/450415/
> http://specs.openstack.org/openstack/keystone-specs/specs/
> keystone/pike/application-credentials.html
>
> However, it appears that there was a disconnect in how two groups of folks
> were reading the spec that only became apparent towards the end of the
> process. Specifically, at this exact moment:
>
> http://eavesdrop.openstack.org/irclogs/%23openstack-keystone
> /%23openstack-keystone.2017-06-09.log.html#t2017-06-09T17:43:59
>
> To summarise, Keystone folks are uncomfortable with the idea of
> application credentials that share the lifecycle of the project (rather
> than the user that created them), because a consumer could surreptitiously
> create an application credential and continue to use that to access the
> OpenStack APIs even after their User account is deleted. The agreed
> solution was to delete the application credentials when the User that
> created them is deleted, thus tying the lifecycle to that of the User.
>
> This means that teams using this feature will need to audit all of their
> applications for credential usage and rotate any credentials created by a
> soon-to-be-former team member *before* removing said team member's User
> account, or risk breakage. Basically we're relying on users to do the Right
> Thing (bad), but when they don't we're defaulting to breaking [some of]
> their apps over leaving them insecure (all things being equal, good).
>
> Unfortunately, if we do regard this as a serious problem, I don't think
> this solution is sufficient. Assuming that application credentials are
> stored on VMs in the project for use by the applications running on them,
> then anyone with access to those servers can obtain the credentials and
> continue to use them even if their own account is deleted. The solution to
> this is to rotate *all* application keys when a user is deleted. So really
> we're relying on users to do the Right Thing (bad), but when they don't
> we're defaulting to breaking [some of] their apps *and* [potentially]
> leaving them insecure (worst possible combination).
>
> (We're also being inconsistent, because according to the spec if you
> revoke a role from a User then any application credentials they've created
> that rely on that role continue to work. It's only if you delete the User
> that they're revoked.)
>
>
> As far as I can see, there are only two solutions to the fundamental
> problem:
>
> 1) Fine-grained user-defined access control. We can minimise the set of
> things that the application credentials are authorised to do. That's out of
> scope for this spec, but something we're already planning as a future
> enhancement.
> 2) Automated regular rotation of credentials. We can make sure that
> whatever a departing team member does manage to hang onto quickly becomes
> useless.
>
> By way of comparison, AWS does both. There's fine-grained defined access
> control in the form of IAM Roles, and these Roles can be associated with
> EC2 servers. The servers have an account with rotating keys provided
> through the metadata server. I can't find the exact period of rotation
> documented, but it's on the order of magnitude of 1 hour.
>
> There's plenty not to like about this design. Specifically, it's 2017 not
> 2007 and the idea that there's no point offering to segment permissions at
> a finer grained level than that of a VM no longer holds water IMHO, thanks
> to SELinux and containers. It'd be nice to be able to provide multiple sets
> of credentials to different services running on a VM, and it's probably
> essential to our survival that we find a way to provide individual
> credentials to containers. Nevertheless, what they have does solve the
> problem.
>
> Note that there's pretty much no sane way for the user to automate
> credential rotation themselves, because it's turtles all the way down. e.g.
> it's easy in principle to set up a Heat template with a Mistral workflow
> that will rotate the credentials for you, but they'll do so using trusts
> that are, in turn, tied back to the consumer who created the stack. (It
> suddenly occurs to me that this is a problem that all services using trusts
> are going to need to solve.) Somewhere it all has to be tied back to
> something that survives the entire lifecycle of the project.
>
> Would Keystone folks be happy to allow persistent credentials once we have
> a way to hand out only the minimum required privileges?
>

I agree that in the haste of getting this approved before the spec freeze
deadline we took this in the wrong direction. I think that this spec was
fine before the addition of "Will be deleted when the associated User is
deleted" constraint.

As I understand it, the worry coming from the team is that a user who
sneakily copies the secret keys to an offsite 

Re: [openstack-dev] [all][stable][ptls] Tagging mitaka as EOL

2017-07-18 Thread Andy McCrae
>
>
>
> The branches have now been retired, thanks to Joshua Hesketh!
>

Thanks Josh, Andreas, Tony, and the rest of the Infra crew for sorting this
out.

Andy


Re: [openstack-dev] [TripleO] Migration from Neutron ML2OVS to OVN

2017-07-18 Thread Numan Siddique
On Thu, Jul 13, 2017 at 3:02 PM, Saravanan KR  wrote:

> On Tue, Jul 11, 2017 at 11:40 PM, Ben Nemec 
> wrote:
> >
> >
> > On 07/11/2017 10:17 AM, Numan Siddique wrote:
> >>
> >> Hello Tripleo team,
> >>
> >> I have a few questions regarding migration from neutron ML2OVS to OVN.
> >> Below are some of the requirements:
> >>
> >>   - We want to migrate an existing deployment from the Neutron default
> >> ML2OVS to OVN
> >>   - We are targeting this for the tripleo Queens release.
> >>   - The plan is to first upgrade the tripleo deployment from Pike to
> >> Queens with no changes to neutron, i.e. with neutron ML2OVS. Once the
> >> upgrade is done, we want to migrate to OVN.
> >>   - The migration process will stop all the neutron agents, configure
> >> the neutron server to load the OVN mechanism driver, and start OVN
> >> services (with no or very limited datapath downtime).
> >>   - The migration would be handled by an ansible script. We have a PoC
> >> ansible script which can be found here [1]
> >>
> >> And the questions are
> >> -  (A broad question) - What is the right way to migrate and switch the
> >> neutron plugin ? Can the stack upgrade handle the migration as well ?
> This is going to be a broader problem as it is also required to migrate
> ML2OVS to ODL for NFV deployments, pretty much on the same timeline.
> If I understand correctly, this migration involves stopping the services
> of ML2OVS (like neutron-ovs-agent) and starting the corresponding new
> ML2 (OVN or ODL), along with a few parameter additions and removals.
>
> >> - The migration procedure should be part of tripleo ? or can it be a
> >> standalone ansible script ? (I presume it should be former).
> Each service has upgrade steps which can be associated via ansible
> steps. But this is not a service upgrade. It disables an existing
> service and enables a new one. So I think it would need an explicit
> disabled service [1] to stop the old service, and then enable the new
> service.
>
> >> - If it should be part of the tripleo then what would be the command to
> do
> >> it ? A update stack command with appropriate environment files for OVN ?
> >> - In case the migration can be done  as a standalone script, how to
> handle
> >> later updates/upgrades since tripleo wouldn't be aware of the migration
> ?
> >
> I would also discourage doing it standalone.
>
> Another area which needs to be looked at is whether it should be
> associated with the containers upgrade. Maybe OVN and ODL can be
> migrated as containers only, instead of baremetal by default (just a
> thought, it could have implications to be worked out/discussed).
>
> Regards,
> Saravanan KR
>
> [1] https://github.com/openstack/tripleo-heat-templates/tree/
> master/puppet/services/disabled
>
> >
> > This last point seems like the crux of the discussion here.  Sure, you
> can
> > do all kinds of things to your cloud using standalone bits, but if any of
> > them affect things tripleo manages (which this would) then you're going
> to
> > break on the next stack update.
> >
> > If there are things about the migration that a stack-update can't handle,
> > then the migration process would need to be twofold: 1) Run the
> standalone
> > bits to do the migration 2) Update the tripleo configuration to match the
> > migrated config so stack-updates work.
> >
> > This is obviously a complex and error-prone process, so I'd strongly
> > encourage doing it in a tripleo-native fashion instead if at all
> possible.
> >
>


Thanks Ben and Saravanan for your comments.

I did some testing. I first deployed an overcloud with the command [1] and
then I ran the command [2] which enables the OVN services. After the
completion of [2], all the neutron agents were stopped and all the OVN
services were up.

The question is: is this the right way to disable some services and enable
others? Or is "openstack overcloud update stack" the right command?


[1] - openstack overcloud deploy \
--templates /usr/share/openstack-tripleo-heat-templates \
--libvirt-type qemu --control-flavor oooq_control --compute-flavor
oooq_compute --ceph-storage-flavor oooq_ceph --block-storage-flavor
oooq_blockstorage --swift-storage-flavor oooq_objectstorage --timeout 90 -e
/home/stack/cloud-names.yaml -e
/usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml
-e
/usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml
-e /home/stack/network-environment.yaml -e
/usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml
 -e
/usr/share/openstack-tripleo-heat-templates/environments/low-memory-usage.yaml
-e
/usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml
  --validation-warnings-fatal   --compute-scale 1 --control-scale 3
--ntp-server pool.ntp.org \
${DEPLOY_ENV_YAML:+-e $DEPLOY_ENV_YAML} "$@" && status_code=0 ||
status_code=$?


[2] - openstack overcloud deploy \
--templates 
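As a sketch of the "explicit disabled service plus new service" idea Saravanan describes, the environment file for [2] would boil down to a resource_registry mapping roughly like the one below. This is illustrative only: the resource names and template paths here are not verified against tripleo-heat-templates.

```yaml
# Illustrative only -- resource names and template paths are NOT verified
# against tripleo-heat-templates; this just sketches the "explicit
# disabled service + new service" approach.
resource_registry:
  # Stop/disable the ml2/ovs agent service on the next deploy/update...
  OS::TripleO::Services::NeutronOvsAgent: ../puppet/services/disabled/neutron-ovs-agent-disabled.yaml
  # ...and wire in the OVN services instead.
  OS::TripleO::Services::OVNDBs: ../puppet/services/ovn-dbs.yaml
  OS::TripleO::Services::OVNController: ../puppet/services/ovn-controller.yaml
```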

Re: [openstack-dev] [OpenStack-Ansible] Proposing Markos Chandras for osa-core

2017-07-18 Thread Major Hayden

On 07/18/2017 04:23 AM, Andy McCrae wrote:
> Following on from last week's meeting I'd like to propose Markos (hwoarang) 
> for OSA core.
> 
> Markos has done a lot of good reviews and commits over an extended period of 
> time, and has shown interest in the project as a whole. (Not to mention the 
> addition of SUSE support)
> 
> We already have quite a few +1's from the meeting itself, but opening up to 
> everybody who wasn't available at the meeting!

+1 here!  Anyone that offers to help with the ansible-hardening role is solid 
in my book. ;)

Markos has been doing great work and he's automated quite a few things that we 
used to push around manually. SUSE support has been building out *really* 
quickly, too.

--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] scheduling with custom resouce classes

2017-07-18 Thread Balazs Gibizer



On Mon, Jul 17, 2017 at 6:40 PM, Jay Pipes  wrote:

On 07/17/2017 11:31 AM, Balazs Gibizer wrote:
> On Thu, Jul 13, 2017 at 11:37 AM, Chris Dent wrote:
>> On Thu, 13 Jul 2017, Balazs Gibizer wrote:
>>
>>> /placement/allocation_candidates?resources=CUSTOM_MAGIC%3A512%2CMEMORY_MB%3A64%2CVCPU%3A1"
>>> but placement returns an empty response. Then the nova scheduler falls
>>> back to legacy behavior [4] and places the instance without
>>> considering the custom resource request.
>>
>> As far as I can tell at least one missing piece of the puzzle here
>> is that your MAGIC provider does not have the
>> 'MISC_SHARES_VIA_AGGREGATE' trait. It's not enough for the compute
>> and MAGIC to be in the same aggregate, the MAGIC needs to announce
>> that its inventory is for sharing. The comments here have a bit more
>> on that:
>>
>> https://github.com/openstack/nova/blob/master/nova/objects/resource_provider.py#L663-L678

>
> Thanks a lot for the detailed answer. Yes, this was the missing piece.
> However I had to add that trait both to the MAGIC provider and to my
> compute provider to make it work. Is it intentional that the compute
> also has to have that trait?

No. The compute node doesn't need that trait. It only needs to be
associated to an aggregate that is associated to the provider that is
marked with the MISC_SHARES_VIA_AGGREGATE trait.

In other words, you need to do this:

1) Create the provider record for the thing that is going to share the
CUSTOM_MAGIC resources

2) Create an inventory record on that provider

3) Set the MISC_SHARES_VIA_AGGREGATE trait on that provider

4) Create an aggregate

5) Associate both the above provider and the compute node provider with
the aggregate

That's it. The compute node provider will now have access to the
CUSTOM_MAGIC resources that the other provider has in inventory.
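The five steps above can be sketched as a toy model. This Python is an editor's illustration only, not placement's actual implementation; the class and function names are invented. It shows the rule under discussion: only the *sharing* provider needs the MISC_SHARES_VIA_AGGREGATE trait, and a compute provider reaches that inventory purely through a common aggregate.

```python
# Toy model of how placement resolves shared resources via aggregates.
# NOT the real placement code -- just the rules described above.

SHARING_TRAIT = "MISC_SHARES_VIA_AGGREGATE"


class Provider:
    def __init__(self, name, inventory, traits=(), aggregates=()):
        self.name = name
        self.inventory = dict(inventory)   # e.g. {"VCPU": 8}
        self.traits = set(traits)
        self.aggregates = set(aggregates)


def resources_reachable_from(compute, providers):
    """Resources a compute provider can allocate: its own inventory plus
    the inventory of sharing providers associated via a common aggregate."""
    pools = dict(compute.inventory)
    for p in providers:
        if p is compute:
            continue
        # The *sharing* provider needs the trait; the compute does not.
        if SHARING_TRAIT in p.traits and p.aggregates & compute.aggregates:
            for rc, amount in p.inventory.items():
                pools[rc] = pools.get(rc, 0) + amount
    return pools


def has_candidate(compute, providers, request):
    """True if the compute (plus its shared pools) can satisfy the request."""
    pools = resources_reachable_from(compute, providers)
    return all(pools.get(rc, 0) >= amt for rc, amt in request.items())


if __name__ == "__main__":
    magic = Provider("magic", {"CUSTOM_MAGIC": 1024},
                     traits=[SHARING_TRAIT], aggregates=["agg1"])
    cn = Provider("compute1", {"VCPU": 8, "MEMORY_MB": 4096},
                  aggregates=["agg1"])
    # The trait on the sharing provider alone is enough.
    print(has_candidate(cn, [cn, magic],
                        {"VCPU": 1, "MEMORY_MB": 64, "CUSTOM_MAGIC": 512}))
```

In this model, dropping the trait from the "magic" provider makes the request unsatisfiable, which matches Jay's description but not the behavior gibi observed.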


Something doesn't add up. We tried exactly your order of actions (see
the script [1]) but placement returns an empty result (see the logs of
the script [2], of the scheduler [3], and of placement [4]). However, as
soon as we add the MISC_SHARES_VIA_AGGREGATE trait to the compute
provider as well, placement-api returns allocation candidates as
expected.


We are trying to get some help from the related functional test [5], but
honestly we still need some time to digest those lines of code. So any
direct help is appreciated.


BTW, should I open a bug for it?


As a related question: I looked at the claim in the scheduler patch
https://review.openstack.org/#/c/483566 and I'm wondering if that patch
wants to claim not just the resources a compute provider provides but
also custom resources like MAGIC at [6]. In the meantime I will go and
test that patch to see what it actually does with some MAGIC. :)


Thanks for the help!
Cheers,
gibi

[1] http://paste.openstack.org/show/615707/
[2] http://paste.openstack.org/show/615708/
[3] http://paste.openstack.org/show/615709/
[4] http://paste.openstack.org/show/615710/
[5] 
https://github.com/openstack/nova/blob/0e6cac5fde830f1de0ebdd4eebc130de1eb0198d/nova/tests/functional/db/test_resource_provider.py#L1969
[6] 
https://review.openstack.org/#/c/483566/3/nova/scheduler/filter_scheduler.py@167



Magic. :)

Best,
-jay

> I updated my script with the trait. [3]
>
>>
>> It's quite likely this is not well documented yet as this style of
>> declaring that something is shared was a later development. The
>> initial code that added the support for GET /resource_providers
>> was around, it was later reused for GET /allocation_candidates:
>>
>> https://review.openstack.org/#/c/460798/
>
> What would be a good place to document this? I think I can help with
> enhancing the documentation from this perspective.
>
> Thanks again.
> Cheers,
> gibi
>
>>
>> --
>> Chris Dent  ┬──┬◡ノ(° -°ノ)  https://anticdent.org/
>> freenode: cdent  tw: @anticdent

>
> [3] http://paste.openstack.org/show/615629/
>
>
>
>



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][stable][ptls] Tagging mitaka as EOL

2017-07-18 Thread Andreas Jaeger
On 2017-07-17 19:09, Andreas Jaeger wrote:
> On 2017-07-17 15:51, Andy McCrae wrote:
>> We held back the openstack-ansible repo to allow us to point to the
>> mitaka-eol tag for the other role/project repos.
>> That's been done - I've created the mitaka-eol tag in the
>> openstack-ansible repo, can we remove the stable/mitaka branch from
>> openstack-ansible too.
> 
> So, only from openstack/openstack-ansible?
> 
>> Thanks again to all who have helped EOL all the branches etc!
> 
> I'll add to the list,
> 
> Andreas
> 

The branches have now been retired, thanks to Joshua Hesketh!

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible][nova][libvirt] Passing a hypervisor 'raw' disk through to one of its guests - libvirt errors with ...

2017-07-18 Thread Lawrence J. Albinson
Hello Colleagues,

I have made some progress with passing through a 'raw' scsi disk from an 
openstack hypervisor
to one of its guests. However, I've now hit a libvirt problem and am wondering 
whether others may
have dealt with this already in the ubuntu world (I believe it is resolved in 
the centos world).

On a (recently built) openstack-ansible 15.1.6 environment where one compute 
node is
also a storage node I do the following:

1. define an image of xenial-server with the following properties set:

  hw_disk_bus='scsi'
  hw_scsi_model='virtio-scsi'

2. create a volume on a given compute node:

  # openstack volume create --size 269 --availability-zone special v0
  # openstack volume show v0 -f json | jq .id
  "6dc7de73-7d5b-4410-8c62-a06e722187e9"

3. create an instance on the same compute node:

  # nova boot --flavor xenial-server --image xenial --key-name test-key \
  --nic net-id=0395285e-b548-458b-9db5-f8d869e49a06 --availability-zone 
nova:compute1 \
  --block-device 
id=6dc7de73-7d5b-4410-8c62-a06e722187e9,source=volume,dest=volume,bus=scsi,type=lun
 \
  xenial0

4. Step 3 attempts to build the instance but errors out with the following in 
the
   nova-compute.log on compute1:

  [instance: 97b87f5e-b146-4c40-8128-6c996b8e] libvirtError:
  unsupported configuration: scsi-block 'lun' devices do not support the serial property

(nb: this is the expected instance id)

It seems that this is a bug in libvirt that is fixed in the centos/rhel world.
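For reference, the error comes from the `<serial>` element nova puts into the generated disk XML for the volume. A domain XML fragment along these lines (illustrative only; the source path is hypothetical, and the exact attributes nova emits may differ) is what affected libvirt versions reject for `device='lun'`:

```xml
<!-- Illustrative sketch of the disk element for a volume attached with
     bus=scsi,type=lun. Affected libvirt versions reject the <serial>
     sub-element for device='lun'. -->
<disk type='block' device='lun'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/disk/by-path/...'/>  <!-- hypothetical device path -->
  <target dev='sda' bus='scsi'/>
  <serial>6dc7de73-7d5b-4410-8c62-a06e722187e9</serial>
</disk>
```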

The hypervisor 'compute1' is built with 16.04.2 LTS and fully updated.

Does anyone know of a fix in the ubuntu/debian world?

Or should I be raising this on a xenial forum?

Kind regards, Lawrence

Lawrence J Albinson
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][trove] Retiring the openstack/trove-integration repository

2017-07-18 Thread Jeremy Stanley
On 2017-07-18 07:18:03 -0400 (-0400), Amrith Kumar wrote:
[...]
> Honestly, I think there will be wild partying in the streets when
> trove-integration is finally retired but as I learned from the mysql.qcow2
> fiasco (sorry folks) it is unwise to assume that something is unused.

If it helps, we believe the last patches needed to purge mysql.qcow2
assumptions from stable devstack branches merged yesterday and I
finally took the file back offline again roughly 16 hours ago. No
new complaints so far, at least!
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]notification update week 29

2017-07-18 Thread Balazs Gibizer



On Mon, Jul 17, 2017 at 9:32 PM, Matt Riedemann wrote:

On 7/17/2017 2:36 AM, Balazs Gibizer wrote:
> Hi,
>
> Here is the status update / focus setting mail about notification work
> for week 29.
>
> Bugs
> ----
> [Undecided] https://bugs.launchpad.net/nova/+bug/1684860 Versioned
> server notifications don't include updated_at
> The fix https://review.openstack.org/#/c/475276/ is in focus but
> comments need to be addressed.
>
> [Low] https://bugs.launchpad.net/nova/+bug/1696152 nova notifications
> use nova-api as binary name instead of nova-osapi_compute
> Agreed not to change the binary name in the notifications. Instead we
> make an enum for that name to show that the name is intentional.
> Patch needs review: https://review.openstack.org/#/c/476538/
>
> [Undecided] https://bugs.launchpad.net/nova/+bug/1702667 publisher_id of
> the versioned instance.update notification is not consistent with other
> notifications
> The inconsistency of publisher_ids was revealed by #1696152. Patch needs
> review: https://review.openstack.org/#/c/480984
>
> [Undecided] https://bugs.launchpad.net/nova/+bug/1699115 api.fault
> notification is never emitted
> Still no response on the ML thread about the way forward.
> http://lists.openstack.org/pipermail/openstack-dev/2017-June/118639.html
>
> [Undecided] https://bugs.launchpad.net/nova/+bug/1700496 Notifications
> are emitted per-cell instead of globally
> Fix is to configure a global MQ endpoint for the notifications in cells
> v2. Patch looks good from the notification perspective but affects other
> parts of the system as well: https://review.openstack.org/#/c/477556/
>
>
> Versioned notification transformation
> -------------------------------------
> Last week's merge conflicts are mostly cleaned up and there are 11
> patches waiting for core review:
> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/versioned-notification-transformation-pike+label:Code-Review%253E%253D%252B1+label:Verified%253E%253D1+AND+NOT+label:Verified%253C0+AND+NOT+label:Code-Review%253C0
>
> If you are afraid of the long list then here is a short list of live
> migration related transformations to look at:
> * https://review.openstack.org/#/c/480214/
> * https://review.openstack.org/#/c/420453/
> * https://review.openstack.org/#/c/480119/
> * https://review.openstack.org/#/c/469784/
>
>
> Searchlight integration
> -----------------------
> bp additional-notification-fields-for-searchlight
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> The BDM addition has been merged.
>
> As the last piece of the bp we are still missing the "Add tags to
> instance.create notification" patch https://review.openstack.org/#/c/459493/
> but that depends on supporting tags at instance boot
> https://review.openstack.org/#/c/394321/ which is getting closer to
> being merged. Focus is on these patches.
>
> There is a set of follow-up patches for the BDM addition to optimize
> the payload generation, but these are not mandatory for the functionality:
> https://review.openstack.org/#/c/483324/
>
>
> Instability of the notification sample tests
> --------------------------------------------
> Multiple instabilities of the sample tests were detected last week. The
> nova functional test fails intermittently for at least two distinct
> reasons:
> * https://bugs.launchpad.net/nova/+bug/1704423 _test_unshelve_server
> intermittently fails in functional versioned notification tests
> A possible solution was found, a fix is proposed and it only needs a
> second +2: https://review.openstack.org/#/c/483986/
> * https://bugs.launchpad.net/nova/+bug/1704392
> TestInstanceNotificationSample.test_volume_swap_server fails with
> "testtools.matchers._impl.MismatchError: 7 != 6"
> A patch that improves logging of the failure has been merged
> https://review.openstack.org/#/c/483939/ and a detailed log is now
> available to look at:
> http://logs.openstack.org/82/482382/4/check/gate-nova-tox-functional-ubuntu-xenial/38a4cb4/console.html#_2017-07-16_01_14_36_313757
>
>
> Small improvements
> ~~~~~~~~~~~~~~~~~~
> * https://review.openstack.org/#/c/428199/ Improve assertJsonEqual
> error reporting
> * https://review.openstack.org/#/q/topic:refactor-notification-samples
> Factor out duplicated notification sample data
> This is the start of a longer patch series to deduplicate notification
> sample data. The third patch already shows how much sample data can be
> deleted from the nova tree. We added a minimal hand-rolled json ref
> implementation to the notification sample tests, as the existing Python
> json ref implementations are not well maintained.
>
>
> Weekly meeting
> --------------
> The notification subteam holds its weekly meeting on Tuesday 17:00 UTC
> on openstack-meeting-4. The next meeting will be held on the 18th of July.
> https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170718T17
>
> Cheers,
> gibi

Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-18 Thread Sanjay Upadhyay
On Tue, Jul 18, 2017 at 12:21 PM, Flavio Percoco  wrote:

> On 17/07/17 16:48 -0400, Ryan Hallisey wrote:
>
>> One other thing to mention. Maybe folks can speed up writing these
>> playbooks by using kolla-ansible's playbooks as a shell. Here's an
>> example: [1] Take lines 1-16 and replace it with helm install mariadb
>> or
>> kubectl create -f mariadb-pod.yaml and set the inventory to localhost.
>> Just a thought.
>>
>> There may be some other playbooks out there I don't know about that you
>> can use, but that could at least get some of the collaboration started
>> so folks don't have to start from scratch.
>>
>> [1] - https://github.com/openstack/kolla-ansible/blob/afdd11b9a22e
>> cca70962a4637d89ad50b7ded2e5/ansible/roles/mariadb/tasks/start.yml#L1-L16
>>
>
> +1 This is why I think there's still room for collaboration and we can
> re-use
> several of the existing things. I don't think everything would have to be
> written from scratch.
>
>
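Ryan's suggestion quoted above could look roughly like the play below: a kolla-ansible-style task list with the container start logic swapped for a Kubernetes call, run against localhost. This is an illustrative sketch only; "mariadb-pod.yaml" is a hypothetical manifest name.

```yaml
# Illustrative sketch only: kolla-ansible-style structure with the
# docker start tasks replaced by a Kubernetes call.
- name: Deploy mariadb on Kubernetes
  hosts: localhost
  connection: local
  tasks:
    - name: Create the mariadb pod
      command: kubectl create -f mariadb-pod.yaml
    # or, via helm:
    # - name: Install a mariadb chart
    #   command: helm install mariadb
```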

Not sure if this is an important criterion; however, are we also looking at
using OCI, so that users can perhaps choose between container platforms?

On the upgrade path we are already using ansible playbooks. How hard would
it be to do a major migration from the current tripleo to a new standard,
whichever is chosen?

One more thing I need to emphasize is that probably *only* tripleo, as it
is now, addresses the datacenter/telco-specific network paradigm, i.e. what
is done via os-net-config (bonding, SR-IOV, DPDK). We would want to have
that addressed, or at least have a plan for it in whatever path we choose.

regards
/sanjay


> Flavio
>
> --
> @flaper87
> Flavio Percoco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Ansible] Proposing Markos Chandras for osa-core

2017-07-18 Thread Amy Marrich
+1 from the meeting, and for posterity!

Amy (spotz)

On Tue, Jul 18, 2017 at 4:23 AM, Andy McCrae  wrote:

> Following on from last week's meeting I'd like to propose Markos
> (hwoarang) for OSA core.
>
> Markos has done a lot of good reviews and commits over an extended period
> of time, and has shown interest in the project as a whole. (Not to mention
> the addition of SUSE support)
>
> We already have quite a few +1's from the meeting itself, but opening up
> to everybody who wasn't available at the meeting!
>
> Thanks,
> Andy
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][trove] Retiring the openstack/trove-integration repository

2017-07-18 Thread Amrith Kumar
This is an early warning that the openstack/trove-integration repository
was last used for Trove in the Newton release. Ocata, and now Pike use
elements and things that are in the trove repository.

This is a heads-up that when Newton goes EOL, the trove-integration
repository will be retired.

If you have anything which depends on this repository, or things that are
generated from this repository, speak sometime between now and early
November this year, or forever hold your peace.

Honestly, I think there will be wild partying in the streets when
trove-integration is finally retired but as I learned from the mysql.qcow2
fiasco (sorry folks) it is unwise to assume that something is unused.

Thanks,

-amrith
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Ansible] Proposing Markos Chandras for osa-core

2017-07-18 Thread Yolanda Robla Mota
+1!
I work with Markos in other OPNFV projects, and he is great.

On Tue, Jul 18, 2017 at 11:57 AM, Jean-Philippe Evrard <
jean-phili...@evrard.me> wrote:

> +1 Thanks for all the work!
>
> On Tue, Jul 18, 2017 at 10:33 AM, Nicolas Bock <
> nicolasbock.openst...@gmail.com> wrote:
>
>> On Tue, Jul 18, 2017 at 10:23:46AM +0100, Andy McCrae wrote:
>>
>>> Following on from last week's meeting I'd like to propose Markos
>>> (hwoarang)
>>> for OSA core.
>>>
>>
>> +1 !!
>>
>> Markos has done a lot of good reviews and commits over an extended period
>>> of time, and has shown interest in the project as a whole. (Not to
>>> mention
>>> the addition of SUSE support)
>>>
>>> We already have quite a few +1's from the meeting itself, but opening up
>>> to
>>> everybody who wasn't available at the meeting!
>>>
>>> Thanks,
>>> Andy
>>>
>>
>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Yolanda Robla Mota

Principal Software Engineer, RHCE

Red Hat



C/Avellana 213

Urb Portugal

yrobl...@redhat.comM: +34605641639


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Ansible] Proposing Markos Chandras for osa-core

2017-07-18 Thread Jean-Philippe Evrard
+1 Thanks for all the work!

On Tue, Jul 18, 2017 at 10:33 AM, Nicolas Bock <
nicolasbock.openst...@gmail.com> wrote:

> On Tue, Jul 18, 2017 at 10:23:46AM +0100, Andy McCrae wrote:
>
>> Following on from last week's meeting I'd like to propose Markos
>> (hwoarang)
>> for OSA core.
>>
>
> +1 !!
>
> Markos has done a lot of good reviews and commits over an extended period
>> of time, and has shown interest in the project as a whole. (Not to mention
>> the addition of SUSE support)
>>
>> We already have quite a few +1's from the meeting itself, but opening up
>> to
>> everybody who wasn't available at the meeting!
>>
>> Thanks,
>> Andy
>>
>
> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Ansible] Proposing Markos Chandras for osa-core

2017-07-18 Thread Jesse Pretorius
In case the meeting vote is lost somehow, you still have my +1 – looking 
forward to working more with Markos!

From: Andy McCrae 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, July 18, 2017 at 10:23 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [OpenStack-Ansible] Proposing Markos Chandras for 
osa-core

Following on from last week's meeting I'd like to propose Markos (hwoarang) for 
OSA core.

Markos has done a lot of good reviews and commits over an extended period of 
time, and has shown interest in the project as a whole. (Not to mention the 
addition of SUSE support)

We already have quite a few +1's from the meeting itself, but opening up to 
everybody who wasn't available at the meeting!

Thanks,
Andy






Rackspace Limited is a company registered in England & Wales (company 
registered number 03897010) whose registered office is at 5 Millington Road, 
Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be 
viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may 
contain confidential or privileged information intended for the recipient. Any 
dissemination, distribution or copying of the enclosed material is prohibited. 
If you receive this transmission in error, please notify us immediately by 
e-mail at ab...@rackspace.com and delete the original message. Your cooperation 
is appreciated.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Ansible] Proposing Markos Chandras for osa-core

2017-07-18 Thread Nicolas Bock

On Tue, Jul 18, 2017 at 10:23:46AM +0100, Andy McCrae wrote:

Following on from last week's meeting I'd like to propose Markos (hwoarang)
for OSA core.


+1 !!


Markos has done a lot of good reviews and commits over an extended period
of time, and has shown interest in the project as a whole. (Not to mention
the addition of SUSE support)

We already have quite a few +1's from the meeting itself, but opening up to
everybody who wasn't available at the meeting!

Thanks,
Andy



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Ansible] Proposing Markos Chandras for osa-core

2017-07-18 Thread Andy McCrae
Following on from last week's meeting I'd like to propose Markos (hwoarang)
for OSA core.

Markos has done a lot of good reviews and commits over an extended period
of time, and has shown interest in the project as a whole. (Not to mention
the addition of SUSE support)

We already have quite a few +1's from the meeting itself, but opening up to
everybody who wasn't available at the meeting!

Thanks,
Andy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Blazar] skip weekly meeting

2017-07-18 Thread Masahito MUROI

Hi Blazar folks,

Based on the discussion in the previous meeting, today's weekly meeting is
skipped because some of the core members can't join.


best regards,
Masahito


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glare][TC] Application for inclusion of Glare in the list of official projects

2017-07-18 Thread Thierry Carrez
Erno Kuvaja wrote:
> [...] 
> So this raises a question: if it is an integral part of Glare becoming
> part of the big tent, does it mean that we as the OpenStack community
> fundamentally agree on that goal? Because in that case I would be way
> less supportive. [...]

Approving Glare as an official OpenStack project wouldn't bless it as
Glance NG, it would just be recognizing that Glare is being developed by
members of the OpenStack community, following our open development
principles, and that the scope of the project is helping with the
OpenStack mission.

I think we are mostly comfortable with the first part (the principles
match). We are slightly less comfortable with the second part (the scope
match). If we determined that Glare is actively hurting the OpenStack
mission (by adding confusion, gratuitously duplicating effort, or
introducing interoperability issues), we would likely have to reject it.

Mike's answers are reassuring, so I'm looking forward to continuing this
discussion on this list and at the Denver PTG, and finally make the call
at the start of the Queens cycle.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-18 Thread Flavio Percoco

On 17/07/17 16:48 -0400, Ryan Hallisey wrote:

One other thing to mention: maybe folks can speed up writing these
playbooks by using kolla-ansible's playbooks as a shell. Here's an
example: [1]. Take lines 1-16 and replace them with helm install mariadb
or kubectl create -f mariadb-pod.yaml, and set the inventory to localhost.
Just a thought.

There may be some other playbooks out there I don't know about that you
could use, but that could at least get some of the collaboration started
so folks don't have to start from scratch.

[1] - 
https://github.com/openstack/kolla-ansible/blob/afdd11b9a22ecca70962a4637d89ad50b7ded2e5/ansible/roles/mariadb/tasks/start.yml#L1-L16
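To make the suggestion concrete, here's a minimal sketch of what such a
stripped-down play could look like. This is purely illustrative (the file
names mariadb.yml and mariadb-pod.yaml are hypothetical, and it assumes
kubectl is installed and already pointed at the target cluster); it is not
the actual kolla-ansible role:

```yaml
# mariadb.yml -- illustrative sketch only.
# Replaces kolla-ansible's container start tasks (lines 1-16 of [1])
# with a single kubectl call, run from localhost.
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Deploy mariadb pod on Kubernetes
      command: kubectl create -f mariadb-pod.yaml
      register: result
      # kubectl create exits non-zero if the resource already exists,
      # so treat "AlreadyExists" as success to keep reruns safe
      failed_when:
        - result.rc != 0
        - "'AlreadyExists' not in result.stderr"
```

The same shape would work with `command: helm install mariadb ...` instead
of the raw kubectl call.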


+1 This is why I think there's still room for collaboration and we can re-use
several of the existing things. I don't think everything would have to be
written from scratch.

Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev