Re: [openstack-dev] [Manila] Attach/detach semantics

2015-11-18 Thread Ben Swartzlander

On 11/18/2015 05:31 AM, John Spray wrote:

On Wed, Nov 18, 2015 at 4:33 AM, Ben Swartzlander  wrote:

On 11/17/2015 10:02 AM, John Spray wrote:


Hi all,

As you may know, there is ongoing work on a spec for Nova to define an
"attach/detach" API for tighter integration with Manila.

The concept here is that this mechanism will be needed to implement
hypervisor mediated FS access using vsock, but that the mechanism
should also be applicable more generally to an "attach" concept for
filesystems accessed over IP networks (like existing NFS filers).

In the hypervisor-mediated case, attach would involve the hypervisor
host connecting as a filesystem client and then re-exporting to the
guest via a local address.  We think this would apply to
driver_handles_share_servers type drivers that support share networks,
by mapping the attach/detach share API to attaching/detaching the
share network from the guest VM.

Does that make sense to people maintaining this type of driver?  For
example, for the netapp and generic drivers, is it reasonable to
expose nova attach/detach APIs that attach and detach the associated
share network?



I'm not sure this proposal makes sense. I would like the share attach/detach
semantics to be the same for all types of shares, regardless of the driver
type.

The main challenge with attaching to shares on share servers (with share
networks) is that there may not exist a network route from the hypervisor to
the share server, because share servers are only required to be accessible
from the share network from which they are created. This has been a known
problem since Liberty, because this behaviour prevents migration from
working; therefore we're proposing a mechanism for share-server drivers to
provide admin-network-facing interfaces for all share servers. This same
mechanism should be usable by Nova when doing share attach/detach. Nova
would just need to list the export locations using an admin context to see
the admin-facing export location that it should use.


For these drivers, we're not proposing connecting to them from the
hypervisor -- we would still be connecting directly from the guest via
the share network.

The change would be from the existing workflow:
  * Create share
  * Attach guest network to guest VM (need to look up network info,
talk to neutron API)


I think this is the point of confusion. The design for share networks in 
Manila is to re-use your existing networks. A VM with no network 
connection is not particularly useful -- we assume that all VMs have at 
least 1 private network. The idea behind the share network is to map 
shares onto that private network so that the above step is unnecessary.


It's true that if you have _no_ network at all or if you have a 
particularly complicated network configuration, you may need to do an 
additional network attachment to access Manila shares. That should be an 
exceptional case though, not the norm.



  * Add IP access permission for the guest to access the share (need to
know IP of the guest)


If the guest has a private network and it's mapped to the share network, 
then the access only needs to be granted to the private IP. I agree it 
would be nicer to grant access by instance ID rather than IP, but in 
reality that mapping can be performed automatically and cheaply.



  * Mount from guest VM

To a new workflow:
  * Create share
  * Attach share to guest (knowing only share ID and guest instance ID)
  * Mount from guest VM

The idea is to abstract the networking part away, so that the user
just has to say "I want to be able to mount share X from guest Y",
without knowing about the networking stuff going on under the hood.
While this is partly because it's slicker, this is mainly so that
applications can use IP-networking shares interchangeably with future
hypervisor mediated shares: they call "attach" and don't have to worry
about whether that's a share network operation under the hood or a
hypervisor-twiddling operation under the hood.
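As an illustration of the abstraction being proposed, the sketch below models one user-facing attach call whose behaviour depends on the driver type. All class and function names are invented for illustration; this is not the actual Nova or Manila API.

```python
class ShareNetworkDriver:
    """IP-networking share (e.g. NetApp or generic driver): attaching
    means connecting the share network to the guest VM."""

    def attach(self, share_id, instance_id):
        return "attached share network of %s to guest %s" % (share_id, instance_id)


class HypervisorMediatedDriver:
    """vsock-style share: the hypervisor host mounts the filesystem and
    re-exports it to the guest via a local address."""

    def attach(self, share_id, instance_id):
        return "hypervisor re-exported %s to guest %s over vsock" % (share_id, instance_id)


def attach_share(driver, share_id, instance_id):
    # The user-facing operation: "make share X mountable from guest Y".
    # Whether this is a share-network operation or hypervisor twiddling
    # is hidden behind this call.
    return driver.attach(share_id, instance_id)
```

Either way, the caller knows only the share ID and the instance ID, which is the property the new workflow is after.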


I think we are already closer than you think. In most cases the only 
missing automation is knowing how to turn an instance ID into an IP 
address to give to Manila's access-allow API. There are 2 potential 
solutions to that problem:


1) Add an API to Manila, or a layer on top of Manila, that gives 
grant-access-by-instance-ID semantics.


2) Do the nova attach implementation and always use the hypervisor IP 
instead of the guest IP when talking to Manila.
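Option 1 could be a thin layer that resolves the instance ID to its private IP and then calls the existing access-allow API. A minimal sketch follows, with dictionaries standing in for the Nova and Manila services (all names are invented for illustration):

```python
# Stand-ins for the Nova and Manila services in this sketch.
NOVA_INSTANCES = {"vm-1234": {"private_ip": "10.0.0.5"}}
MANILA_ACCESS_RULES = []


def access_allow(share_id, access_type, access_to):
    # Equivalent of today's "manila access-allow <share> ip <address>".
    MANILA_ACCESS_RULES.append((share_id, access_type, access_to))


def grant_access_by_instance_id(share_id, instance_id):
    # The one piece of missing automation Ben mentions:
    # instance ID -> private IP, then grant access by that IP.
    ip = NOVA_INSTANCES[instance_id]["private_ip"]
    access_allow(share_id, "ip", ip)
    return ip
```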



John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [opnfv-tech-discuss] How can I install Redhat-OSP using Fuel

2015-11-18 Thread LUFei
Hi Steven and Tim,

Thank you for your reply. (And sorry for my late response; I was 
preparing for my exams.)

My company has decided to try OSP Director before making further decisions.
Just a few more questions, hoping the answers will make my work easier. :-)

1) I have learnt that both OSP Director and Apex are based on TripleO, and both 
are supported by Red Hat.
    I'm sort of confused about the relationship between OSP Director and Apex.
    Are OSP Director and Apex the same thing or different? If different, what's 
the difference?

2) What further development have you done from the TripleO base to 
OSP Director / Apex?

3) Is Foreman/QuickStack having a new release in Brahmaputra?

Thank you,
Kane

> Date: Wed, 11 Nov 2015 12:04:51 -0500
> From: tro...@redhat.com
> To: sha...@redhat.com; kane0...@hotmail.com
> CC: openstack-dev@lists.openstack.org; opnfv-tech-disc...@lists.opnfv.org
> Subject: Re: [opnfv-tech-discuss] [openstack-dev] [Fuel][fuel] How can I 
> install Redhat-OSP using Fuel
>
>
> Thanks for pointing that out Steve. Just to add to this - Fei, in case you 
> are also unaware the midstream version of OSP Director is called RDO Manager, 
> which is supported by the OPNFV Apex installer.
>
> Tim Rozet
> Red Hat SDN Team
>
> - Original Message -
> From: "Steven Hardy" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Cc: opnfv-tech-disc...@lists.opnfv.org
> Sent: Wednesday, November 11, 2015 7:46:54 AM
> Subject: Re: [opnfv-tech-discuss] [openstack-dev] [Fuel][fuel] How can I 
> install Redhat-OSP using Fuel
>
> On Tue, Nov 10, 2015 at 02:15:02AM +, Fei LU wrote:
>> Greeting Fuel teams,
>> My company is working on the installation of virtualization
>> infrastructure, and we have noticed Fuel is a great tool, much better than
>> our own installer. The question is that Mirantis is currently supporting
>> OpenStack on CentOS and Ubuntu, while my company is using Redhat-OSP.
>> I have read all the Fuel documents, including fuel dev doc, but I haven't
>> found the solution how can I add my own release into Fuel. Or maybe I'm
>> missing something.
>> So, would you guys please give some guide or hints?
>
> I'm guessing you already know this, but just in case - the
> install/management tool for recent versions of RHEL-OSP is OSP director,
> which is based directly on another OpenStack deployment project, TripleO.
>
> So, it's only fair to point out that you may have a much easier time
> participating in the TripleO community if your aim is primarily to support
> deploying RHEL-OSP or RDO distributions.
>
> http://docs.openstack.org/developer/tripleo-docs/
>
> There are various pros/cons and differences between the TripleO and Fuel
> tooling, but I hope that over time we can work towards less duplication and
> more reuse between the two efforts.
>
> Steve
> ___
> opnfv-tech-discuss mailing list
> opnfv-tech-disc...@lists.opnfv.org
> https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss
  
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][sfc] How could an L2 agent extension access agent methods ?

2015-11-18 Thread Andreas Scheuring
Perfect.

The agent will have all static hooks for the extensions in place, like
they are used in today's agents (the modular agent was derived from the
existing lb agent). The knowledge of which concrete extension
implementation to choose (e.g. lb) comes from the implementation-specific
manager class that is required for instantiating the modular agent. So
it is ensured that with lb you get the lb extensions, and with sriov you
get the sriov extensions.

There are no plans to make extensions more "modular" (whatever this
means in this context) as well in the first round. But we can discuss
for a second stage.

Thanks

-- 
Andreas
(IRC: scheuran)



On Mi, 2015-11-18 at 15:28 +0100, Ihar Hrachyshka wrote:
> Andreas Scheuring  wrote:
> 
> > Hi all,
> > I wonder if this is somehow in conflict with the modular l2 agent
> > approach I'm currently following up for linuxbridge, macvtap and sriov?
> > - RFE: [1]
> > - First patchset [2]
> >
> > I don't think so, but to be sure I wanted to raise it up.
> 
> I don’t believe it’s in conflict, though generally, I suggest you move  
> extensions code into modular l2 agent pieces, if possible. We will have  
> extensions enabled for lb, ovs, and sr-iov at the least in Mitaka.
> 
> Ihar
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [opnfv-users] [opnfv-tech-discuss] How can I install Redhat-OSP using Fuel

2015-11-18 Thread Dan Radez
Hello LUFei,

Tim and I are working on Apex together, Let me see if I can fill in some
gaps for you to help understand some of the relationships of these
projects and the game plan for their future.

1) You are correct, both Director and Apex are based on TripleO. The
difference is that Director is a supported product of Red Hat, while Apex
is a community project specific to OPNFV. Another item in this mix that
should be mentioned is RDO Manager (http://rdoproject.org). Apex is
built on top of RDO Manager.

At Red Hat, for every product we have we also support a community
project, so Director and RDO Manager are the product/community-project
counterparts of each other. If you want an RPM-based installation of
OpenStack, you should choose one of these, in light of whether you would
like it enterprise supported or community supported.

Apex is a Community project that does not have a product counterpart.
Its purpose is to add NFV capabilities to RDO Manager and build release
artifacts for the OPNFV project according to OPNFV requirements.

2) Director / RDO Manager have added a setup tool called Instack that
will set up an undercloud for you, and they provide documentation on how
to interact with Instack to deploy your overcloud.

3) Apex is the new name for the project that was the Foreman/Quickstack
installer. We will not release an updated version of Foreman/Quickstack
in OPNFV. Instead it will be replaced by Apex.

We hope to have Apex ready to start being consumed by users in a
preview-like status sometime this week or next. There are a couple more
patches we are trying to land in the codebase before it will be ready to use.

One in particular is the installation instructions. I have a
work-in-progress patch here:
https://github.com/radez/apex/blob/installation-instructions/docs/src/installation-instructions.rst

Please feel free to start reading these and provide any feedback or ask
questions.

Please let me know if you have more questions that we can answer to help
clarify things for you.

Dan Radez
freenode: radez




On 11/18/2015 10:13 AM, LUFei wrote:
> Hi Steven and Tim,
> 
> Thank you for your reply. (And sorry for my late respond because I was 
> preparing my exams.)
> 
> My company has decided to try OSP Director before making further decisions.
> Just few more questions, hoping the answer will make my work easier. :-)
> 
> 1) I have learnt that both OSP Director and Apex are based on TripleO, and 
> both supported by Redhat.
> I'm sort of confused about the relationship between OSP Director and Apex.
> Are OSP Director and Apex the same thing or different? If different, 
> what's the difference?
> 
> 2) What's the further development you guys have done from the TripleO base to 
> OSP Director / Apex?
> 
> 3) Is Foreman/QuickStack having a new release in Brahmaputra?
> 
> Thank you,
> Kane
> 
>> Date: Wed, 11 Nov 2015 12:04:51 -0500
>> From: tro...@redhat.com
>> To: sha...@redhat.com; kane0...@hotmail.com
>> CC: openstack-dev@lists.openstack.org; opnfv-tech-disc...@lists.opnfv.org
>> Subject: Re: [opnfv-tech-discuss] [openstack-dev] [Fuel][fuel] How can I 
>> install Redhat-OSP using Fuel
>>
>>
>> Thanks for pointing that out Steve. Just to add to this - Fei, in case you 
>> are also unaware the midstream version of OSP Director is called RDO 
>> Manager, which is supported by the OPNFV Apex installer.
>>
>> Tim Rozet
>> Red Hat SDN Team
>>
>> - Original Message -
>> From: "Steven Hardy" 
>> To: "OpenStack Development Mailing List (not for usage questions)" 
>> 
>> Cc: opnfv-tech-disc...@lists.opnfv.org
>> Sent: Wednesday, November 11, 2015 7:46:54 AM
>> Subject: Re: [opnfv-tech-discuss] [openstack-dev] [Fuel][fuel] How can I 
>> install Redhat-OSP using Fuel
>>
>> On Tue, Nov 10, 2015 at 02:15:02AM +, Fei LU wrote:
>>> Greeting Fuel teams,
>>> My company is working on the installation of virtualization
>>> infrastructure, and we have noticed Fuel is a great tool, much better than
>>> our own installer. The question is that Mirantis is currently supporting
>>> OpenStack on CentOS and Ubuntu, while my company is using Redhat-OSP.
>>> I have read all the Fuel documents, including fuel dev doc, but I haven't
>>> found the solution how can I add my own release into Fuel. Or maybe I'm
>>> missing something.
>>> So, would you guys please give some guide or hints?
>>
>> I'm guessing you already know this, but just in case - the
>> install/management tool for recent versions of RHEL-OSP is OSP director,
>> which is based directly on another OpenStack deployment project, TripleO.
>>
>> So, it's only fair to point out that you may have a much easier time
>> participating in the TripleO community if your aim is primarily to support
>> deploying RHEL-OSP or RDO distributions.
>>
>> http://docs.openstack.org/developer/tripleo-docs/
>>
>> There are various pros/cons and 

Re: [openstack-dev] [OpenStack-Infra] [Infra] Gerrit downtime on November 18 2015

2015-11-18 Thread Anita Kuno
Sorry this got posted to the infra mailing list and missed the dev
mailing list.

Anita.

On 11/17/2015 08:04 PM, Spencer Krum wrote:
> Hello,
> 
> The infra team will not be upgrading gerrit tomorrow, as a result the
> planned outage will not occur. We do not have a date yet that we will be
> doing the upgrade, when we get one we will send out a notice to the
> list. If you have any questions you can hop in #openstack-infra and ask.
> 
> Thanks,
> Spencer
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable][neutron] How we handle Kilo backports

2015-11-18 Thread Ihar Hrachyshka

Hi all,

as per [1] I imply that all projects under stable-maint-core team  
supervision must abide by the stable policy [2], which limits the types of  
backports for N-2 branches (now it’s stable/kilo) to "Only critical  
bugfixes and security patches”. With that, I remind all stable core members  
about the rule.


Since we are limited to ‘critical bugfixes’ only, and since there is no  
clear definition of what ‘critical’ means, I guess we should define it for  
ourselves.


In Neutron world, we usually use Critical importance for those bugs that  
break gate. High is used for those bugs that have high impact production  
wise. With that in mind, I suggest we define ‘critical’ bugfixes as  
Critical + High in LP. Comments on that?


(My understanding is that we can also advocate for the change in the global  
policy if we think the ‘critical only’ rule should be relaxed, but till  
then it makes sense to stick to what policy says.)


[1]  
http://lists.openstack.org/pipermail/openstack-dev/2015-November/078649.html

[2] http://docs.openstack.org/project-team-guide/stable-branches.html

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Redundant checks in form values

2015-11-18 Thread Suraj Deshmukh
In file [1], in `class AddRule`, method `_clean_rule_icmp`, why are checks
performed on whether `icmp_type` or `icmp_code` is `None` or in the range
`-1` to `255`?

What I mean here is that in `class AddRule`, while values are accepted
from the form data and stored, validators do their job of checking
whether each field is valid, so why is there a redundant check in the
method `_clean_rule_icmp`?

Please correct me if I am wrong in understanding anything. Currently I am
working on Bug #1511748 [2]. Previously, while checking the validity of
`icmp_type` and `icmp_code`, the functionality for TCP ports was used. This
is wrong because TCP ports have a range of 0 to 65535, while `icmp_type`
and `icmp_code` have a range of 0 to 255.

So now oslo_utils.netutils has dedicated functionality to check whether
`icmp_type` and `icmp_code` are valid; here is a recent code merge [3].

So I was trying to add this newly added functionality into Horizon, but the
test cases run older code, and hence I needed help getting my head around
the source.
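For reference, the checks amount to something like the following (function names here are illustrative, not necessarily the exact oslo_utils.netutils API):

```python
def is_valid_icmp_type(value):
    # ICMP type is a single octet: 0-255 (unlike TCP ports, 0-65535).
    try:
        return 0 <= int(value) <= 255
    except (TypeError, ValueError):
        return False


def is_valid_icmp_code(value):
    # ICMP code shares the 0-255 range; None is commonly allowed to
    # mean "any code".
    if value is None:
        return True
    try:
        return 0 <= int(value) <= 255
    except (TypeError, ValueError):
        return False
```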


[1]
openstack_dashboard/dashboards/project/access_and_security/security_groups/forms.py
[2] https://bugs.launchpad.net/horizon/+bug/1511748
[3] https://review.openstack.org/#/c/240661/

Thanks and Regards
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] [nova] default-next-release opt flag

2015-11-18 Thread Belmiro Moreira
With my operator hat on I think the release notes are the right place
for these changes.

Belmiro

On Wed, Nov 18, 2015 at 4:58 PM, Alexis Lee  wrote:

> Sylvain Bauza said on Wed, Nov 18, 2015 at 04:48:50PM +0100:
> > >This is just for the case of "we're going to change the default on the
> > >next release". I think in that case, a release note is the right thing.
> > >We don't need new fancy code for that.
> >
> > I usually hate saying just +1 in an email, but...
> >
> > +1
>
> OK, happy to drop this idea. Thanks for feedback.
>
>
> Alexis (lxsli)
> --
> Nova developer, Hewlett-Packard Limited.
> Registered Office: Cain Road, Bracknell, Berkshire RG12 1HN.
> Registered Number: 00690597 England
> VAT number: GB 314 1496 79
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][infra][neutron] branches for release-independent projects targeting Openstack release X

2015-11-18 Thread Thomas Morin

Hi everyone,

The starting point for this post is a specific Neutron sub-project 
(networking-bgpvpn), but I believe the issues raised are shared with 
other Neutron stadium projects and possibly relevant beyond Neutron, to 
projects tagged release-independent in general.


In the context of the networking-bgpvpn project, we have a few unsolved 
issues related to branches, more specifically about which branches we 
can create to host active development, to make our subproject (a Neutron 
service plugin) work with the latest OpenStack release and to backport 
it to earlier releases.

Here are precisely the assumptions that we've made, largely based on the 
fact that the project is tagged 'release-independent' (meaning 
"completely bypass the 6-month cycle and release independently" [1]):
a) that we can make a release targeting Liberty much later than the 
release date of Liberty
b) that we could make releases more frequently than the 6-month cycle; 
not only bugfix releases but also feature releases
c) that the idea of a release-independent project being backported to 
work with past Openstack releases is acceptable (of course, without 
requiring any change in release-managed projects, something largely 
possible in many cases for projects such as a Neutron service plugin or 
an ML2 driver)


Note that we aren't the only big tent project having this kind of 
expectations (e.g. [3]).


The rest of this email follows from these assumptions.

To do (a) and (c) we need separate branches in which to do active work.
We have understood clearly that given the contract around 'stable/*' 
branches, the branches in which to do this active work cannot be named 
'stable/*'.


Where we are today:
- we do active development, planning a liberty release soon, on our 
'master' branch
- we have 'backport/kilo' and 'backport/juno' branches in which we 
plan to make the necessary changes to adapt to Neutron kilo and juno; 
these branches were created by Neutron core devs for us [5], based on 
the common understanding that choosing 'stable/*' branch names for 
branches with active development would be a huge no-no

Let me take a few examples of the issues we hit...

I apologize in advance for imprecisions that may result from my limited 
experience with the Openstack project at large, and CI subtleties in 
particular.


### Continuous Integration issue

Our master branch is written to work with stable/liberty of the other 
projects we depend on (neutron, python-neutronclient, devstack for the 
CI devstack VMs).  The issue we hit, at least as I understand it, is 
that the CI system (at least the devstack VM parts) has a built-in 
implicit assumption that branch X of project foo is meant to work with 
branch X of project bar, and that assumption does not hold in our case.


The solution we could consider is to tweak the CI gate jobs to add jobs 
that force a specific branch to be used for the key OpenStack projects 
our jobs depend on (e.g. devstack and neutron in our case), and to 
filter which of these jobs is triggered for each specific branch of our 
project.


For instance:
- define a gate-install-dsvm-networking-bgpvpn-kilo with branch-override 
stable/kilo  (will result in creating a devstack stable/kilo with 
Neutron stable/kilo inside)
- configure zuul to apply this job *only if* the gerrit change 
triggering the test is for our backport/kilo branch
(do that again for our juno branch, and again to map our master to 
Openstack stable/liberty)


It seems like it would work, although possibly too verbose.
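The mapping described above boils down to a small lookup table: each branch of the sub-project is pinned to the OpenStack branch it actually targets, and the CI default ("same branch name everywhere") is only the fallback. A sketch:

```python
# Branch overrides for networking-bgpvpn as described above; branches
# absent from the table fall back to the CI default of the same name.
BRANCH_OVERRIDES = {
    "master": "stable/liberty",   # master currently targets Liberty
    "backport/kilo": "stable/kilo",
    "backport/juno": "stable/juno",
}


def target_branch(project_branch):
    # Which branch of devstack/neutron a job for this branch should use.
    return BRANCH_OVERRIDES.get(project_branch, project_branch)
```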

With the infra team support we already have something close to this 
configuration in place for our master (two jobs, one to test with 
master, one to force testing against stable/liberty). [4]


### Synchronizing requirements issue

Trying to be good pupils we had activated the check-requirements jobs in 
our zuul config, and the requirements proposal bot as well.
Just like the CI system, these requirements tools have a built-in 
implicit assumption that branch X of project foo is meant to work with 
branch X of project bar, which breaks in our case.


The issues we have are the following:
1- the bot is proposing, on our 'master' branch, updates to 
requirements.txt to bring us in sync with the requirements 'master' 
branch, which is irrelevant since our master in fact targets Liberty  
-- this is easily "solved": we just ignore those proposed changes
2- the bot won't be able to propose anything relevant on our 
backport/kilo and backport/juno branches, since it does not know they 
target stable/kilo and stable/juno  -- we can live with that and update 
requirements ourselves
3- we can't update our requirements.txt on our master branch: we want to 
add requirements conforming with Liberty, but the check-requirements 
job thinks this is wrong since it checks our master branch against its 
master branch (which now has Mitaka requirements)
4- same as 3 for our backport/kilo and backport/juno branches (the job 
does not know 

Re: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken

2015-11-18 Thread Andrew Laski

On 11/18/15 at 08:32am, Andrew Laski wrote:

On 11/17/15 at 07:43pm, Jay Pipes wrote:

On 11/17/2015 05:43 PM, Matt Riedemann wrote:

I found some time to work on a reverse sort of nova's tables for the db
archive command, that looks like [1].  It works fine in the unit tests,
but fails because the deleted instances are referenced by
instance_actions that aren't deleted.  I don't see any DB APIs for deleting
instance actions.

Were we just planning on instance_actions living forever in the database?


Not as far as I understand.


They were never intended to live forever.  However there is a use 
case for holding on to the deleted action so that someone could query 
when or by whom their instance was deleted.  But the current API does 
not provide a good way to query for that so this may be something 
better left to the growing list of things that Tasks could address.





Should we soft delete instance_actions when we delete the referenced
instance?


No.


A few of us discussed this in #openstack-nova and highlighted that soft 
deleting them would make it easier to find things to purge/archive but 
would require handling older instance actions that were not soft 
deleted.  So there's a slight advantage to them after a data cleanup of 
old ones, but no strong technical advantage to going either way here.





Or should we (hard) delete instance_actions when we archive (move to
shadow tables) soft deleted instances?


Yes.


This seems good to me.  Though we probably want operator feedback on 
whether it's important to them to have these archived as well.
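The archive step being agreed on here can be illustrated with a toy schema (SQLite, table layout heavily simplified): hard-delete the instance_actions rows that reference a soft-deleted instance, then move the instance row to its shadow table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE instances (id INTEGER PRIMARY KEY, deleted INTEGER DEFAULT 0);
CREATE TABLE shadow_instances (id INTEGER PRIMARY KEY, deleted INTEGER);
CREATE TABLE instance_actions (
    id INTEGER PRIMARY KEY,
    instance_id INTEGER REFERENCES instances(id),
    action TEXT);
""")
# One live instance, one soft-deleted instance with a recorded action.
cur.execute("INSERT INTO instances (id, deleted) VALUES (1, 0), (2, 1)")
cur.execute("INSERT INTO instance_actions (instance_id, action) VALUES (2, 'delete')")

# Archive: hard-delete actions referencing soft-deleted instances first
# (otherwise the foreign key blocks the move), then shuffle the instance
# rows into the shadow table.
cur.execute("DELETE FROM instance_actions WHERE instance_id IN "
            "(SELECT id FROM instances WHERE deleted = 1)")
cur.execute("INSERT INTO shadow_instances SELECT id, deleted "
            "FROM instances WHERE deleted = 1")
cur.execute("DELETE FROM instances WHERE deleted = 1")
conn.commit()
```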




Best,
-jay


This is going to be a blocker to getting nova-manage db
archive_deleted_rows working.

[1] https://review.openstack.org/#/c/246635/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][neutron] How we handle Kilo backports

2015-11-18 Thread Carl Baldwin
On Wed, Nov 18, 2015 at 9:44 AM, Ihar Hrachyshka  wrote:
> Hi all,
>
> as per [1] I imply that all projects under stable-maint-core team
> supervision must abide the stable policy [2] which limits the types of
> backports for N-2 branches (now it’s stable/kilo) to "Only critical bugfixes
> and security patches”. With that, I remind all stable core members about the
> rule.
>
> Since we are limited to ‘critical bugfixes’ only, and since there is no
> clear definition of what ‘critical’ means, I guess we should define it for
> ourselves.
>
> In Neutron world, we usually use Critical importance for those bugs that
> break gate. High is used for those bugs that have high impact production
> wise. With that in mind, I suggest we define ‘critical’ bugfixes as Critical
> + High in LP. Comments on that?

I was wondering about this today too.  Ihar is correct about how we
use Critical importance in launchpad for Neutron bugs.  The number of
Critical neutron bugs is very small and most of them are not relevant
to stable releases because they are targeted at gate breakage incurred
by new development in master.

I'll +1 that we should extend this to Critical + High in launchpad.
Otherwise, we would severely limit our ability to backport important
bug fixes to a stable release that is only 6 months old and many
deployers are only beginning to turn their attention to it.

> (My understanding is that we can also advocate for the change in the global
> policy if we think the ‘critical only’ rule should be relaxed, but till then
> it makes sense to stick to what policy says.)

+1

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][All] Make CONF.set_override with paramter enforce_type=True by default

2015-11-18 Thread Doug Hellmann
Excerpts from Markus Zoeller's message of 2015-11-17 18:10:14 +0100:
> ChangBo Guo  wrote on 11/17/2015 01:29:53 PM:
> 
> > From: ChangBo Guo 
> > To: "OpenStack Development Mailing List (not for usage questions)" 
> > 
> > Date: 11/17/2015 01:37 PM
> > Subject: [openstack-dev] [oslo][All] Make CONF.set_override with 
> > paramter enforce_type=True by default
> > 
> > Hi ALL,
> 
> > 1. Problems :
> >oslo_config provides method CONF.set_override[1] , developers 
> > usually use it to change config option's value in tests. That's 
> convenient .
> >By default  parameter enforce_type=False,  it doesn't check any 
> > type or value of override. If set enforce_type=True , will check 
> parameter
> > override's type and value.  In production code (runtime code), 
> > oslo_config always checks a config option's value.
> >In short, we test and run code in different ways, so there's a gap: 
> > a config option with a wrong type or invalid value can pass tests when
> >parameter enforce_type=False in consuming projects.  That means 
> > some invalid or wrong tests are in our code base.
> >There is nova POC result when I enable "enforce_type=true" [2],  
> > and I must fix them in [3]
> > 
> >[1] https://github.com/openstack/oslo.config/blob/master/
> > oslo_config/cfg.py#L2173
> >[2] http://logs.openstack.org/16/242416/1/check/gate-nova-python27/
> > 97b5eff/testr_results.html.gz
> >[3]  https://review.openstack.org/#/c/242416/  https://
> > review.openstack.org/#/c/242717/  
> https://review.openstack.org/#/c/243061/
> > 
> > 2. Proposal 
> >1) Make  method CONF.set_override with  enforce_type=True  in 
> > consuming projects. and  fix violations when  enforce_type=True in each 
> project.
> > 
> >   2) Make  method CONF.set_override with  enforce_type=True by default
> > in oslo_config
> > 
> >Hope some one from consuming projects can help make  
> > enforce_type=True in consuming projects and fix violations,
> > 
> >You can find more details and comments  in  https://
> > etherpad.openstack.org/p/enforce_type_true_by_default
> > 
> > -- 
> > ChangBo Guo(gcb)
> 
> 
> Does it make sense to introduce a hacking check once your changes are
> merged? To prevent new things from slipping into Nova during the period
> until "oslo.config" changes the default value?

Good suggestion! I found that to be a big help when removing the namespace
packages from the Oslo libs during Kilo.

Doug
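The hacking check Markus suggests could look roughly like the sketch below. It is written in the general style of nova/hacking/checks.py, but the check name (N3xx) and the function are assumptions for illustration, not an existing nova rule:

```python
import re

# Flag CONF.set_override calls that do not pass enforce_type=True.
# A hacking check receives one logical line and yields (offset, message)
# tuples for each violation it finds.
SET_OVERRIDE_RE = re.compile(r"\.set_override\(")
ENFORCE_TYPE_RE = re.compile(r"enforce_type\s*=\s*True")

def check_set_override_enforce_type(logical_line):
    """N3xx: CONF.set_override should be called with enforce_type=True."""
    if (SET_OVERRIDE_RE.search(logical_line)
            and not ENFORCE_TYPE_RE.search(logical_line)):
        yield (0, "N3xx: use enforce_type=True with CONF.set_override")

# A call without enforce_type=True is flagged; one with it is not.
print(len(list(check_set_override_enforce_type(
    "CONF.set_override('debug', 'maybe')"))))                    # 1
print(len(list(check_set_override_enforce_type(
    "CONF.set_override('debug', True, enforce_type=True)"))))    # 0
```

Registered with flake8 the usual way, this would catch new violations in review while the oslo.config default is still False.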

> 
> Regards, Markus Zoeller (markus_z)
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic][security] what is OK to put in DEBUG logs?

2015-11-18 Thread Ruby Loo
Hi,

I think we all agree that it isn't OK to log credentials (like passwords)
in DEBUG logs. However, what about other information that might be
sensitive? A patch was recently submitted to log (in debug) the SWIFT
temporary URL [1]. I agree that it would be useful for debugging, but since
that temporary URL could be used (by someone who has access to the logs
but no admin access to ironic/glance), e.g. for fetching private images, is
it OK?

Even though we say that debug shouldn't be used in production, we can't
enforce what folks choose to do. And we know of at least one company that
runs their production environment with the debug setting. Which isn't to
say we shouldn't put things in debug, but I think it would be useful to
have some guidelines as to what we can safely expose or not.

I took a quick look at the security web page [2] but nothing jumped out at
me wrt this issue.

Thoughts?

--ruby

[1] https://review.openstack.org/#/c/243141/
[2] https://security.openstack.org
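One guideline that would address cases like [1] without losing debuggability is to log the URL with the signature masked. A rough stdlib sketch (the helper and the example URL are illustrative, not ironic code; the temp_url_sig/temp_url_expires query parameters are Swift's standard TempURL fields):

```python
import re

def redact_temp_url(url):
    """Mask the signature in a Swift temporary URL before DEBUG-logging it."""
    # Keep the rest of the URL intact so the log line is still useful for
    # debugging; only the secret part is replaced.
    return re.sub(r"(temp_url_sig=)[^&]+", r"\1***", url)

url = ("https://swift.example.com/v1/AUTH_tenant/glance/image-1234"
       "?temp_url_sig=da39a3ee5e6b&temp_url_expires=1448000000")
print(redact_temp_url(url))  # signature replaced by ***, expiry left as-is
```

With this, someone reading the logs can see which object was fetched and when the URL expires, but cannot reuse the URL.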
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][upgrade] new 'all things upgrade' subteam

2015-11-18 Thread Akihiro Motoki
I missed this mailing thread. Thanks for coordinating the effort!
Mon 1500UTC works for me too.

2015-11-18 23:36 GMT+09:00 Ihar Hrachyshka :
> Thanks everyone for responses.
>
> It seems like Mon 15:00 UTC works for all of us, so I pushed the following
> patch to book #openstack-meeting-2 room weekly:
>
> https://review.openstack.org/246949
>
> Ihar
>
>
> Rossella Sblendido  wrote:
>
>> Hi Ihar,
>>
>> same for me, all options are ok!
>>
>> cheers,
>>
>> Rossella
>>
>> On 11/12/2015 11:00 PM, Martin Hickey wrote:
>>>
>>> Hi Ihar,
>>>
>>> Any of those options would suit me, thanks.
>>>
>>> Cheers,
>>> Martin
>>>
>>>
>>>
>>>
>>> From:   Ihar Hrachyshka 
>>> To: "OpenStack Development Mailing List (not for usage questions)"
>>>  
>>> Date:   12/11/2015 21:39
>>> Subject:Re: [openstack-dev] [neutron][upgrade] new 'all things
>>>  upgrade'   subteam
>>>
>>>
>>>
>>> Artur  wrote:
>>>
 My TZ is UTC +1:00.
 Do we have any favorite day? Maybe Tuesday?
>>>
>>>
>>> I believe Tue is already too packed with IRC meetings to be considered
>>> (we have, at the least, the main neutron meeting and neutron-drivers
>>> meeting there).
>>>
>>> We have folks in the US and Central Europe and Russia and Japan… I
>>> believe the best time would be somewhere around 13:00 to 15:00 UTC
>>> (that time would still be ‘before midnight’ for Japan; afternoon for
>>> Europe, and morning for the US East Coast).
>>>
>>> I have checked neutron meetings at [1], and I see that we have 13:00 UTC
>>> slots free for all days; 14:00 UTC slot available for Thu; and 15:00 UTC
>>> slots for Mon and Fri (I don’t believe we want to have it on Fri though).
>>> Also overall Mondays are all free.
>>>
>>> Should I create a doodle for those options? Or are there any alternative
>>> suggestions?
>>>
>>> [1]:
>>> http://git.openstack.org/cgit/openstack-infra/irc-meetings/tree/meetings
>>>
>>> Ihar
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] [nova] default-next-release opt flag

2015-11-18 Thread Alexis Lee
Sylvain Bauza said on Wed, Nov 18, 2015 at 04:48:50PM +0100:
> >This is just for the case of "we're going to change the default on the
> >next release". I think in that case, a release note is the right thing.
> >We don't need new fancy code for that.
> 
> I usually hate saying just +1 in an email, but...
> 
> +1

OK, happy to drop this idea. Thanks for feedback.


Alexis (lxsli)
-- 
Nova developer, Hewlett-Packard Limited.
Registered Office: Cain Road, Bracknell, Berkshire RG12 1HN.
Registered Number: 00690597 England
VAT number: GB 314 1496 79

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] trove unit tests failing on stable/kilo

2015-11-18 Thread Matt Riedemann



On 11/17/2015 10:37 PM, Nikhil Manchanda wrote:

Thanks for putting up that fix Matt.

The dependency on trunk python-troveclient (for stable/kilo) definitely
seems screwy -- but I'm wondering if this was done for backwards
compatibility reasons? (i.e. to ensure the latest version of
python-troveclient should be able to work correctly against all previous
releases of trove.)


If that was the plan, https://review.openstack.org/#/c/210004/ totally 
blows that up since it's a backward incompatible change and was released 
in a minor version rather than a major version.  That change is really 
what's breaking stable/kilo trove unit tests.




Either way, I think we should be honoring the requirements specified for
the respective
releases in g-r, so I think that this is the right fix.

Cheers,
Nikhil



On Tue, Nov 17, 2015 at 7:42 PM, Matt Riedemann
> wrote:



On 11/17/2015 9:27 PM, Matt Riedemann wrote:



On 11/17/2015 9:22 PM, Matt Riedemann wrote:

I noticed this failing today:


http://logs.openstack.org/81/206681/3/check/gate-trove-python27/45d645d/console.html#_2015-11-17_17_07_28_061



Looks like https://review.openstack.org/#/c/218701/ and
maybe the
dependent python-troveclient change would need to be
backported to
stable/kilo (neither are clean backports), or we can just
skip the test
on stable/kilo if there is a good reason why it won't work.


I also see that the unit test job on stable/kilo is pulling in trunk
python-troveclient:


http://logs.openstack.org/81/206681/3/check/gate-trove-python27/45d645d/console.html#_2015-11-17_17_07_28_393


Even though we have troveclient capped at 1.1.0 in kilo g-r:


https://github.com/openstack/requirements/blob/stable/kilo/global-requirements.txt#L136


So how is that happening?

Oh, because of this:


https://github.com/openstack/trove/blob/stable/kilo/test-requirements.txt#L17


And that's terrible... why are we doing that?


Attempting to fix here: https://review.openstack.org/#/c/246735/


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] [nova] default-next-release opt flag

2015-11-18 Thread Sean Dague
On 11/17/2015 05:31 PM, Matt Riedemann wrote:
> 
> 
> On 11/17/2015 2:05 PM, Sylvain Bauza wrote:
>>
>>
>> Le 17/11/2015 20:25, Sean Dague a écrit :
>>> On 11/17/2015 01:48 PM, Matt Riedemann wrote:

 On 11/17/2015 11:28 AM, Alexis Lee wrote:
> Often in Nova we introduce an option defaulted off (so as not to break
> people) but then we want to make it default in the next release.
>
> Someone suggested an opt flag to mark this but I don't know what
> impact
> they wanted it to have. IE how the user should be alerted about the
> presence of these flagged options.
>
> If you are that person, or have opinions on this, please reply :)
>
>
> Alexis (lxsli)
>
 There is the deprecated_for_removal kwarg, but that doesn't fit here.
 There is the DeprecatedOpt, but that's for moving/renaming options. So
 this is something else, like deprecated_default or
 pending_default_change or something.
>>> Honestly, with reno now we could probably just add a release note in
>>> when we add it. That's more likely for us to not lose a thing like
>>> that.
>>>
>>> -Sean
>>>
>>
>> Agreed, it's now far easier to ask for having a release note within the
>> change, so the operators can just look at that. It also seems to me far
>> better for them to check the release notes rather than trying to see the
>> huge nova.conf file...
>>
>> -Sylvain
>>
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> Sure, a release note is justified, just like when we deprecate or rename
> an option.
> 
> The thing I like about the deprecated_for_removal kwarg is the warning
> that gets logged when you are still using the thing. I'm sure people see
> release notes for deprecated things and say, I'll add a TODO to clean
> this up in our tooling, but then get busy and forget about it until they
> break. The annoying warning is a constant indicator that this is
> something you need to move off of sooner rather than later.

Yes, I like also using the deprecated_for_removal flags, and I don't
want to change that.

This is just for the case of "we're going to change the default on the
next release". I think in that case, a release note is the right thing.
We don't need new fancy code for that.
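For context, the runtime nagging Matt describes can be sketched in plain stdlib terms. This mimics, but is not, oslo.config's deprecated_for_removal machinery, and the option name is made up:

```python
import warnings

# A deprecated option nags on every read, unlike a release note that is
# read once at upgrade time -- that is the behaviour worth keeping.
DEPRECATED_OPTS = {"old_style_opt"}  # hypothetical option name

def get_opt(conf, name):
    if name in DEPRECATED_OPTS:
        warnings.warn("option %s is deprecated for removal" % name,
                      DeprecationWarning)
    return conf[name]

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    value = get_opt({"old_style_opt": True}, "old_style_opt")
print(value, len(caught))  # True 1 -- one warning per read
```

The "default will change next release" case carries no such per-read signal, which is why a release note alone covers it.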

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] [nova] default-next-release opt flag

2015-11-18 Thread Sylvain Bauza



Le 18/11/2015 16:34, Sean Dague a écrit :

On 11/17/2015 05:31 PM, Matt Riedemann wrote:


On 11/17/2015 2:05 PM, Sylvain Bauza wrote:


Le 17/11/2015 20:25, Sean Dague a écrit :

On 11/17/2015 01:48 PM, Matt Riedemann wrote:

On 11/17/2015 11:28 AM, Alexis Lee wrote:

Often in Nova we introduce an option defaulted off (so as not to break
people) but then we want to make it default in the next release.

Someone suggested an opt flag to mark this but I don't know what
impact
they wanted it to have. IE how the user should be alerted about the
presence of these flagged options.

If you are that person, or have opinions on this, please reply :)


Alexis (lxsli)


There is the deprecated_for_removal kwarg, but that doesn't fit here.
There is the DeprecatedOpt, but that's for moving/renaming options. So
this is something else, like deprecated_default or
pending_default_change or something.

Honestly, with reno now we could probably just add a release note in
when we add it. That's more likely for us to not lose a thing like
that.

 -Sean


Agreed, it's now far easier to ask for having a release note within the
change, so the operators can just look at that. It also seems to me far
better for them to check the release notes rather than trying to see the
huge nova.conf file...

-Sylvain


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Sure, a release note is justified, just like when we deprecate or rename
an option.

The thing I like about the deprecated_for_removal kwarg is the warning
that gets logged when you are still using the thing. I'm sure people see
release notes for deprecated things and say, I'll add a TODO to clean
this up in our tooling, but then get busy and forget about it until they
break. The annoying warning is a constant indicator that this is
something you need to move off of sooner rather than later.

Yes, I like also using the deprecated_for_removal flags, and I don't
want to change that.

This is just for the case of "we're going to change the default on the
next release". I think in that case, a release note is the right thing.
We don't need new fancy code for that.


I usually hate saying just +1 in an email, but...

+1

-Sean



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer][libvirt] Libvirt error during instance disk allocation metering

2015-11-18 Thread Ilya Tyaptin
Hi, folks!

In our deployed environments we hit the libvirt error *"missing storage
backend for network files using rbd protocol"* in the
*virDomainGetBlockInfo* call [1].
This exception is raised when Ceilometer tries to get info about VM disk
usage and allocation.
It only affects the disk pollsters added in CR [2] that use the libvirt
call at [3].
These pollsters were added in the Kilo cycle and worked in Kilo
deployments, but they don't work now.

Also, we have a bug in the upstream Launchpad [4], but it has not been
fixed yet.

I would be glad to see any ideas about the root cause of this issue or
ways to fix it.

Thank you in advance!

References:
[1] Traceback 


2015-11-17 16:20:54.807 14107 ERROR ceilometer.compute.pollsters.disk Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/ceilometer/compute/pollsters/disk.py", line 703, in get_samples
    instance,
  File "/usr/lib/python2.7/dist-packages/ceilometer/compute/pollsters/disk.py", line 672, in _populate_cache
    for disk, info in disk_info:
  File "/usr/lib/python2.7/dist-packages/ceilometer/compute/virt/libvirt/inspector.py", line 215, in inspect_disk_info
    block_info = domain.blockInfo(device)
  File "/usr/lib/python2.7/dist-packages/libvirt.py", line 658, in blockInfo
    if ret is None: raise libvirtError('virDomainGetBlockInfo() failed', dom=self)
libvirtError: internal error: missing storage backend for network files using rbd protocol

[2] CR with this commit:
https://review.openstack.org/#/c/145819/23/ceilometer/compute/virt/libvirt/inspector.py,cm

[3] Code entry:
https://github.com/openstack/ceilometer/blob/stable/liberty/ceilometer/compute/virt/libvirt/inspector.py#L215
[4] Upstream bug: https://bugs.launchpad.net/ceilometer/+bug/1457440
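One possible mitigation while the root cause is investigated is to fail per-disk rather than per-instance, so one rbd-backed device does not kill the whole sample set. The sketch below is an assumption about how the polling loop could be hardened, not ceilometer's actual code; the fake error class stands in for libvirt.libvirtError:

```python
class FakeLibvirtError(Exception):
    """Stand-in for libvirt.libvirtError in this self-contained sketch."""

def block_info(device):
    # Simulates domain.blockInfo(): the rbd-backed device raises, as in [1].
    if device == "rbd-disk":
        raise FakeLibvirtError("missing storage backend for network files "
                               "using rbd protocol")
    return {"capacity": 1024}

def inspect_disks(devices):
    results = {}
    for dev in devices:
        try:
            results[dev] = block_info(dev)
        except FakeLibvirtError:
            # Log and continue; the other disks still produce samples.
            continue
    return results

print(inspect_disks(["vda", "rbd-disk"]))  # {'vda': {'capacity': 1024}}
```

This does not fix the libvirt-side error, but it limits its blast radius to the affected device.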


Best regards,

Tyaptin Ilya,

Ceilometer developer,

Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] trove unit tests failing on stable/kilo

2015-11-18 Thread Robert Collins
On 18 November 2015 at 16:27, Matt Riedemann  wrote:
>
>
> On 11/17/2015 9:22 PM, Matt Riedemann wrote:
>>
>> I noticed this failing today:
>>
>>
>> http://logs.openstack.org/81/206681/3/check/gate-trove-python27/45d645d/console.html#_2015-11-17_17_07_28_061
>>
>>
>> Looks like https://review.openstack.org/#/c/218701/ and maybe the
>> dependent python-troveclient change would need to be backported to
>> stable/kilo (neither are clean backports), or we can just skip the test
>> on stable/kilo if there is a good reason why it won't work.
>>
>
> I also see that the unit test job on stable/kilo is pulling in trunk
> python-troveclient:
>
> http://logs.openstack.org/81/206681/3/check/gate-trove-python27/45d645d/console.html#_2015-11-17_17_07_28_393
>
> Even though we have troveclient capped at 1.1.0 in kilo g-r:
>
> https://github.com/openstack/requirements/blob/stable/kilo/global-requirements.txt#L136
>
> So how is that happening?
>
> Oh, because of this:
>
> https://github.com/openstack/trove/blob/stable/kilo/test-requirements.txt#L17
>
> And that's terrible... why are we doing that?

I know you know this, but yeah - don't do that :).

FWIW constraints *will* override that, but only in liberty and above.

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Mesos Conductor

2015-11-18 Thread bharath thiruveedula
Hi all,

I am working on the blueprint [1]. As per my understanding, we have two
resources/objects in mesos+marathon:

1) Apps: a combination of instances/containers running on multiple hosts,
representing a service. [2]
2) Application Groups: a group of apps; for example, a database application
group consisting of a mongoDB app and a MySQL app. [3]

So I think we need two resources, 'apps' and 'appgroups', in the mesos
conductor, like we have pod and rc for k8s. Regarding the 'magnum
container' command, we can create, delete and retrieve container details as
part of the mesos app itself (container = app with 1 instance), though I
think in the mesos case 'magnum app-create ...' and 'magnum
container-create ...' will use the same REST API.

Let me know your opinion/comments on this and correct me if I am wrong.

[1] https://blueprints.launchpad.net/magnum/+spec/mesos-conductor
[2] https://mesosphere.github.io/marathon/docs/application-basics.html
[3] https://mesosphere.github.io/marathon/docs/application-groups.html

Regards,
Bharath T
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
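For reference, the Marathon "app" and "group" objects discussed in the message above look roughly like the following, shown here as Python dicts. The field names follow the Marathon docs referenced as [2] and [3]; the values are made up for illustration:

```python
# A minimal Marathon application definition.
app = {
    "id": "/database/mongodb",   # nested id places the app in a group
    "cmd": "mongod --bind_ip 0.0.0.0",
    "instances": 1,              # container == app with 1 instance, per the proposal
    "cpus": 0.5,
    "mem": 512,
}

# An application group wrapping one or more apps.
group = {"id": "/database", "apps": [app]}

print(group["apps"][0]["instances"])  # 1
```

Mapping these two shapes onto 'apps'/'appgroups' conductor resources would mirror how pod and rc map onto the k8s conductor.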


Re: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken

2015-11-18 Thread Matt Riedemann



On 11/18/2015 7:32 AM, Andrew Laski wrote:

On 11/17/15 at 07:43pm, Jay Pipes wrote:

On 11/17/2015 05:43 PM, Matt Riedemann wrote:

I found some time to work on a reverse sort of nova's tables for the db
archive command, that looks like [1].  It works fine in the unit tests,
but fails because the deleted instances are referenced by
instance_actions that aren't deleted. I don't see any DB APIs for deleting
instance actions.

Were we just planning on instance_actions living forever in the
database?


Not as far as I understand.


They were never intended to live forever.  However there is a use case
for holding on to the deleted action so that someone could query when or
by whom their instance was deleted.  But the current API does not
provide a good way to query for that so this may be something better
left to the growing list of things that Tasks could address.




Should we soft delete instance_actions when we delete the referenced
instance?


No.


Or should we (hard) delete instance_actions when we archive (move to
shadow tables) soft deleted instances?


Yes.

Best,
-jay


This is going to be a blocker to getting nova-manage db
archive_deleted_rows working.

[1] https://review.openstack.org/#/c/246635/



__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



OK, so we talked a bit about this in IRC today.  I'll try to summarize 
here.  There are really two options it sounds like, either soft delete 
instance_actions when we soft delete instances, or just hard delete the 
instance_actions when archiving/purging soft deleted instances (to avoid 
the referential constraint failure when hard deleting the instances).


We have to note that the os-instance-actions API only works on instances 
that are not deleted [1]. So once you delete an instance (soft deleted 
in this case, i.e. instances.deleted != 0 in the DB), then the 
os-instance-actions API is useless. We could change that with a 
microversion to read deleted instances...but I'm not sure if anyone 
wants that (operators might?).


Even if we start soft deleting instance_actions, the archive/purge code 
will still have to account for existing resources in the database and 
handle them, i.e. when archiving an instance, find it's related 
instance_actions and if they are not soft deleted, soft delete and 
archive them first before archiving the instance. We could probably also 
provide a DB migration to do this, but that would be a lot of queries 
and updates potentially during an offline upgrade. We could also provide 
a nova-manage command to do it explicitly.


So hard-deleting instance_actions when archiving/purging instances is 
easier from a development perspective; it's much more straightforward 
(fewer bugs, yay!).  The downside to hard delete is they are gone, they 
wouldn't be in shadow tables or anything in the nova database.
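The ordering problem behind all of this can be reproduced in miniature. The schema below is illustrative, not nova's actual DDL, but it shows why hard-deleting a soft-deleted instance fails while its instance_actions rows still reference it, and why deleting the children first unblocks the archive:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE instances (uuid TEXT PRIMARY KEY, deleted INT)")
conn.execute("CREATE TABLE instance_actions (id INTEGER PRIMARY KEY, "
             "instance_uuid TEXT REFERENCES instances(uuid))")
conn.execute("INSERT INTO instances VALUES ('abc', 1)")  # soft-deleted instance
conn.execute("INSERT INTO instance_actions VALUES (1, 'abc')")

# Archiving the instance directly hits the referential constraint:
try:
    conn.execute("DELETE FROM instances WHERE deleted != 0")
except sqlite3.IntegrityError as e:
    print("blocked:", e)  # FOREIGN KEY constraint failed

# Hard-deleting the actions first, as proposed, lets the archive proceed:
conn.execute("DELETE FROM instance_actions WHERE instance_uuid IN "
             "(SELECT uuid FROM instances WHERE deleted != 0)")
conn.execute("DELETE FROM instances WHERE deleted != 0")
print(conn.execute("SELECT COUNT(*) FROM instances").fetchone()[0])  # 0
```

Whatever option is chosen, the child rows have to be dealt with before (or together with) the parent row.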


The proposed remedy for wanting to store instance_actions after 
hard-delete is to basically have a monitoring service setup that is 
storing the instance delete notifications, i.e. StackTach, gnocchi, 
probably others (monasca?). This is kind of a cop out for the nova dev 
team since it puts the burden on the operators to have this extra 
service running, but I suspect most already have something like this 
running anyway.


I'll cross post this to the operators ML and have it on this week's nova 
meeting agenda to see if we can find some agreement on a path forward here.


[1] 
https://github.com/openstack/nova/blob/12.0.0/nova/api/openstack/compute/instance_actions.py#L68


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] trove unit tests failing on stable/kilo

2015-11-18 Thread Matt Riedemann



On 11/18/2015 9:23 AM, Matt Riedemann wrote:



On 11/17/2015 10:37 PM, Nikhil Manchanda wrote:

Thanks for putting up that fix Matt.

The dependency on trunk python-troveclient (for stable/kilo) definitely
seems screwy -- but I'm wondering if this was done for backwards
compatibility reasons? (i.e. to ensure the latest version of
python-troveclient should be able to work correctly against all previous
releases of trove.)


If that was the plan, https://review.openstack.org/#/c/210004/ totally
blows that up since it's a backward incompatible change and was released
in a minor version rather than a major version.  That change is really
what's breaking stable/kilo trove unit tests.



Either way, I think we should be honoring the requirements specified for
the respective
releases in g-r, so I think that this is the right fix.

Cheers,
Nikhil



On Tue, Nov 17, 2015 at 7:42 PM, Matt Riedemann
> wrote:



On 11/17/2015 9:27 PM, Matt Riedemann wrote:



On 11/17/2015 9:22 PM, Matt Riedemann wrote:

I noticed this failing today:


http://logs.openstack.org/81/206681/3/check/gate-trove-python27/45d645d/console.html#_2015-11-17_17_07_28_061




Looks like https://review.openstack.org/#/c/218701/ and
maybe the
dependent python-troveclient change would need to be
backported to
stable/kilo (neither are clean backports), or we can just
skip the test
on stable/kilo if there is a good reason why it won't work.


I also see that the unit test job on stable/kilo is pulling in
trunk
python-troveclient:


http://logs.openstack.org/81/206681/3/check/gate-trove-python27/45d645d/console.html#_2015-11-17_17_07_28_393



Even though we have troveclient capped at 1.1.0 in kilo g-r:


https://github.com/openstack/requirements/blob/stable/kilo/global-requirements.txt#L136



So how is that happening?

Oh, because of this:


https://github.com/openstack/trove/blob/stable/kilo/test-requirements.txt#L17



And that's terrible... why are we doing that?


Attempting to fix here: https://review.openstack.org/#/c/246735/


--

Thanks,

Matt Riedemann



__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe


http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Getting back to root causes, I discussed with a couple of people in IRC 
and wanted to take notes here.


The root issue was the backward incompatible troveclient change:

https://review.openstack.org/#/c/210004/

That was released in 1.3.0 and 1.4.0.  A server side change was made in 
liberty that requires that:


https://review.openstack.org/#/c/218701/

The troveclient change is breaking stable/kilo since the server side 
change isn't in stable/kilo. We could backport that, but given 
global-requirements on troveclient in stable/kilo, it's technically invalid:


https://github.com/openstack/requirements/blob/stable/kilo/global-requirements.txt#L136

python-troveclient>=1.0.7,<1.1.0
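The sketch below (a hand-rolled version check, purely illustrative) shows why the trunk releases escape that cap: 1.3.0 and 1.4.0 simply fall outside the `>=1.0.7,<1.1.0` window, but installing from git skips pip's version resolution entirely, so the bound is never applied:

```python
def parse(version):
    # Naive dotted-version parser, sufficient for x.y.z release numbers.
    return tuple(int(part) for part in version.split("."))

def satisfies_kilo_cap(version):
    # Mirrors python-troveclient>=1.0.7,<1.1.0 from kilo global-requirements.
    return parse("1.0.7") <= parse(version) < parse("1.1.0")

print(satisfies_kilo_cap("1.0.9"))  # True: a release pip would allow on kilo
print(satisfies_kilo_cap("1.4.0"))  # False: the trunk release that broke the tests
```

That is the core of the problem with test-requirements pulling the client from git: the gate tests a combination that the stated requirements forbid.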

Since it's unit tests only and stable/kilo trove is testing against 
trunk troveclient, maybe we don't care - we just hack the fix and go 
about our merry way.


I have little stake in trove as a project so it's ultimately up to the 
project drivers.


The right thing to do, IMO, is revert that backward incompatible 
troveclient change, release that as 1.4.1, restore the change and then 
release that as 2.0. We'd also blacklist 1.3.0 and 1.4.0 in 
global-requirements.


Unit tests on trove master and stable/liberty would break once the 
revert on troveclient landed because the trove unit tests require that 
code in troveclient, but that'd be fixed once you revert the revert 
(since the trove unit tests run trunk troveclient, not from released 
versions). This could be short term pain though and it's controllable 
within the trove core team.


I think long-term trove should not be unit testing against trunk 
troveclient, since it's a false sense of functionality as we've seen 
here. Trove should really be requiring the same versions of troveclient 
that are specified in global-requirements. Doing that would make this 
unit test thing a bit messier though, but not unmanageable.


So, I guess the question is, what does the trove team want to do here?

--

Thanks,

Matt Riedemann



Re: [openstack-dev] [QA][Tempest] Deprecate or drop about Tempest CLI

2015-11-18 Thread David_Paterson
Normally I would say keep the original CLIs for a cycle, but that also makes
it more difficult to migrate them to using cliff. So I would vote for removing
the old entry points. Reworking any workflows consuming the old entry points
to use the new ones should be pretty trivial.

Thanks,
dp

From: Masayuki Igawa [mailto:masayuki.ig...@gmail.com]
Sent: Monday, November 16, 2015 8:40 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [QA][Tempest] Deprecate or drop about Tempest CLI

Hi tempest CLI users and developers,

Now, we are improving Tempest CLI as we discussed at the summit[1] and 
migrating legacy commands to tempest cli[2].

In this situation, my concern is 'CLI compatibility'.
If we'll drop old CLIs support(like my patch), it might break existing 
workflows.

So I think there are two options.
 1. Deprecate old tempest CLIs in this Mitaka cycle and we'll drop them at the 
beginning of the N cycle.
 2. Drop old tempest CLIs in this Mitaka cycle when new CLIs will be 
implemented.
# Actually, I'd like to just drop old CLIs. :)
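If option 1 were chosen, the old console scripts could become thin deprecation shims for one cycle. The sketch below is an assumption about how such a shim could look (the command names and functions are made up, not tempest's real entry points):

```python
import argparse
import warnings

def new_main(argv):
    # Stand-in for the new `tempest <subcommand>` CLI.
    parser = argparse.ArgumentParser(prog="tempest")
    parser.add_argument("subcommand")
    args = parser.parse_args(argv)
    return "running %s" % args.subcommand

def old_cleanup_entry_point():
    # Old console script: warn loudly, then delegate to the new CLI so
    # existing workflows keep working through the deprecation cycle.
    warnings.warn("'tempest-cleanup' is deprecated; use 'tempest cleanup'",
                  DeprecationWarning)
    return new_main(["cleanup"])

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_cleanup_entry_point()
print(result)  # running cleanup
```

Shims like this make option 1 cheap to maintain, which may weaken the argument for dropping the old CLIs immediately.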

If you have question and/or opinion, please let me know.

[1] https://etherpad.openstack.org/p/tempest-cli-improvements
[2] https://review.openstack.org/#/c/240399/

Best Regards,
-- Masayuki Igawa

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][neutron] How we handle Kilo backports

2015-11-18 Thread Assaf Muller
On Wed, Nov 18, 2015 at 12:38 PM, Carl Baldwin  wrote:
> On Wed, Nov 18, 2015 at 9:44 AM, Ihar Hrachyshka  wrote:
>> Hi all,
>>
>> as per [1] I imply that all projects under stable-maint-core team
>> supervision must abide the stable policy [2] which limits the types of
>> backports for N-2 branches (now it’s stable/kilo) to "Only critical bugfixes
>> and security patches”. With that, I remind all stable core members about the
>> rule.
>>
>> Since we are limited to ‘critical bugfixes’ only, and since there is no
>> clear definition of what ‘critical’ means, I guess we should define it for
>> ourselves.
>>
>> In Neutron world, we usually use Critical importance for those bugs that
>> break gate. High is used for those bugs that have high impact production
>> wise. With that in mind, I suggest we define ‘critical’ bugfixes as Critical
>> + High in LP. Comments on that?
>
> I was wondering about this today too.  Ihar is correct about how we
> use Critical importance in launchpad for Neutron bugs.  The number of
> Critical neutron bugs is very small and most of them are not relevant
> to stable releases because they are targeted at gate breakage incurred
> by new development in master.
>
> I'll +1 that we should extend this to Critical + High in launchpad.
> Otherwise, we would severely limit our ability to backport important
> bug fixes to a stable release that is only 6 months old and many
> deployers are only beginning to turn their attention to it.

+1

In many ways stable/kilo is more important than stable/liberty today.
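The rule being proposed here reduces to a simple predicate; a toy sketch (all names invented for illustration — this is not any real stable-maint tooling):

```python
# Illustrative sketch of the proposed Neutron stable-backport rule:
# for an N-2 branch (stable/kilo at the time of writing), only bugs
# marked Critical or High in Launchpad would qualify; younger stable
# branches accept ordinary bugfixes under the normal policy.
def backport_allowed(lp_importance: str, branch_is_n_minus_2: bool) -> bool:
    if not branch_is_n_minus_2:
        # e.g. stable/liberty: the regular stable policy applies
        return True
    return lp_importance in ("Critical", "High")

print(backport_allowed("High", branch_is_n_minus_2=True))    # qualifies for kilo
print(backport_allowed("Medium", branch_is_n_minus_2=True))  # rejected under the proposal
```

(Security patches would remain backportable regardless of LP importance, per the policy text quoted above.)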

>
>> (My understanding is that we can also advocate for the change in the global
>> policy if we think the ‘critical only’ rule should be relaxed, but till then
>> it makes sense to stick to what policy says.)
>
> +1
>
> Carl
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][stable] nominating lin hua cheng for keystone-stable-maint

2015-11-18 Thread Dolph Mathews
+1

On Tue, Nov 17, 2015 at 5:24 PM, Steve Martinelli 
wrote:

> I'd like to nominate Lin Hua Cheng for keystone-stable-maint. He has been
> doing reviews on keystone's liberty and kilo stable branches since mitaka
> development has opened, and being a member of horizon-stable-maint, he is
> already familiar with stable branch policies. If there are no objections
> from the current stable-maint-core and keystone-stable-maint teams, then
> I'd like to add him.
>
> Thanks,
>
> Steve Martinelli
> Keystone Project Team Lead
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Use of self signed certs in endpoints

2015-11-18 Thread Xav Paice
Setting an env var seems like a very straightforward way to do this, and
means the deployer can easily control the specifics of what they want
without any code changes - that suits me perfectly.  Adding some
documentation somewhere to that effect might be handy but this is indeed a
bit of an edge case if the distro packages already patch requests to
override the default anyway.  I only tripped over this when I started using
virtual environments and pip, and wasn't expecting the distro package to
alter the behaviour of the library it ships.

Thanks for the feedback and discussion, it's been really helpful.
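For reference, the env-var override being discussed works roughly like this — a simplified sketch of how requests picks its CA bundle, not the library's actual code; the path shown is the Debian/Ubuntu system bundle:

```python
import os

# Deployers can point requests at the system CA store instead of the
# PEM file bundled with the library by exporting REQUESTS_CA_BUNDLE
# before starting the service.
os.environ["REQUESTS_CA_BUNDLE"] = "/etc/ssl/certs/ca-certificates.crt"

def effective_ca_bundle(explicit_verify=None):
    """Approximate precedence requests applies when choosing a CA bundle:
    an explicit verify= argument wins, then REQUESTS_CA_BUNDLE, then
    CURL_CA_BUNDLE, and finally the PEM shipped with the library."""
    if explicit_verify is not None:
        return explicit_verify
    return (os.environ.get("REQUESTS_CA_BUNDLE")
            or os.environ.get("CURL_CA_BUNDLE")
            or "<bundled certifi PEM>")

print(effective_ca_bundle())  # /etc/ssl/certs/ca-certificates.crt
```

This also shows why distro-patched requests and a clean virtualenv behave differently: the virtualenv falls through to the bundled PEM unless the env var is set.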

On 17 November 2015 at 23:30, Cory Benfield  wrote:

>
> > On 16 Nov 2015, at 11:54, Sean Dague  wrote:
> > That sounds pretty reasonable to me. I definitely support the idea that
> > we should be using system CA by default, even if that means overriding
> > requests in our tools.
>
> Setting REQUESTS_CA_BUNDLE is absolutely the way to go about this. In
> requests 2.9.0 we will also support the case that REQUESTS_CA_BUNDLE points
> to a directory of certificates, not a single certificate file, so this
> should cover all Linux distributions methods of distributing
> OpenSSL-compatible certificates.
>
> If OpenStack wants to support using Windows and OS X built-in certificate
> stores, that's harder. This is because both systems do not use PEM-file
> based certificate distribution, which means OpenSSL can’t read them.
>
> Cory
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Attach/detach semantics

2015-11-18 Thread John Spray
On Wed, Nov 18, 2015 at 4:33 AM, Ben Swartzlander  wrote:
> On 11/17/2015 10:02 AM, John Spray wrote:
>>
>> Hi all,
>>
>> As you may know, there is ongoing work on a spec for Nova to define an
>> "attach/detach" API for tighter integration with Manila.
>>
>> The concept here is that this mechanism will be needed to implement
>> hypervisor mediated FS access using vsock, but that the mechanism
>> should also be applicable more generally to an "attach" concept for
>> filesystems accessed over IP networks (like existing NFS filers).
>>
>> In the hypervisor-mediated case, attach would involve the hypervisor
>> host connecting as a filesystem client and then re-exporting to the
>> guest via a local address.  We think this would apply to
>> driver_handles_share_servers type drivers that support share networks,
>> by mapping the attach/detach share API to attaching/detaching the
>> share network from the guest VM.
>>
>> Does that make sense to people maintaining this type of driver?  For
>> example, for the netapp and generic drivers, is it reasonable to
>> expose nova attach/detach APIs that attach and detach the associated
>> share network?
>
>
> I'm not sure this proposal makes sense. I would like the share attach/detach
> semantics to be the same for all types of shares, regardless of the driver
> type.
>
> The main challenge with attaching to shares on share servers (with share
> networks) is that there may not exist a network route from the hypervisor to
> the share server, because share servers are only required to be accessible
> from the share network from which they are created. This has been a known
> problem since Liberty because this behaviour prevents migration from
> working, therefore we're proposing a mechanism for share-server drivers to
> provide admin-network-facing interfaces for all share servers. This same
> mechanism should be usable by the Nova when doing share attach/detach. Nova
> would just need to list the export locations using an admin-context to see
> the admin-facing export location that it should use.

For these drivers, we're not proposing connecting to them from the
hypervisor -- we would still be connecting directly from the guest via
the share network.

The change would be from the existing workflow:
 * Create share
 * Attach guest network to guest VM (need to look up network info,
talk to neutron API)
 * Add IP access permission for the guest to access the share (need to
know IP of the guest)
 * Mount from guest VM

To a new workflow:
 * Create share
 * Attach share to guest (knowing only share ID and guest instance ID)
 * Mount from guest VM

The idea is to abstract the networking part away, so that the user
just has to say "I want to be able to mount share X from guest Y",
without knowing about the networking stuff going on under the hood.
While this is partly because it's slicker, this is mainly so that
applications can use IP-networking shares interchangeably with future
hypervisor mediated shares: they call "attach" and don't have to worry
about whether that's a share network operation under the hood or a
hypervisor-twiddling operation under the hood.
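A toy model of that abstraction (all names hypothetical — this is the shape of the idea, not the proposed Nova/Manila API): the caller names only the share and the guest, and the mechanism is a driver detail behind one "attach" call.

```python
class IPNetworkAttachment:
    """Share-network drivers: attach the share network to the guest,
    grant IP access, and let the guest mount directly."""
    def attach(self, share_id, instance_id):
        return f"attached share network of {share_id} to {instance_id}"

class HypervisorMediatedAttachment:
    """vsock drivers: the hypervisor host mounts the share and
    re-exports it to the guest over a local address."""
    def attach(self, share_id, instance_id):
        return f"re-exported {share_id} to {instance_id} via hypervisor"

def attach_share(share_id, instance_id, driver):
    # The user says "I want to mount share X from guest Y"; which
    # mechanism runs under the hood is hidden behind the one API.
    return driver.attach(share_id, instance_id)

print(attach_share("share-X", "guest-Y", IPNetworkAttachment()))
```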

John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][neutron] upper constraints for stable/liberty

2015-11-18 Thread Ihar Hrachyshka

Robert Collins  wrote:


On 14 November 2015 at 02:53, Ihar Hrachyshka  wrote:

Hi Sachi and all,

I was recently looking into how stable/liberty branches are set for neutron
in terms of requirements caps, and I realized that we have neither
version caps nor upper constraints applied to unit test jobs in the
stable/liberty gate. We have -constraints targets defined in tox.ini, but
they are not running in gate.
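For context, a -constraints tox target of the kind being discussed typically pins installs against the requirements repo's upper-constraints file, along these lines (an illustrative fragment, not neutron's exact tox.ini):

```ini
[testenv:py27-constraints]
# Pin every installed dependency to the coordinated upper-constraints
# list so a random library release cannot break the job.
install_command =
    pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
commands = python setup.py test --slowest --testr-args='{posargs}'
```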

I believe this situation leaves us open to breakages by any random library
releases out there. Am I right? If so, I would like to close the breakage
vector for projects I care (all neutron stadium).

I suggest we do the following:

- unless there is some specific reason for that, stop running unconstrained
jobs in neutron/master;


Sachi King is working up a bit of data mining to confirm that the
constraints jobs only fail when unconstrained jobs fail - then
we're going to propose the change to project-config to switch around
which jobs vote.



From what I saw in neutron, it never fails unless there is an actual
constraint that wasn't bumped.


- enable voting for constraints jobs in neutron/liberty; once proved to work
fine, stop running unconstrained jobs in neutron/liberty;


I expect the same query can answer this as well.

- for neutron-*aas, introduce constraints targets in tox.ini, enable jobs in
gate; make them vote there/remove old jobs;
- after that, backport constraints targets to stable/liberty; make them vote
there/remove old jobs.


We're going to advocate widespread adoption once the neutron master
ones are voting.


Does the plan make sense?


Totally :) As non-Neutron-contributors we've just been conservative in
our recommendations; if Neutron wants to move a little faster by
taking on a little risk, that's *totally cool* IMO.


I believe there is general understanding that it’s the way to go, and we
were already happy to be guinea pigs for initial data mining, so I don’t
expect problems getting the core team onboard.


My question was more about what we do with stable/liberty branches. Is it
part of the plan that we backport the constraint jobs there?


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible] Documentation update planning

2015-11-18 Thread Jesse Pretorius
Hi everyone,

In the community meeting [1] this week we'll be having a facilitated
discussion with regards to documentation improvements for OpenStack-Ansible.

If you have any feedback you'd like considered, would like to be part of
the conversation, or are keen to contribute then please add to the etherpad
[2], respond to this email and ideally be in the meeting. :)

[1]
https://wiki.openstack.org/wiki/Meetings/openstack-ansible#Agenda_for_next_meeting
[2] https://etherpad.openstack.org/p/oa-install-docs

-- 
Jesse Pretorius
IRC: odyssey4me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] HMT/Reseller - The Proposed Plan

2015-11-18 Thread Henry Nash
Hi

During our IRC meeting this week, we decided we needed a recap of this plan, so 
here goes:

Phase 0 (already merged in Liberty):

We already support a hierarchy of projects (using the parent_id attribute of 
the project entity). All the projects in a tree must be in the same domain. 
Role assignment inheritance is supported down the tree (either by assigning the 
role to the domain and having it inherited by the whole tree, or by assigning to 
a node in the project tree and having that assignment inherited by the sub-tree 
below that node).

Phase 1 (all code up for review for Mitaka):

Keep the existing conceptual model for domains, but store them as projects with 
a new attribute (is_domain=True).  The domain API remains (but accesses the 
project table instead), but you can also use the project API to create a domain 
(by setting is_domain=True). The is_domain attribute is immutable - i.e. you 
can’t flip a project between being a regular project and one acting as a 
domain. Projects acting as a domain have no parent (they are always top level 
domains/projects). Domain tokens can be deprecated in favor of a project token 
scoped to a project acting as a domain (the token is augmented with the 
is_domain attribute so a policy rule can distinguish between a token on a 
domain and a regular project). This phase does not provide any support for 
resellers.

Phase 1 is covered by the two approved specs: HMT 
(https://review.openstack.org/#/c/139824, which actually covers Phases 1 and 2) 
and is_domain token (https://review.openstack.org/#/c/193543/).

Phase 2 (earlier versions of code were proposed for Liberty, need fixing up for 
Mitaka):

At the summit we agreed to re-examine Phase 2 to see if we could perhaps use 
federation instead to cover this use case. As outlined in my email to the list 
(http://lists.openstack.org/pipermail/openstack-dev/2015-October/078063.html), 
this does not provide the support required. Hence, as per that email, I am 
proposing we revert the original specification (with restrictions), which is as 
follows:

Extend the concept of domains to allow a hierarchy of domains to support the 
reseller model (the requirements and specifications are in the same approved 
spec that covers Phase 1 above, https://review.openstack.org/#/c/139824). 
Given that we would already have 
Phase 1 in place, the actual changes in Phase 2 would be as follows:

a) Since projects already have a parent_id attribute and domains are 
represented as projects with the is_domain attribute set to True, allow 
projects acting as domains to be nested (like regular projects).
b) The parent of a project acting as a domain must either be another project 
acting as a domain or None (i.e. a root domain). A regular project cannot act 
as a parent for a project acting as a domain. In effect, we allow a hierarchy 
of domains at the top of “the tree” and all regular project trees hang off 
those higher level domains.
c) Projects acting as domains cannot be the recipient of an inherited role 
assignment from their parent - i.e. we don’t inherit assignments between 
domains.
d) All domain names (i.e. project names for projects acting as domains) must 
still be unique (otherwise we break our current auth model)
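Rules (a)–(d) amount to a small validity check on the project tree. A sketch of the nesting rule in (b) (hypothetical names — not keystone's actual code):

```python
def validate_parent(child_is_domain, parent):
    """Enforce the Phase 2 nesting rules:
    - a project acting as a domain may sit under another project acting
      as a domain, or have no parent at all (a root domain);
    - a regular project can never be the parent of a project acting as
      a domain."""
    if not child_is_domain:
        return True  # regular projects keep their existing rules
    if parent is None:
        return True  # a root domain
    return parent.get("is_domain", False)

assert validate_parent(True, None)                      # root domain: ok
assert validate_parent(True, {"is_domain": True})       # domain under domain: ok
assert not validate_parent(True, {"is_domain": False})  # domain under regular project: rejected
```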

Comments on the issues that people have raised with Phase 2:

i) It’s too complex!
Well, in terms of code changes, the majority of the changes are actually in 
Phase 0 and 1. We just use the underlying capabilities in Phase 2.

ii) Our restriction of “all domain names must be unique” means that one 
reseller could “squat” on a domain name and prevent anyone else from having a 
domain of that name.
This is absolutely true. In reality, however, reseller arrangements will be 
contractual, so if a cloud provider was really concerned about this, they could 
legislate against this in the contract with the reseller (e.g. "All the domains 
you create must be suffixed with your reseller name”). In addition, if we 
believe that federation will become the dominant auth model, then the physical 
name of the domain becomes less important.

iii) What’s this about name clashing?
Ok, so there are two different scenarios here:
> Since, today, a project can have the same name as its domain, this means 
> that when we convert to using projects acting as a domain, the same thing can 
> be true (so this is really a consequence of Phase 1). Although we could, in 
> theory, prevent this for any new domains being created, migrating over the 
> existing domains could always create this situation. However, it doesn’t 
> actually cause any issue to existing APIs, so I don’t think it’s anything to 
> be concerned about.
> In Phase 2, you could have the situation where a project acting as a domain 
> has child projects some of which are 

Re: [openstack-dev] [networking-ovs-dpdk]

2015-11-18 Thread James Page
Hi All

Reading through this thread I thought participants might be interested to
know that Ubuntu now provides a DPDK enabled Open vSwitch package in Ubuntu
15.10 and for Ubuntu 14.04 using the Liberty UCA; you can use it as follows:

   sudo add-apt-repository cloud-archive:liberty  # not required for 15.10
   sudo apt-get update
   sudo apt-get install openvswitch-switch-dpdk
   sudo update-alternatives --set
ovs-vswitchd /usr/lib/openvswitch-switch-dpdk/ovs-vswitchd-dpdk
   sudo service openvswitch-switch restart

DPDK configuration is passed through OVS using
/etc/default/openvswitch-switch (read it for details).  The dpdk package also
provides some basic configuration mechanisms for hugepages and binding
networking adapters to userspace in /etc/dpdk.

This is a first packaging release and is experimental - please do provide
feedback on what you do and don't like about it...

On Wed, Nov 18, 2015 at 6:12 AM, Prathyusha Guduri <
prathyushaconne...@gmail.com> wrote:

> Thanks a lot Sean, that was helpful.
>
> Changing the target from ivshmem to native-linuxapp removed the error and
> it doesn't hang at creating external bridge anymore.
> All processes(nova-api, neutron, ovs-vswitchd, etc) did start.
>
> Thanks,
> Prathyusha
>
> On Tue, Nov 17, 2015 at 7:57 PM, Mooney, Sean K 
> wrote:
>
>> We mainly test with 2M hugepages not 1G however our ci does use 1G pages.
>>
>> We recently noticed a different but unrelated issue with using
>> the ivshmem target when building dpdk.
>>
>> (https://bugs.launchpad.net/networking-ovs-dpdk/+bug/1517032)
>>
>>
>>
>> Instead of modifying dpdk can you try
>>
>> Changing the default dpdk build target to x86_64-native-linuxapp-gcc.
>>
>>
>>
>> This can be done by  adding
>>
>>
>>
>> RTE_TARGET=x86_64-native-linuxapp-gcc to the local.conf
>>
>> And removing the following file to force a rebuild
>> “/opt/stack/ovs/BUILD_COMPLETE”
>>
>>
>>
>> I agree with your assessment though this appears to be a timing issue in
>> dpdk 2.0
>>
>>
>>
>>
>>
>>
>>
>> *From:* Prathyusha Guduri [mailto:prathyushaconne...@gmail.com]
>> *Sent:* Tuesday, November 17, 2015 1:42 PM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [networking-ovs-dpdk]
>>
>>
>>
>> Here is stack.sh log -
>>
>>
>> 2015-11-17 13:38:50.010 | Loading uio module
>> 2015-11-17 13:38:50.028 | Loading DPDK UIO module
>> 2015-11-17 13:38:50.038 | starting ovs db
>> 2015-11-17 13:38:50.038 | binding nics
>> 2015-11-17 13:38:50.039 | starting vswitchd
>> 2015-11-17 13:38:50.190 | sudo RTE_SDK=/opt/stack/DPDK-v2.0.0
>> RTE_TARGET=build /opt/stack/DPDK-v2.0.0/tools/dpdk_nic_bind.py -b igb_uio
>> :07:00.0
>> 2015-11-17 13:38:50.527 | sudo ovs-vsctl --no-wait --may-exist add-port
>> br-eth1 dpdk0 -- set Interface dpdk0 type=dpdk
>> 2015-11-17 13:38:51.671 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:38:52.685 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:38:53.702 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:38:54.720 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:38:55.733 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:38:56.749 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:38:57.768 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:38:58.787 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:38:59.802 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:39:00.818 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:39:01.836 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:39:02.849 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:39:03.866 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:39:04.884 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:39:05.905 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:39:06.923 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:39:07.937 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:39:08.956 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:39:09.973 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:39:10.988 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:39:12.004 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:39:13.022 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:39:14.040 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:39:15.060 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:39:16.073 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:39:17.089 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:39:18.108 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:39:19.121 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:39:20.138 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:39:21.156 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:39:22.169 | Waiting for ovs-vswitchd to start...
>> 2015-11-17 13:39:23.185 | Waiting for ovs-vswitchd to start...
>>
>>
>>
>> On Tue, Nov 17, 2015 at 6:50 PM, 

Re: [openstack-dev] [networking-ovs-dpdk]

2015-11-18 Thread Mooney, Sean K
Hi that is great to know.
I will internally report this behavior to our dpdk team
But I have already got a patch to change our default target to native-linuxapp
https://review.openstack.org/#/c/246375/ which should merge shortly.
Im glad it is now working for you.

From: Prathyusha Guduri [mailto:prathyushaconne...@gmail.com]
Sent: Wednesday, November 18, 2015 6:13 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-ovs-dpdk]

Thanks a lot Sean, that was helpful.
Changing the target from ivshmem to native-linuxapp removed the error and it 
doesn't hang at creating external bridge anymore.
All processes(nova-api, neutron, ovs-vswitchd, etc) did start.
Thanks,
Prathyusha

On Tue, Nov 17, 2015 at 7:57 PM, Mooney, Sean K 
> wrote:
We mainly test with 2M hugepages not 1G however our ci does use 1G pages.
We recently noticed a different but unrelated issue with using the 
ivshmem target when building dpdk.
(https://bugs.launchpad.net/networking-ovs-dpdk/+bug/1517032)

Instead of modifying dpdk can you try
Changing the default dpdk build target to x86_64-native-linuxapp-gcc.

This can be done by  adding

RTE_TARGET=x86_64-native-linuxapp-gcc to the local.conf
And removing the following file to force a rebuild 
“/opt/stack/ovs/BUILD_COMPLETE”

I agree with your assessment though this appears to be a timing issue in dpdk 
2.0



From: Prathyusha Guduri 
[mailto:prathyushaconne...@gmail.com]
Sent: Tuesday, November 17, 2015 1:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-ovs-dpdk]

Here is stack.sh log -

2015-11-17 13:38:50.010 | Loading uio module
2015-11-17 13:38:50.028 | Loading DPDK UIO module
2015-11-17 13:38:50.038 | starting ovs db
2015-11-17 13:38:50.038 | binding nics
2015-11-17 13:38:50.039 | starting vswitchd
2015-11-17 13:38:50.190 | sudo RTE_SDK=/opt/stack/DPDK-v2.0.0 RTE_TARGET=build 
/opt/stack/DPDK-v2.0.0/tools/dpdk_nic_bind.py -b igb_uio :07:00.0
2015-11-17 13:38:50.527 | sudo ovs-vsctl --no-wait --may-exist add-port br-eth1 
dpdk0 -- set Interface dpdk0 type=dpdk
2015-11-17 13:38:51.671 | Waiting for ovs-vswitchd to start...
2015-11-17 13:39:23.185 | Waiting for ovs-vswitchd to start...

On Tue, Nov 17, 2015 at 6:50 PM, Prathyusha Guduri 
> wrote:
Hi Sean,
Here is ovs-vswitchd.log

2015-11-13T12:48:01Z|1|dpdk|INFO|User-provided -vhost_sock_dir in use: 
/var/run/openvswitch
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Detected lcore 2 as core 2 on socket 0
EAL: Detected lcore 3 as core 3 on socket 0
EAL: Detected lcore 4 as core 4 on socket 0
EAL: Detected lcore 5 as core 5 on socket 0
EAL: Detected lcore 6 as core 0 on socket 0
EAL: Detected lcore 7 as core 1 on socket 0
EAL: Detected lcore 8 as core 2 on socket 0
EAL: Detected lcore 9 as core 3 on socket 0
EAL: Detected lcore 10 as core 4 on socket 0

Re: [openstack-dev] [Ironic] [OSC] Quick poll: OpenStackClient command for provision action

2015-11-18 Thread Sam Betts (sambetts)
I think all the filtering etc. that exists on the current CLI should move over 
to OSC; I personally find things like --fields super useful.


+1 to removing "chassis show --nodes" and making it part of node list.


+1 to deploy, instead of activate. Jim also suggested provision. WDYT?


I'd only chosen boot and shutdown because they were the only one-word synonyms I 
could think of for power on and power off; if everyone else is happy with 
poweron and poweroff, then so am I :) I'm also not sure what to do about 
maintenance mode, maybe something like quarantine and unquarantine? I quite 
like ignore, as it's descriptive of what's actually happening, but I'm unsure of 
the best antonym for it; I was thinking acknowledge or something like that.


Here's a revised list of commands based on everyone's suggestions so far:


openstack baremetal [node/driver/chassis/port] list [For ports --node, For 
nodes --chassis]

openstack baremetal [node/driver/chassis/port] show UUID [For nodes --states, 
For driver --properties]


openstack baremetal [node/chassis/port] create

openstack baremetal [node/chassis/port] update UUID

openstack baremetal [node/chassis/port] delete UUID


openstack baremetal node provide UUID

openstack baremetal node deploy UUID

openstack baremetal node rebuild UUID

openstack baremetal node inspect UUID

openstack baremetal node validate UUID

openstack baremetal node manage UUID

openstack baremetal node abort UUID

openstack baremetal node poweron UUID

openstack baremetal node poweroff UUID

openstack baremetal node reboot UUID


openstack baremetal node ignore UUID

openstack baremetal node acknowledge UUID


openstack baremetal node console [--enable, --disable] UUID

openstack baremetal node boot-device [--supported, --set CDROM, PXE, DISK] UUID


openstack baremetal [node/driver] vendor NAME_OR_UUID METHOD


WDYT?


Sam
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][infra] Getting a bleeding edge libvirt gate job running

2015-11-18 Thread Kashyap Chamarthy
On Wed, Nov 18, 2015 at 06:46:58AM +1100, Ian Wienand wrote:
> On 11/18/2015 06:10 AM, Markus Zoeller wrote:
> > This was a trigger to see if we can create a gate job which utilizes the
> > latest, bleeding edge, version of libvirt to test such features.
> >
> > * Is someone already working on something like that and I missed it?
> 
> I believe the closest we have got is probably [1]; pulling apart some
> of the comments there might be helpful

After months of review, and despite being close to merging, the "Support
for libvirt/QEMU tar releases" patch you refer to below seems to have been
abandoned by the contributor.

It is functionally fine and just ought to be moved into an external plugin (as
you note below), with the relevant glue code added in DevStack to call it.

> In short, a devstack plugin that installs the latest libvirt is
> probably the way to go.
>
> Ongoing, the only issue I see is that we do things to base devstack
> that conflict/break this plugin, as we are more-or-less assuming the
> distro version (see things like [2], stuff like this comes up every
> now and then).
> 
> >* If 'no', is there already a repo which contains the very latest
> >   libvirt builds which we can utilize?
> 
> For Fedora, there is virt-preview  [3] at least

Yep, it is actively maintained.

> 
> [1] https://review.openstack.org/#/c/108714/
> [2] https://review.openstack.org/246501
> [3] https://fedoraproject.org/wiki/Virtualization_Preview_Repository

-- 
/kashyap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][infra] Getting a bleeding edge libvirt gate job running

2015-11-18 Thread James Page
Hi Markus

On Tue, Nov 17, 2015 at 7:10 PM, Markus Zoeller  wrote:

> Background
> ==
> The blueprint [1] wants to utilize the *virtlogd* logging daemon from
> libvirt. Among others to solve bug [2], one of our oldest ones. The
> funny part is, that this libvirt feature is still in development. This
> was a trigger to see if we can create a gate job which utilizes the
> latest, bleeding edge, version of libvirt to test such features. We
> discussed it shortly in IRC [3] (tonyb, bauzas, markus_z) and wanted to
> get some feedback here. The summary of the idea is:
> * Create a custom repo which contains the latest libvirt version
> * Enhance Devstack so that it can point to a custom repo to install
>   the built libvirt packages
> * Have a nodepool image which is compatible with the libvirt packages
> * In case of [1]: check if tempest needs further/changed tests
>
> Open questions
> ==
> * Is someone already working on something like that and I missed it?
> * If 'no', is there already a repo which contains the very latest
>   libvirt builds which we can utilize?
>

What are your requirements for libvirt and qemu?  The Liberty UCA for
Ubuntu has the following versions:

  libvirt: 1.2.16
  qemu: 2.3

If that's useful, these can be added to any Ubuntu 14.04 system using:

  sudo add-apt-repository cloud-archive:liberty

versions will be updated during Xenial development to later releases which
will be backported to 14.04 as part of the Mitaka UCA.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all][osprofiler] OSprofiler is dead, long live OSprofiler

2015-11-18 Thread Boris Pavlovic
Hi stackers,

I updated OSprofiler spec: https://review.openstack.org/#/c/103825/ reviews
are required.


Best regards,
Boris Pavlovic



On Mon, Nov 9, 2015 at 2:57 AM, Boris Pavlovic  wrote:

> Hi stackers,
>
> Intro
> ---
>
> It's not a big secret that OpenStack is a huge and complicated ecosystem of
> different services that are working together to implement the OpenStack API.
>
> For example booting VM is going through many projects and services:
> nova-api, nova-scheduler, nova-compute, glance-api, glance-registry,
> keystone, cinder-api, neutron-api... and many others.
>
> The question is how to understand what part of the request takes the most
> of the time and should be improved. It's especially interesting to get such
> data under load.
>
> To make this simple, I wrote OSProfiler, a tiny library that should be
> added to all OpenStack projects to create a cross-project/service
> tracer/profiler.
>
> Demo (trace of CLI command: nova boot) can be found here:
> http://boris-42.github.io/ngk.html
>
> This library is very simple. For those who want to know how it works and
> how it's integrated with OpenStack, take a look here:
> https://github.com/openstack/osprofiler/blob/master/README.rst
>
> What is the current status?
> ---
>
> Good news:
> - OSprofiler is mostly done
> - OSprofiler is integrated with Cinder, Glance, Trove & Ceilometer
>
> Bad news:
> - OSprofiler is not integrated into several important projects: Keystone,
> Nova, Neutron
> - OSprofiler can use only Ceilometer + oslo.messaging as a backend
> - OSprofiler stores part of its arguments in api-paste.ini and part in
> project.conf, which is a terrible thing
> - There is no DSVM job that checks that changes in OSprofiler don't break
> the projects that are using it
> - It's hard to enable OSprofiler in DevStack
>
> Good news:
> I spent some time and wrote 4 specs that should address most of the issues:
> https://github.com/openstack/osprofiler/tree/master/doc/specs
>
> Let's make it happen in Mitaka!
>
> Thoughts?
> By the way, would somebody like to join this effort? :)
>
> Best regards,
> Boris Pavlovic
>
>
>


Re: [openstack-dev] [networking-ovs-dpdk]

2015-11-18 Thread Prathyusha Guduri
Hi James,

Thanks for the information. Will experiment on it and share my
observations.

Sean - I guess this timing issue is fixed in DPDK 2.1. Will try with that
and let you know. Anyway, thanks for the help

Regards,
Prathyusha

On Wed, Nov 18, 2015 at 6:00 PM, Mooney, Sean K 
wrote:

> Hi james
>
> Yes we are planning on testing the packaged release to see if it is
> compatible with our ml2 driver and the
>
> Changes we are submitting upstream. If it is we will add a use binary flag
> to our devstack plugin to skip the
>
> Compilation step and use that instead on 15.10 or 14.04
> cloud-archive:liberty
>
>
>
> As part of your packaging, did ye fix pciutils to correctly report the
> unused drivers when an interface is bound to
> the dpdk driver? Also, does it support both the igb_uio and vfio-pci
> drivers for dpdk interfaces?
>
>
>
> Anyway yes I hope to check it out and see what ye have done. When
> ovs-dpdk starts getting packaged in more operating systems
>
> We will probably swap our default to the binary install though we will
> keep the source install option as it allows us to work on new features
>
> Before they are packaged and to have better performance.
>
>
>
> Regards
>
> sean
>
>
>
>
>
> *From:* James Page [mailto:james.p...@ubuntu.com]
> *Sent:* Wednesday, November 18, 2015 11:55 AM
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [networking-ovs-dpdk]
>
>
>
> Hi All
>
>
>
> Reading through this thread I thought participants might be interested to
> know that Ubuntu now provides a DPDK enabled Open vSwitch package in Ubuntu
> 15.10 and for Ubuntu 14.04 using the Liberty UCA; you can use it as follows:
>
>
>
>sudo add-apt-repository cloud-archive:liberty  # not required for 15.10
>
>sudo apt-get update
>
>sudo apt-get install openvswitch-switch-dpdk
>
>sudo update-alternatives --set
> ovs-vswitchd /usr/lib/openvswitch-switch-dpdk/ovs-vswitchd-dpdk
>
>sudo service openvswitch-switch restart
>
>
>
> DPDK configuration is passed through OVS using
> /etc/default/openvswitch-switch (read the file for details).  The dpdk package also
> provides some basic configuration mechanisms for hugepages and binding
> networking adapters to userspace in /etc/dpdk.
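For the curious, a fragment of that file might look something like the following; the variable name and flags here are an assumption on my part, so treat the comments shipped in the file itself as authoritative:

```ini
# /etc/default/openvswitch-switch (illustrative fragment)
# Extra options passed to ovs-vswitchd when the DPDK build is selected:
# here, run DPDK on core 1 (mask 0x2) with 4 memory channels.
DPDK_OPTS='--dpdk -c 0x2 -n 4'
```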
>
>
>
> This is a first packaging release and is experimental - please do provide
> feedback on what you do and don't like about it...
>
>
>
> On Wed, Nov 18, 2015 at 6:12 AM, Prathyusha Guduri <
> prathyushaconne...@gmail.com> wrote:
>
> Thanks a lot Sean, that was helpful.
>
> Changing the target from ivshmem to native-linuxapp removed the error and
> it doesn't hang at creating external bridge anymore.
>
> All processes(nova-api, neutron, ovs-vswitchd, etc) did start.
>
> Thanks,
>
> Prathyusha
>
>
>
> On Tue, Nov 17, 2015 at 7:57 PM, Mooney, Sean K 
> wrote:
>
> We mainly test with 2M hugepages not 1G however our ci does use 1G pages.
>
> We recently noticed a different but unrelated issue with using the
> ivshmem target when building dpdk.
>
> (https://bugs.launchpad.net/networking-ovs-dpdk/+bug/1517032)
>
>
>
> Instead of modifying dpdk can you try
>
> Changing the default dpdk build target to x86_64-native-linuxapp-gcc.
>
>
>
> This can be done by  adding
>
>
>
> RTE_TARGET=x86_64-native-linuxapp-gcc to the local.conf
>
> And removing the following file to force a rebuild
> “/opt/stack/ovs/BUILD_COMPLETE”
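Put together, the change Sean describes would look roughly like this; a sketch only, with the local.conf section following the networking-ovs-dpdk plugin's conventions as I understand them:

```ini
# local.conf (devstack)
[[local|localrc]]
# Build DPDK without the ivshmem target to avoid the bug above
RTE_TARGET=x86_64-native-linuxapp-gcc
```

followed by removing /opt/stack/ovs/BUILD_COMPLETE so the plugin rebuilds OVS/DPDK on the next stack.sh run.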
>
>
>
> I agree with your assessment though this appears to be a timing issue in
> dpdk 2.0
>
>
>
>
>
>
>
> *From:* Prathyusha Guduri [mailto:prathyushaconne...@gmail.com]
> *Sent:* Tuesday, November 17, 2015 1:42 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [networking-ovs-dpdk]
>
>
>
> Here is stack.sh log -
>
>
> 2015-11-17 13:38:50.010 | Loading uio module
> 2015-11-17 13:38:50.028 | Loading DPDK UIO module
> 2015-11-17 13:38:50.038 | starting ovs db
> 2015-11-17 13:38:50.038 | binding nics
> 2015-11-17 13:38:50.039 | starting vswitchd
> 2015-11-17 13:38:50.190 | sudo RTE_SDK=/opt/stack/DPDK-v2.0.0
> RTE_TARGET=build /opt/stack/DPDK-v2.0.0/tools/dpdk_nic_bind.py -b igb_uio
> :07:00.0
> 2015-11-17 13:38:50.527 | sudo ovs-vsctl --no-wait --may-exist add-port
> br-eth1 dpdk0 -- set Interface dpdk0 type=dpdk
> 2015-11-17 13:38:51.671 | Waiting for ovs-vswitchd to start...
> 2015-11-17 13:38:52.685 | Waiting for ovs-vswitchd to start...
> 2015-11-17 13:38:53.702 | Waiting for ovs-vswitchd to start...
> 2015-11-17 13:38:54.720 | Waiting for ovs-vswitchd to start...
> 2015-11-17 13:38:55.733 | Waiting for ovs-vswitchd to start...
> 2015-11-17 13:38:56.749 | Waiting for ovs-vswitchd to start...
> 2015-11-17 13:38:57.768 | Waiting for ovs-vswitchd to start...
> 2015-11-17 13:38:58.787 | Waiting for ovs-vswitchd to start...
> 2015-11-17 13:38:59.802 | Waiting for ovs-vswitchd to start...
> 2015-11-17 13:39:00.818 | Waiting for ovs-vswitchd to start...
> 2015-11-17 

Re: [openstack-dev] [nova][infra] Getting a bleeding edge libvirt gate job running

2015-11-18 Thread Daniel P. Berrange
On Wed, Nov 18, 2015 at 05:18:28PM +1100, Tony Breeds wrote:
> On Tue, Nov 17, 2015 at 03:32:45PM -0800, Jay Pipes wrote:
> > On 11/17/2015 11:10 AM, Markus Zoeller wrote:
> > >Background
> > >==
> > >The blueprint [1] wants to utilize the *virtlogd* logging deamon from
> > >libvirt. Among others to solve bug [2], one of our oldest ones. The
> > >funny part is, that this libvirt feature is still in development. This
> > >was a trigger to see if we can create a gate job which utilizes the
> > >latest, bleeding edge, version of libvirt to test such features. We
> > >discussed it shortly in IRC [3] (tonyb, bauzas, markus_z) and wanted to
> > >get some feedback here. The summary of the idea is:
> > >* Create a custom repo which contains the latest libvirt version
> > >* Enhance Devstack so that it can point to a custom repo to install
> > >   the built libvirt packages
> > >* Have a nodepool image which is compatible with the libvirt packages
> > >* In case of [1]: check if tempest needs further/changed tests
> > >
> > >Open questions
> > >==
> > >* Is already someone working on something like that and I missed it?
> > 
> > Sean (cc'd) might have some information on what he's doing in the OVS w/
> > DPDK build environment, which AFAIK requires a later build of libvirt than
> > available in most distros.
> > 
> > >* If 'no', is there already a repo which contains the very latest
> > >   libvirt builds which we can utilize?
> > >
> > >I haven't done anything with the gates before, which means there is a
> > >very high chance I'm missing something or missunderstanding a concept.
> > >Please let me know what you think.
> > 
> > A generic "build libvirt or OVS from this source repo" dsvm job would be
> > great I think. That would allow overrides in ENV variables to point the job
> > to a URI for grabbing sources of OVS (DPDK OVS, mainline OVS) or libvirt
> > that would be built into the target nodepool images.
> 
> I was really hoping to decouple the build from the dsvm jobs.  My initial
> thought was to add a devstack plugin that adds $repo and then upgrades
> $packages.  I wanted to decouple the build from the install as I assumed that
> the delays in building libvirt (etc.) would be problematic *and* provide
> another failure mode for devstack that we really don't want to deal with.
> 
> I was only thinking of having libvirt and qemu in there but if the plug-in
> was abstract enough it could easily provide packages for other helper utils
> (like OVS and DPDK).
> 
> When I started looking at this, Ubuntu was the likely candidate as Fedora
> in the gate wasn't really a stable thing.  I see a little more Fedora in
> nodepool so perhaps a really quick win would be to just use the
> virt-preview repository on F22.

Trying to build from bleeding edge is just a can of worms as you'll need to
have someone baby-sitting the job to fix it up on new releases when the
list of build deps changes or build options alter. As an example, next
QEMU release will require you to pull in 3rd party libcacard library
for SPICE build, since it was split out, so there's already a build
change pending that would cause a regression in the gate.

So, my recommendation would really be to just use Fedora with virt-preview
for the bleeding edge and avoid trying to compile stuff in the gate. The
virt-preview repository tracks upstream releases of QEMU+Libvirt+libguestfs
with minimal delay and is built with the same configuration as future Fedora
releases will use. So such testing is good evidence that Nova won't break on
the next Fedora release.

Regards,
Daniel
-- 
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Product] [nfv][telco][product] Future of telco working group and weekly meeting reminder

2015-11-18 Thread Steve Gordon
- Original Message -
> From: "Shamail" 
> To: "Steve Gordon" 
> 
> Hi Steve,
> 
> > On Nov 18, 2015, at 6:39 AM, Steve Gordon  wrote:
> > 
> > Hi all,
> > 
> > Back in Vancouver [1] we began discussing the growing overlap between OPNFV
> > requirements projects and the current mission of the telco working group.
> > During this period the product working group, also in part focused on
> > recording and prioritizing user stories, has also been hitting its straps.
> > As we have recently lost a couple of core members of the telco working
> > group particularly on the technical side due to role changes etc. I think
> > it is time to roll its activities into these efforts.
> > 
> > With that in mind I would like to propose:
> > 
> > * Submitting the existing telcowg-usecases to the openstack-user-stories
> > repository
> > * Engaging within the product working group (assuming they will have us ;))
> > as owners of these user stories
> This is a very similar model to what the enterprise WG did recently as well.
> Please let me know if I can do anything to help with the transition of the
> user stories.
> > 
> > There is of course still a need to actually *implement* the requirements
> > exposed by these user stories but I am hopeful that together we can work
> > through a common process for this rather than continuing to attack it
> > separately. I would personally still like to see greater direct engagement
> > from service providers, but it seems like OPNFV and/or the OpenStack User
> > Committee [2] itself might be the right place for this going forward.
> > 
> > I'd like to discuss this proposal further in the weekly meeting. This
> > week's Telco Working Group meeting is scheduled for Wednesday the 18th at
> > 1400 UTC. As always the agenda etherpad is here:
> > 
> >   https://etherpad.openstack.org/p/nfv-meeting-agenda
> Would it make sense for someone else (besides yourself :)) from the Product
> WG to join this session for Q&A as well?

The more the merrier as they say... ;)

-Steve



Re: [openstack-dev] [QA][Tempest] Deprecate or drop about Tempest CLI

2015-11-18 Thread Andrea Frittoli
On Wed, Nov 18, 2015 at 8:16 AM Masayuki Igawa 
wrote:

> On Wed, Nov 18, 2015 at 5:37, Matthew Treinish wrote:
>
>> On Tue, Nov 17, 2015 at 01:39:34AM +, Masayuki Igawa wrote:
>> > Hi tempest CLI users and developers,
>> >
>> > Now, we are improving Tempest CLI as we discussed at the summit[1] and
>> >  migrating legacy commands to tempest cli[2].
>> >
>> > In this situation, my concern is 'CLI compatibility'.
>> > If we'll drop old CLIs support(like my patch), it might break existing
>> > workflows.
>> >
>> > So I think there are two options.
>> >  1. Deprecate old tempest CLIs in this Mitaka cycle and we'll drop them
>> at
>> > the beginning of the N cycle.
>> >  2. Drop old tempest CLIs in this Mitaka cycle when new CLIs will be
>> > implemented.
>> > # Actually, I'd like to just drop old CLIs. :)
>>
>> As would I, but I agree we need to worry about existing users out there
>> and
>> being nice to them.
>>
>> I think we should do option #1, but in [2] we also maintain the old entry
>> point.
>> We also have the function called by the old entry points emit a single
>> warning
>> that says to use the new CLI. This way we have the new entry points all
>> setup
>> and we indicate that everyone should move over to them.
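The pattern Matt describes, keeping the old entry point but having it emit a single deprecation warning and delegate to the new CLI, might be sketched like this (illustrative names only, not the actual tempest code):

```python
import sys
import warnings


def new_verify_config(argv):
    # Stub standing in for the new `tempest verify-config` implementation.
    return 0


def legacy_verify_config():
    """Old console-script entry point, kept for one cycle.

    Existing workflows (devstack, user scripts) keep working, but get a
    single warning pointing at the new command.
    """
    warnings.warn(
        "tempest-verify-config is deprecated; use 'tempest verify-config'",
        DeprecationWarning)
    return new_verify_config(sys.argv[1:])
```

setup.cfg would then list both console_scripts, with the old name pointing at the deprecated shim until it is removed in the N cycle.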
>>
>> If you started with the tempest-verify-config script you actually would
>> fail because devstack currently uses it. So doing the switchover
>> gracefully I
>> think would be best, because this is the same issue users will
>> potentially hit.
>>
>
> Thank you for clarifying.
> Sure, it sounds reasonable to me. I'll try to maintain the old entry
> point, too.
>

+1, thank you!


>
> Best Regards,
> -- Masayuki Igawa
>
>
>
>>
>> >
>
>
>> > If you have question and/or opinion, please let me know.
>> >
>> > [1] https://etherpad.openstack.org/p/tempest-cli-improvements
>> > [2] https://review.openstack.org/#/c/240399/
>> >
>>
>> -Matt Treinish
>>


Andrea Frittoli


[openstack-dev] [ironic][inspector][documentation] any suggestions

2015-11-18 Thread Serge Kovaleff
Hi Stackers,

I am going to help my team with the Inspector installation instruction.

Any ideas or suggestions what and how to contribute back to the community?

I see that Ironic Inspector could benefit from documentation efforts. The
repo has no doc folder or auto-generated documentation.

Cheers,
Serge Kovaleff


Re: [openstack-dev] [Heat]

2015-11-18 Thread P. Ramanjaneya Reddy
Can somebody give guidance on how to contribute Heat code for a new feature?
If there is a reference document, please share it.

On Tue, Nov 17, 2015 at 11:40 PM, Steven Hardy  wrote:

> On Tue, Nov 17, 2015 at 05:09:16PM +, Pratik Mallya wrote:
> >Hello Heat team,
> >I recently proposed a patch https://review.openstack.org/#/c/246481/
> which
> >changes some events when resources are signaled. I was wondering if
> anyone
> >has any downstream services that depend on the events being in the
> format
> >specified, since this patch would break that expectation.
> >(Not sure if this is the correct way to solicit this information.
> Please
> >let me know alternatives, if any :) )
>
> I just commented on the review and the bug.
>
> IMHO we can't do this, it's been part of our user-visible API for as long
> as I can remember, thus there's bound to be some folks relying on that
> information to determine if a SIGNAL event has happened.
>
> There are other ways to determine the stack action when a signal event
> happens, so IMHO this should be postponed until we decide to do a v2 API,
> and even then we'll need to maintain compatibility with the old format for
> v1.
>
> You could add it to the v2 API wishlist spec:
>
> https://review.openstack.org/#/c/184863/
>
> Steve
>


[openstack-dev] [ironic][tempest]Tempest test plugin and microversions

2015-11-18 Thread Yuiko Takada
Hi Ironic folks,

As we discussed in Design Summit, we will move forward with tempest test
tasks.
I've posted a patch for the tempest test plugin interface [1]
(it currently fails because of a flake8 ignore-list difference, anyway).

Then, I'd like to discuss our test migration procedure.
As you know, we are also working on Tempest microversions, so
our tests in Tempest need to be fixed to work with microversions.
The spec has been approved and the microversion testing framework has
now been posted [2].
IMO, the tests in Tempest should be fixed before moving them into the
Ironic tree, but maybe [2] will take a long time to be merged.
What do you think?

[1] https://review.openstack.org/#/c/246161/
[2] https://review.openstack.org/#/c/242346/


Best Regards,
Yuiko Takada


Re: [openstack-dev] [stable] Making stable maintenance its own OpenStack project team

2015-11-18 Thread Thierry Carrez
Matt Riedemann wrote:
> [...]
> This is basically what I see for PTL of a stable maint team:
> 
> 1. Helping projects deal with stable branch policy and process. We are
> decentralizing now but I suspect there will still be a lot of learning
> and cat herding needed in individual projects. This includes being
> around to help (I'm always in #openstack-stable) and helping to
> coordinate point releases as needed.
> 
> 2. Keeping the CI system working on stable. This is already something
> I've been doing on stable for a long time so that just continues. It
> also means keeping a finger on how the periodic nightly jobs are doing
> and getting projects involved/aware when their stable branches are
> failing (bringing that up in the mailing list, project meetings or a
> cross project meeting as needed). An extension of this is seeing what we
> can do to try and keep stable branches around longer, which is clearly
> something the operator community has been wanting.
> 
> 3. Mentoring. A big part of this thread is talking about a lack of
> resources to work on stable branch issues (usually CI). So part of
> mentoring here is identifying and helping people that are wanting to
> work in this space. A perfect example is tonyb and the work he's done
> the past several months on tracking gate failures and blocking issues.
> It also includes keeping track of who's doing a good job with reviews
> and seeing if they are ready for core.
> 
> 4. Identifying official tags for a project, which doesn't just mean they
> have a stable branch but that they abide by the stable branch policy
> (what's appropriate to backport) and that they are maintaining their
> stable branches; this means actually reviewing proposed changes, being
> proactive about backporting high severity fixes, and keeping their CI
> jobs clean.
> 
> 5. Looking for ways to improve processes and tooling. One thing that has
> come up in this thread was a need for tooling to identify backportable
> changes (e.g. a high severity bug is marked as backport potential in
> launchpad but no one has done the backport yet). I'm also not entirely
> sure we have something today that is tracking stable branch review stats
> so that we can identify people that are doing (or cores not doing) reviews.
> 
> So this is a bit long-winded but it's what I see as being important for
> a separate team and PTL to do in this space. If we're going to do this,
> I want to make sure these things are covered and I think I'm in a
> position to do that.

This is a very good summary. I think it's critical to have good
representation of the "gate fixing" part of the role: without it, the
stable team is basically useless and totally dependent on another group,
with all the tension we had in the past. Growing the next generation of
gate fixers is essential.

I'm mostly away this week at a conference, but now that we have a list
of names, we should set up a meeting early next week to further discuss
team goals and initial leadership.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [keystone][stable] nominating lin hua cheng for keystone-stable-maint

2015-11-18 Thread Thierry Carrez
Steve Martinelli wrote:
> I'd like to nominate Lin Hua Cheng for keystone-stable-maint. He has
> been doing reviews on keystone's liberty and kilo stable branches since
> mitaka development has opened, and being a member of
> horizon-stable-maint, he is already familiar with stable branch
> policies. If there are no objections from the current stable-maint-core
> and keystone-stable-maint teams, then I'd like to add him.

+1, he already knows and applies stable policy !

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] Is there any way I can completely erase all the data when deleting a cinder volume

2015-11-18 Thread Gorka Eguileor
On 18/11, Young Yang wrote:
> There are some sensitive data in my volume.
> I hope OpenStack can completely erase all the data (e.g. overwrite the
> whole volume with 0 bits) when deleting a cinder volume.
> 
> I plan to write some code to make OpenStack mount that volume and
> rewrite the whole volume with 0 bits.
> 
> But I'm wondering if there is any better way to accomplish that.
> 
> Thanks in advance! :)



Hi,

Cinder already does that by default.

Clearing of deleted volumes is controlled by the "volume_clear"
configuration option, which has a default of "zero".

Available values are "none", "zero" and "shred".
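In cinder.conf that looks like the following; the option names are as documented for Cinder at the time, but double-check them against your release:

```ini
[DEFAULT]
# How deleted volumes are wiped: "none", "zero" (the default) or "shred"
volume_clear = zero
# Size in MiB to wipe at the start of the volume; 0 means the whole volume
volume_clear_size = 0
```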

Cheers,
Gorka.



Re: [openstack-dev] [murano]How to use Murano to transmit files to Mistral and execute scripts on Mistral

2015-11-18 Thread WANG, Ming Hao (Tony T)
Dear Alexander and Murano developers and testers,

Any suggestion for this? ☺

Thanks,
Tony

From: WANG, Ming Hao (Tony T)
Sent: Tuesday, November 17, 2015 6:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: RE: [openstack-dev] [murano]How to use Murano to transmit files to 
Mistral and execute scripts on Mistral

Alexander,

Thanks for your response.
During the workflow running stage, it may need to access some other artifacts.
In my case, I use Mistral workflow to call Ansible playbook, and I need Murano 
to put Ansible playbook into right place on Mistral server so that Mistral 
workflow can find it.

Thanks,
Tony

From: Alexander Tivelkov [mailto:ativel...@mirantis.com]
Sent: Tuesday, November 17, 2015 4:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [murano]How to use Murano to transmit files to 
Mistral and execute scripts on Mistral

Hi Tony,

Probably I am missing something, but why do you need Murano Agent to interact 
with Mistral? You can call Mistral APIs right from MuranoPL code being executed 
by Murano Engine. Murano's core library contains the 
io.murano.system.MistralClient class ([1]) which may be used to upload and run 
mistral workflows.

Please let me know if you need more details on this

[1]  
https://github.com/openstack/murano/blob/master/murano/engine/system/mistralclient.py

On Tue, Nov 17, 2015 at 5:07 AM WANG, Ming Hao (Tony T) wrote:
Dear Murano developers and testers,

I want to put some files that Mistral workflow needs into Murano package, and 
hope Murano can transmit them to Mistral before it calls Mistral workflow.
The flow should be as following:

1.   User uploads one Murano package which includes both Murano 
artifacts and Mistral artifacts to Murano;

2.   Murano transmits the Mistral artifacts to Mistral, and 
Mistral does its work.

After some study, I thought muranoagent might be helpful and planned to
install a muranoagent on the Mistral server, since it can put files onto a
nova server and run scripts on it.
After further study, I found the muranoagent solution may not be feasible:

1.   muranoagent and murano-engine (DSL) use RabbitMQ to
communicate.

2.   When an Agent object is created in DSL, murano-engine 
creates a unique message queue to communicate with the muranoagent in nova 
instance:
The queue name consists of current murano environment id, and the nova instance 
murano object id.

3.   While Murano creates the nova instance, it passes this
unique queue name via nova user_data to the muranoagent on the guest.
In this way, muranoagents on different guests can communicate with 
murano-engine separately.
This doesn’t suit the muranoagent + Mistral server solution.
We only want to install one muranoagent on the Mistral server, and it should
only listen on one message queue.
We can't create a new muranoagent for each murano environment.
To achieve this, one solution I can think of is to modify the murano code to
implement a new MistralAgent that listens on a pre-defined message queue.

Could you please share your ideas about this?
If you have another solution, please also help to share it. ☺

Thanks in advance,
Tony

--
Regards,
Alexander Tivelkov


[openstack-dev] [QA][Tempest][Keystone] Re: [openstack-qa] [tempest][keystone]Inherit API was not found in API tempest.

2015-11-18 Thread Masayuki Igawa
Hi Koshiya-san,

Actually, we don't use openstack-qa ML for discussion now. So I added
openstack-dev ML to CC.

> On Wed, Nov 18, 2015 at 16:31, koshiya maho wrote:
>
> Hi all,
>
> I'm creating Tempest tests that conform to the operational environment
> of the OpenStack deployment that we have built.
>
> As far as I can see in the community's Tempest API tests, the
> inherit API of Keystone is not covered.
>
> Does this exist somewhere? Is other work in progress related to this?
>
> If not, I want to challenge to post a patch of this.

Do you mean this API?
http://developer.openstack.org/api-ref-identity-v3-ext.html#identity_v3_OS-INHERIT-ext

I've found a client for the OS-TRUST extension API
in tempest/services/identity/v3/json/identity_client.py, but nothing for
OS-INHERIT.
So I think you can add an OS-INHERIT API client and tests, then post them :)
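A client method for that extension could be sketched along these lines. The URL follows the OS-INHERIT API reference linked above, but the class and method names are invented, and a fake REST client stands in for tempest's real one:

```python
class FakeRestClient:
    """Stands in for tempest's rest_client.RestClient (illustration only)."""

    def __init__(self):
        self.calls = []

    def put(self, url, body):
        # Record the call instead of making a real HTTP request.
        self.calls.append(("PUT", url))
        return 204, b""          # the API returns 204 No Content


class InheritedRolesClient(FakeRestClient):
    """Sketch of an OS-INHERIT client for the v3 identity API."""

    def assign_inherited_role_on_domains_user(self, domain_id, user_id,
                                              role_id):
        resp, _body = self.put(
            "OS-INHERIT/domains/%s/users/%s/roles/%s/inherited_to_projects"
            % (domain_id, user_id, role_id), None)
        return resp


client = InheritedRolesClient()
status = client.assign_inherited_role_on_domains_user("d1", "u1", "r1")
print(status)   # 204
```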

Best regards,
-- Masayuki Igawa

> --
> Maho Koshiya
> NTT Software Corporation
> E-Mail : koshiya.m...@po.ntts.co.jp


[openstack-dev] [keystone] Auth-token expiration time

2015-11-18 Thread ESWAR RAO
Hi All,

I have an application which performs some Heat transactions, creating a
stack, polling it, and updating the stack, in a loop.

In my setup, /etc/keystone/keystone.conf has below settings:

expiration=3600

But my transaction loop takes more than 1 hour, the auth token expires in
between, and I am getting:

 Exception in heat start task for stack
jsm-e08dceac-71fe-48e8-817e-93ec78881827
of network service e08dceac-71fe-48e8-817e-93ec78881827 due to

 ERROR: Authentication failed.Please try again with option
--include-password or export HEAT_INCLUDE_PASSWORD=1

Is there any way to keep the auth token from expiring, other than
changing the expiration time in the keystone conf?

Thanks

Eswar
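There is no safe way to make a token never expire; the usual answer is to re-authenticate when a request comes back 401, which is roughly what --include-password lets heatclient do. A toy sketch of the retry-on-expiry pattern, with a fake client in place of real keystone and Heat calls:

```python
class Unauthorized(Exception):
    """Stands in for the HTTP 401 a real client raises on an expired token."""


class Session:
    """Toy session that fetches a fresh token when the old one expires."""

    def __init__(self, auth_fn):
        self._auth_fn = auth_fn        # callable returning a fresh token
        self._token = auth_fn()

    def request(self, call):
        try:
            return call(self._token)
        except Unauthorized:
            # Token expired mid-loop: re-authenticate and retry once.
            self._token = self._auth_fn()
            return call(self._token)


# Demo: the first token has "expired" by the time the call is made.
tokens = iter(["expired-token", "fresh-token"])
session = Session(lambda: next(tokens))


def heat_update_stack(token):
    if token != "fresh-token":
        raise Unauthorized()
    return "UPDATE_COMPLETE"


result = session.request(heat_update_stack)
print(result)   # UPDATE_COMPLETE
```

As far as I know, keystoneclient's session objects implement essentially this logic for you, provided the session is constructed with credentials rather than a raw token.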


[openstack-dev] [openstack-ansible] Fedora/CentOS/other Support

2015-11-18 Thread Jesse Pretorius
Hi everyone,

There has been interest expressed from various corners to support more than
just Ubuntu in OpenStack-Ansible. While the project is very happy to work
on this, no-one has yet stepped up to take a lead role in the execution of
this work.

The current community has done some research into appropriate patterns to
use and has a general idea of how to do it - but in order to actually
execute there need to be enough people who commit to actually maintaining
the work once it's done. We don't want to carry the extra code if we don't
also pick up extra contributors to maintain the code.

If you're interested in seeing this become a reality and are prepared to
commit some time to making it happen, the project will be happy to work
with you to achieve it.

In this week's community meeting [1] we'll initiate a discussion on the
topic with the purpose of determining who's on board with the work.

In the following week's community meeting we will have a work breakdown
discussion where we'll work on filling out an etherpad [2] with what we
need to get done and who will be doing it. Naturally the etherpad can be
populated already by anyone who's interested. If you're actually happy to
volunteer to assist with the work, please feel free to add a note to that
effect on the etherpad.

[1]
https://wiki.openstack.org/wiki/Meetings/openstack-ansible#Agenda_for_next_meeting
[2] https://etherpad.openstack.org/p/openstack-ansible-multi-os-support

-- 
Jesse Pretorius
IRC: odyssey4me


[openstack-dev] Is there any way I can completely erase all the data when deleting a cinder volume

2015-11-18 Thread Young Yang
There are some sensitive data in my volume.
I hope openstack can completely erase all the data (e.g. overwrite the
whole volume with 0 bits) when deleting a cinder volume.

I plan to write some code to make OpenStack mount that volume and
rewrite the whole volume with 0 bits.

But I'm wondering if there is any better way to accomplish that.

Thanks in advance! :)


Re: [openstack-dev] [ironic][inspector][documentation] any suggestions

2015-11-18 Thread Dmitry Tantsur

On 11/18/2015 10:10 AM, Serge Kovaleff wrote:

Hi Stackers,

I am going to help my team with the Inspector installation instruction.


Hi, that's great!



Any ideas or suggestions what and how to contribute back to the community?

I see that Ironic Inspector could benefit from Documentation efforts.
The repo doesn't have a docs folder or any auto-generated documentation.


Creating proper documentation would be a great way to contribute :)
It's tracked as https://bugs.launchpad.net/ironic-inspector/+bug/1514803


Right now all documentation, including the installation guide, is in a 
couple of rst files in root:

https://github.com/openstack/ironic-inspector/blob/master/README.rst
https://github.com/openstack/ironic-inspector/blob/master/HTTP-API.rst



Cheers,
Serge Kovaleff






Re: [openstack-dev] [QA][Tempest] Deprecate or drop about Tempest CLI

2015-11-18 Thread Masayuki Igawa
On Wed, Nov 18, 2015 at 5:37, Matthew Treinish wrote:

> On Tue, Nov 17, 2015 at 01:39:34AM +, Masayuki Igawa wrote:
> > Hi tempest CLI users and developers,
> >
> > Now, we are improving Tempest CLI as we discussed at the summit[1] and
> >  migrating legacy commands to tempest cli[2].
> >
> > In this situation, my concern is 'CLI compatibility'.
> > If we'll drop old CLIs support(like my patch), it might break existing
> > workflows.
> >
> > So I think there are two options.
> >  1. Deprecate old tempest CLIs in this Mitaka cycle and we'll drop them
> at
> > the beginning of the N cycle.
> >  2. Drop old tempest CLIs in this Mitaka cycle when new CLIs will be
> > implemented.
> > # Actually, I'd like to just drop old CLIs. :)
>
> As would I, but I agree we need to worry about existing users out there and
> being nice to them.
>
> I think we should do option #1, but in [2] we also maintain the old entry
> point.
> We also have the function called by the old entry points emit a single
> warning
> that says to use the new CLI. This way we have the new entry points all
> setup
> and we indicate that everyone should move over to them.
>
> If you started with the tempest-verify-config script, you actually would
> fail because devstack currently uses it. So doing the switchover
> gracefully I
> think would be best, because this is the same issue users will potentially
> hit.
>

Thank you for clarifying.
Sure, it sounds reasonable to me. I'll try to maintain the old entry point,
too.
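The shim Matt describes (keep the old entry point, have it emit a
deprecation warning, then delegate to the new CLI) could be sketched like
this; the function names are illustrative stand-ins, not tempest's actual
code:

```python
import sys
import warnings


def new_verify_config(argv):
    # Stand-in for the new `tempest` CLI subcommand implementation.
    print("verifying tempest config with args: %s" % (argv,))
    return 0


def legacy_verify_config(argv=None):
    # Old console-script entry point (e.g. `tempest-verify-config`):
    # warn once, then delegate so existing workflows keep working.
    warnings.warn(
        "this command is deprecated; please use the new tempest CLI",
        DeprecationWarning)
    return new_verify_config(sys.argv[1:] if argv is None else argv)


if __name__ == "__main__":
    sys.exit(legacy_verify_config())
```

Keeping both entry points registered for a cycle would let devstack and
other consumers migrate before the old names are dropped.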

Best Regards,
-- Masayuki Igawa



>
> >
> > If you have question and/or opinion, please let me know.
> >
> > [1] https://etherpad.openstack.org/p/tempest-cli-improvements
> > [2] https://review.openstack.org/#/c/240399/
> >
>
> -Matt Treinish
>


Re: [openstack-dev] [Product] [nfv][telco][product] Future of telco working group and weekly meeting reminder

2015-11-18 Thread Shamail
Hi Steve,

> On Nov 18, 2015, at 6:39 AM, Steve Gordon  wrote:
> 
> Hi all,
> 
> Back in Vancouver [1] we began discussing the growing overlap between OPNFV 
> requirements projects and the current mission of the telco working group. 
> During this period the product working group, also in part focused on 
> recording and prioritizing user stories, has also been hitting its straps. As 
> we have recently lost a couple of core members of the telco working group 
> particularly on the technical side due to role changes etc. I think it is 
> time to roll its activities into these efforts.
> 
> With that in mind I would like to propose:
> 
> * Submitting the existing telcowg-usecases to the openstack-user-stories 
> repository
> * Engaging within the product working group (assuming they will have us ;)) 
> as owners of these user stories
This is a very similar model to what the enterprise WG did recently as well.  
Please let me know if I can do anything to help with the transition of the user 
stories.  
> 
> There is of course still a need to actually *implement* the requirements 
> exposed by these user stories but I am hopeful that together we can work 
> through a common process for this rather than continuing to attack it 
> separately. I would personally still like to see greater direct engagement 
> from service providers, but it seems like OPNFV and/or the OpenStack User 
> Committee [2] itself might be the right place for this going forward.
> 
> I'd like to discuss this proposal further in the weekly meeting. This week's 
> Telco Working Group meeting is scheduled for Wednesday the 18th at 1400 UTC. 
> As always the agenda etherpad is here:
> 
>   https://etherpad.openstack.org/p/nfv-meeting-agenda
Would it make sense for someone else (besides yourself :)) from the Product WG 
to join this session for Q&A as well?
> 
> Thanks all!
> 
> Steve
> 
> [1] https://etherpad.openstack.org/p/YVR-ops-telco
> [2] https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee
> 
> ___
> Product-wg mailing list
> product...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/product-wg



Re: [openstack-dev] [neutron][tap-as-a-service] weekly meeting

2015-11-18 Thread Irena Berezovsky
On Wed, Nov 18, 2015 at 8:31 AM, Takashi Yamamoto 
wrote:

> hi,
>
> On Thu, Nov 12, 2015 at 2:11 AM, Vikram Hosakote (vhosakot)
>  wrote:
> > Hi,
> >
> > TAAS looks great for traffic monitoring.
> >
> > Some questions about TAAS.
> >
> > 1) Can TAAS be used for provider networks as well, or just for tenant
> > networks ?
>
> currently only for VM ports on tenant networks.
>
> >
> > 2) Will there be any performance impact if every neutron port and every
> > packet is mirrored/duplicated ?
>
> i guess per-port impact is negligible.
> there's definitely per-packet impacts.
> i don't have any numbers though.
>
> >
> > 3) How is TAAS better than a non-mirroring approaches like
> packet-sniffing
> > (wireshark/tcpdump) and tracking interface counters/metrics ?
>
> i think taas is richer but probably slower than them.
>
> >
> > 4) Is TAAS a legal/lawful way to intercept/duplicate customer traffic in
> a
> > production cloud ? Or, TAAS is used just for debugging/troubleshooting ?
>
> although i'm not sure about legal/lawful requirements,
> i guess taas can be used for such purposes.
>

You can check this presentation for potential usage scenarios:


https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/tap-as-a-service-taas-port-monitoring-for-neutron-networks

>
> > I was not able to find answers for these questions in
> > https://etherpad.openstack.org/p/mitaka-neutron-unplugged-track.
> >
> > Thanks!
> >
> >
> > Regards,
> > Vikram Hosakote
> > vhosa...@cisco.com
> > Software Engineer
> > Cloud and Virtualization Group (CVG)
> > Cisco Systems
> > Boxborough MA USA
> >
> > From: Takashi Yamamoto 
> > Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Date: Tuesday, November 10, 2015 at 10:08 PM
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Subject: [openstack-dev] [neutron][tap-as-a-service] weekly meeting
> >
> > hi,
> >
> > tap-as-a-service meeting will be held weekly, starting today.
> > http://eavesdrop.openstack.org/#Tap_as_a_Service_Meeting
> > anyone interested in the project is welcome.
> > sorry for immediate notice.
> >
> >
> >
> >
> >
> >
>
>


Re: [openstack-dev] Is there any way I can completely erase all the data when deleting a cinder volume

2015-11-18 Thread Duncan Thomas
For the LVM and raw block device drivers, there is already an option to do
that - set volume_clear to 'zero' in cinder.conf

If you want this for other drivers, then the code could easily be adapted,
however I would question whether it is a good idea - the I/O load of
zeroing out volumes is very large, and can easily overshadow the other I/O
on the system significantly.

If you are using the LVM driver, I'd suggest investigating the thin
provisioning options, since they provide similar levels of tenant security
(though not disk disposal security) with far better performance.
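For reference, the setting Duncan mentions would look like the following
cinder.conf fragment. The volume_clear option comes straight from his
note; the volume_clear_size line is a related option included here as an
assumption, shown for completeness:

```ini
[DEFAULT]
# Overwrite volumes with zeros when they are deleted (LVM / block device)
volume_clear = zero
# Assumed companion option: MiB to clear on delete; 0 means the whole volume
volume_clear_size = 0
```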

On 18 November 2015 at 10:03, Young Yang  wrote:

>
> There are some sensitive data in my volume.
> I hope openstack can completely erase all the data (e.g. overwrite the
> whole volume with 0 bits) when deleting a cinder volume.
>
> I plan to write some code to make Openstack to mount that volume and
> rewrite the whole volume with 0 bits.
>
> But I'm wondering if there is any better way to accomplish that.
>
> Thanks in advance! :)
>
>
>
>
>
>


-- 
Duncan Thomas


Re: [openstack-dev] [keystone] HMT/Reseller - The Proposed Plan

2015-11-18 Thread Raildo Mascena
Hi Henry,

As you said before, this Reseller implementation is not simple, so this
plan sounds reasonable for me.

On Wed, Nov 18, 2015 at 9:53 AM Henry Nash 
wrote:

> Hi
>
> During our IRC meeting this week, we decided we needed a recap of this
> plan, so here goes:
>
> *Phase 0 (already merged in Liberty):*
>
> We already support a hierarchy of projects (using the parent_id attribute
> of the project entity). All the projects in a tree must be in the same
> domain. Role assignment inheritance is supported down the tree (either by
> assigning the role to the domain and have it inherited by the whole tree,
> or by assigning to a node in the project tree and having that assignment
> inherited by the sub-tree below that node).
>
> *Phase 1 (all code up for review for Mitaka):*
>
> Keep the existing conceptual model for domains, but store them as projects
> with a new attribute (is_domain=True).  The domain API remains (but
> accesses the project table instead), but you can also use the project API
> to create a domain (by setting is_domain=True). The is_domain attribute is
> immutable - i.e. you can’t flip a project between being a regular project
> and one acting as a domain. Projects acting as a domain have no parent
> (they are always top level domains/projects). Domain tokens can be
> deprecated in place of a project token scoped to a project acting as a
> domain (the token is augmented with the is_domain attribute so a policy
> rule can distinguish between a token on a domain and a regular project).
> This phase does not provide any support for resellers.
>
++ Since this phase does not provide reseller support, we will need Phase
2 anyway.
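As a concrete illustration of the Phase 1 mechanics, the project-API path
might look like this. The request-body shape (a project carrying
is_domain=True) follows the description above; the project name and the
helper function are placeholders, not a verified client call:

```python
import json


def domain_project_body(name):
    # A "project acting as a domain": is_domain is set at creation time
    # and is immutable afterwards, per the proposal.
    return {"project": {"name": name, "is_domain": True}}


# Body that would be POSTed to the v3 projects endpoint to create a
# top-level domain/project for a hypothetical reseller.
body = domain_project_body("reseller-a")
print(json.dumps(body, sort_keys=True))
# -> {"project": {"is_domain": true, "name": "reseller-a"}}
```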

>
> Phase 1 is covered by the two approved specs: HMT (
> https://review.openstack.org/#/c/139824, this actually covers Phase 1 and
> 2) and is_domain token (https://review.openstack.org/#/c/193543/)
>
> *Phase 2 (earlier versions of code were proposed for Liberty, need fixing
> up for Mitaka):*
>
> At the summit we agreed to re-examine Phase 2 to see if we could perhaps
> use federation instead to cover this use case. As outlined in my email to
> the list (
> http://lists.openstack.org/pipermail/openstack-dev/2015-October/078063.html),
> this does not provide the support required. Hence, as per that email, I am
> proposing we revert the original specification (with restrictions), which
> is as follows:
>
> Extended the concept of domains to allow a hierarchy of domains to support
> the reseller model (the requirements and specifications are in the same
> approved spec that covers Phase 1 above,
> https://review.openstack.org/#/c/139824). Given that we would already
> have Phase 1 in place, the actual changes in Phase 2 would be as follows:
>
> a) Since projects already have a parent_id attribute and domains are
> represented as projects with the is_domain attribute set to True, allow
> projects acting as domains to be nested (like regular projects).
> b) The parent of a project acting as a domain must either be another
> project acting as a domain or None (i.e. a root domain). A regular project
> cannot act as a parent for a project acting as a domain. In effect, we
> allow a hierarchy of domains at the top of “the tree” and all regular
> project trees hang off those higher level domains.
> c) Projects acting as domains cannot be the recipient of an inherited role
> assignment from their parent - i.e. we don’t inherit assignments between
> domains.
> d) All domain names (i.e. project names for projects acting as domains)
> must still be unique (otherwise we break our current auth model)
>
> Comments on the issues that people have raised with Phase 2:
>
> i) It’s too complex!
> Well, in terms of code changes, the majority of the changes are actually
> in Phase 0 and 1. We just use the underlying capabilities in Phase 2.
>
> ii) Our restriction of “all domain names must be unique” means that one
> reseller could “squat” on a domain name and prevent anyone else from having
> a domain of that name.
> This is absolutely true. In reality, however, reseller arrangements will
> be contractual, so if a cloud provider was really concerned about this,
> they could legislate against this in the contract with the reseller (e.g.
> "All the domains you create must be suffixed with your reseller name”). In
> addition, if we believe that federation will become the dominant auth
> model, then the physical name of the domain becomes less important.
>

> iii) What’s this about name clashing?
> Ok, so there are two different scenarios here:
> > Since, today, a project can have the same name as its domain, this
> means that when we convert to using projects acting as a domain, the same
> thing can be true (so this is really a consequence of Phase 1). Although we
> could, in theory, prevent this for any new domains being created, migrating
> over the existing domains could always create this situation. However, it
> doesn’t actually cause any issue to 

Re: [openstack-dev] [neutron][sfc] How could an L2 agent extension access agent methods ?

2015-11-18 Thread Andreas Scheuring
Hi all, 
I wonder if this is somehow in conflict with the modular l2 agent
approach I'm currently following up for linuxbridge, macvtap and sriov?
- RFE: [1]
- First patchset [2]

I don't think so, but to be sure I wanted to raise it up.




[1] https://bugs.launchpad.net/neutron/+bug/1468803
[2] https://review.openstack.org/#/c/246318/

-- 
Andreas
(IRC: scheuran)



On Mo, 2015-11-16 at 20:42 +, Cathy Zhang wrote:
> I have updated the etherpad to add "Overall requirement - API cross check 
> among the features that can manipulate the same flow's forwarding behavior".
> 
> Thanks,
> Cathy
> 
> -Original Message-
> From: Paul Carver [mailto:pcar...@paulcarver.us] 
> Sent: Monday, November 16, 2015 7:50 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron][sfc] How could an L2 agent extension 
> access agent methods ?
> 
> On 11/13/2015 7:03 PM, Henry Fourie wrote:
> 
> >
> >   I wonder whether just pushing flows into the existing tables at random 
> > points in time can be unstable and break the usual flow assumed by the main 
> > agent loop.
> > LF> No, I do not expect any issues.
> >
> > Am I making sense?
> >
> > [1] https://etherpad.openstack.org/p/l2-agent-extensions-api-expansion
> >
> 
> I attempted to describe a possible issue at the bottom of the Etherpad in the 
> bullet point "Overall requirement - Flow prioritization mechanism"
> 
> 
> 
> 
> 




Re: [openstack-dev] [keystone][stable] nominating lin hua cheng for keystone-stable-maint

2015-11-18 Thread Brant Knudson
On Tue, Nov 17, 2015 at 5:24 PM, Steve Martinelli 
wrote:

> I'd like to nominate Lin Hua Cheng for keystone-stable-maint. He has been
> doing reviews on keystone's liberty and kilo stable branches since mitaka
> development has opened, and being a member of horizon-stable-maint, he is
> already familiar with stable branch policies. If there are no objections
> from the current stable-maint-core and keystone-stable-maint teams, then
> I'd like to add him.
>
> Thanks,
>
> Steve Martinelli
> Keystone Project Team Lead
>
>
+1 from me. - Brant


Re: [openstack-dev] [Fuel][fuel] How can I install Redhat-OSP using Fuel

2015-11-18 Thread LUFei
Hi Igor and Vladimir,

Thank you guys for your help. And sorry for my late response; I was
preparing for my exams.

And another sorry for I have to put off this work.
My company has taken the suggestion of Steven and Tim, both from Red Hat,
to try OSP Director before making further decisions.

Thank you again.
Fei


> Date: Wed, 11 Nov 2015 13:14:21 -0800
> From: ikalnit...@mirantis.com
> To: vkuk...@mirantis.com
> CC: openstack-dev@lists.openstack.org; opnfv-tech-disc...@lists.opnfv.org
> Subject: Re: [openstack-dev] [Fuel][fuel] How can I install Redhat-OSP using 
> Fuel
>
> Hey Fei LU,
>
> Thanks for being interested in Fuel. I'll help you with pleasure.
>
> First of all, as Vladimir mentioned, you need to create a new release.
> That can be done with a POST request to /api/v1/releases/. You can use
> the CentOS release JSON with slight changes. When the release is created
> you need to do two things:
>
> 1. Prepare a provisioning image and make it available via Nginx. Please
> ensure you have the correct path to this image in your newly created
> RedHat release.
>
> 2. Populate RedHat release with deployment tasks. It could be done by
> executing the following command:
>
> fuel rel --sync-deployment-tasks --dir "/etc/puppet/{release-version}"
>
> I think most of the CentOS tasks should work fine on RedHat, though we
> didn't test it. If you meet any problems, please feel free to contact us
> via either this ML or the #fuel-dev IRC channel.
>
> Thanks,
> Igor
>
> On Wed, Nov 11, 2015 at 3:41 AM, Vladimir Kuklin  wrote:
>> Hi, Fei
>>
>> It seems you will need to do several things with Fuel - create a new
>> release, associate your cluster with it when creating it and provide paths
>> to corresponding repositories with packages. Also, you will need to create a
>> base image for Image-based provisioning. I am not sure we have all the 100%
>> of the code that supports it, but it should be possible to do so with some
>> additional efforts. Let me specifically refer to Fuel Agent team who are
>> working on Image-Based Provisioning and Nailgun folks who should help you
>> with figuring out patterns for repositories URLs configuration.
>>
>> On Tue, Nov 10, 2015 at 5:15 AM, Fei LU  wrote:
>>>
>>> Greeting Fuel teams,
>>>
>>>
>>> My company is working on the installation of virtualization
>>> infrastructure, and we have noticed Fuel is a great tool, much better than
>>> our own installer. The question is that Mirantis is currently supporting
>>> OpenStack on CentOS and Ubuntu, while my company is using Redhat-OSP.
>>>
>>> I have read all the Fuel documents, including fuel dev doc, but I haven't
>>> found the solution how can I add my own release into Fuel. Or maybe I'm
>>> missing something.
>>>
>>> So, would you guys please give some guide or hints?
>>>
>>> Appreciating any help.
>>> Kane
>>>
>>>
>>
>>
>>
>> --
>> Yours Faithfully,
>> Vladimir Kuklin,
>> Fuel Library Tech Lead,
>> Mirantis, Inc.
>> +7 (495) 640-49-04
>> +7 (926) 702-39-68
>> Skype kuklinvv
>> 35bk3, Vorontsovskaya Str.
>> Moscow, Russia,
>> www.mirantis.com
>> www.mirantis.ru
>> vkuk...@mirantis.com
>
  


Re: [openstack-dev] [ironic] [inspector] Auto discovery extension for Ironic Inspector

2015-11-18 Thread Dmitry Tantsur

I have to admit I forgot about this thread. Please find comments inline.

On 11/06/2015 05:25 PM, Bruno Cornec wrote:

Hello,

Pavlo Shchelokovskyy said on Tue, Nov 03, 2015 at 09:41:51PM +:

For auto-setting driver options on enrollment, I would vote for option 2
with default being fake driver + optional CMDB integration. This would
ease
managing a homogeneous pool of BMs, but still (using fake driver or data
from CMDB) work reasonably well in heterogeneous case.


Using fake driver means we need a manual step to set it to something 
non-fake :) and the current introspection process already has 1 manual 
step (enrolling nodes), so I'd like autodiscovery to require 0 of them 
(at least for the majority of users).




As for setting a random password, CMDB integration is crucial IMO. Large
deployments usually have some sort of it already, and it must serve as a
single source of truth for the deployment. So if inspector is changing
the
ipmi password, it should not only notify/update Ironic's knowledge on
that
node, but also notify/update the CMDB on that change - at least there
must
be a possibility (a ready-to-use plug point) to do that before we roll
out
such feature.


Well, if we have a CMDB, we probably don't need to set credentials. Or 
at least we should rely on the CMDB as a primary source. This "setting 
random password" thing is more about people without CMDB (aka using 
ironic as a CMDB ;). I'm not sure it's a compelling enough use case.


Anyway, it could be interesting to talk about some generic 
OpenStack-CMDB interface, which might something proposed below.




wrt interaction with CMDB, we have been investigating some ideas that
we have gathered at https://github.com/uggla/alexandria/wiki


Oh, that's interesting. I see some potential overlap with ironic and 
ironic-inspector. Would be cool to chat on it the next summit.




Some code has been written to try to model some of these aspects, but
having more contributors and patches to enhance that integration would
be great ! Similarly available at https://github.com/uggla/alexandria

We had planned to talk about these ideas at the previous OpenStack
summit but didn't get enough votes it seems. So now we are aiming at
presenting at the next one ;-)


+100, would love to hear.



HTH,
Bruno.





Re: [openstack-dev] [ironic] [inspector] Auto discovery extension for Ironic Inspector

2015-11-18 Thread Dmitry Tantsur

On 11/02/2015 05:07 PM, Sam Betts (sambetts) wrote:

Auto discovery is a topic which has been discussed a few times in the
past for Ironic, and it's interesting to solve because it's a bit of a
chicken and egg problem. The ironic inspector allows us to inspect nodes
that we don't know the mac addresses for yet. To do this we run a global
DHCP PXE rule that will respond to all mac addresses and PXE boot any
machine that requests it. This means it's possible for machines that we
haven't been asked to inspect to boot into the inspector ramdisk and send
their information to inspector's API. To prevent this data from being
processed further by inspector if it's a machine we shouldn't care about,
we do a node lookup. If the data fails this node lookup we used to drop
it and continue no further. In release 2.0.0 we added a hook point to
intercept this state, called the Node Not Found hook point, which allows
us to run some python code at this point in processing before failing and
dropping the inspection data. Something we've discussed as a use for this
hook point is enrolling a node that fails the lookup into Ironic, and
then having inspector continue to process the inspection data as we would
for any other node that had inspection requested for it. This allows us
to auto-discover unknown nodes into Ironic.


If this auto discovery hook was enabled, this would be the flow when
inspector receives inspection data from the inspector ramdisk:

- Run pre-process on the inspection data to sanitise it and ready it for
  the rest of the process

- Node lookup using fields from the inspection data:
  - If in the inspector node cache, return node info
  - If not in the inspector node cache but in the ironic node database,
    fail inspection because it's a known node and inspection hasn't been
    requested for it
  - If not in the inspector node cache or the ironic node database,
    enroll the node in ironic and return node info

- Process inspection data
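The lookup branches above could be sketched as follows; the class and
function names are illustrative stand-ins, not the real ironic-inspector
plugin API:

```python
class InspectionError(Exception):
    pass


class NodeStore:
    """Toy stand-in for the inspector node cache / ironic node database,
    keyed by MAC address."""

    def __init__(self):
        self.nodes = {}

    def lookup(self, info):
        return self.nodes.get(info["mac"])

    def enroll(self, info):
        # Auto-discovery: enroll the unknown node, e.g. with a default
        # driver taken from a hypothetical [autodiscovery] config section.
        node = {"mac": info["mac"], "driver": "pxe_ipmitool"}
        self.nodes[info["mac"]] = node
        return node


def handle_inspection_data(info, inspector_cache, ironic_db):
    """Apply the flow above to already pre-processed inspection data."""
    node = inspector_cache.lookup(info)
    if node is None:
        if ironic_db.lookup(info) is not None:
            # Known to ironic but inspection was never requested: fail.
            raise InspectionError("inspection not requested for this node")
        # Unknown everywhere: the node-not-found hook enrolls it.
        node = ironic_db.enroll(info)
    return node  # processing of the inspection data would continue here
```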


The remaining question for this idea is how to handle the driver settings
for each node that we discover; we've currently discussed 3 different
options:


1. Enroll the node in ironic using the fake driver, and leave it to the
operator to set the driver type and driver info before they move the node
from enroll to manageable.


I'm -1 to this because it requires a manual step. We already have a 
process requiring 1 manual step - inspection :) I'd like autodiscovery 
to turn it to 0.





2. Allow the default driver and driver info to be set in the ironic
inspector configuration file; this will be set on every node that is
auto discovered. Possible config file example:


[autodiscovery]
driver = pxe_ipmitool
address_field = 
username_field = 
password_field = 


This is my favorite one. We'll also need to provide the default user 
name/password. We can try to advance a node to MANAGEABLE state after 
enrolling it. If the default credentials don't work, node would stay in 
ENROLL state, and this will be a signal to an operator to check them.





3. A possibly vendor-specific option that was suggested at the summit was
to provide an ability to look up out-of-band credentials from an external
CMDB.


We already have an extension point for discovery. If we know more about 
CMDB interfaces, we can extend it, but it's already possible to use.





The first option is technically possible using the second option, by
setting the driver to fake and leaving the driver info blank.


+1




With IPMI-based drivers, most IPMI-related information can be retrieved
from the node by the inspector ramdisk; however, for non-IPMI drivers
such as the cimc/ucs drivers this information isn't accessible via an
in-band OS command.


A problem with option 2 is that it cannot account for a mixed-driver
environment.


We have also discussed that, for IPMI-based drivers, inspector could set
a newly generated random password on the freshly discovered node, the
idea being that fresh hardware often comes with a default password; if
you used inspector to discover it, it could set a unique password and
automatically make ironic aware of that.


We're throwing this idea out onto the mailer because we'd like to get
feedback from the community to see if this would be useful for people
using inspector, and to see if people have any opinions on what the right
way to handle the node driver settings is.


Yeah, I'm not decided on this one. Sounds cool but dangerous :)




Sam (sambetts)









Re: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken

2015-11-18 Thread Andrew Laski

On 11/17/15 at 07:43pm, Jay Pipes wrote:

On 11/17/2015 05:43 PM, Matt Riedemann wrote:

I found some time to work on a reverse sort of nova's tables for the db
archive command, which looks like [1]. It works fine in the unit tests,
but fails because the deleted instances are referenced by
instance_actions that aren't deleted. I don't see any DB APIs for
deleting instance actions.

Were we just planning on instance_actions living forever in the database?


Not as far as I understand.


They were never intended to live forever.  However there is a use case 
for holding on to the deleted action so that someone could query when or 
by whom their instance was deleted.  But the current API does not 
provide a good way to query for that so this may be something better 
left to the growing list of things that Tasks could address.





Should we soft delete instance_actions when we delete the referenced
instance?


No.


Or should we (hard) delete instance_actions when we archive (move to
shadow tables) soft deleted instances?


Yes.

Best,
-jay


This is going to be a blocker to getting nova-manage db
archive_deleted_rows working.

[1] https://review.openstack.org/#/c/246635/
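The ordering problem behind the "reverse sort of nova's tables" is the usual foreign-key one: a row can only be archived after every row referencing it has been archived. A toy sketch of that dependency-aware ordering follows; the table names and FK map are an illustrative fragment, not nova's real schema, and the function is not the code in [1]:

```python
def archive_order(deps):
    """Order tables so each table comes after every table that references it.

    deps maps table -> set of tables it holds foreign keys to.
    Uses a depth-first traversal: children (referencing tables) are
    emitted before the tables they point at.
    """
    order, seen = [], set()

    def visit(table):
        if table in seen:
            return
        seen.add(table)
        # Emit everything that points *at* this table first.
        for child, parents in deps.items():
            if table in parents:
                visit(child)
        order.append(table)

    for table in deps:
        visit(table)
    return order


# Illustrative fragment: instance_actions_events -> instance_actions -> instances
deps = {
    "instances": set(),
    "instance_actions": {"instances"},
    "instance_actions_events": {"instance_actions"},
}
order = archive_order(deps)
```

With this ordering, instance_actions rows are moved to the shadow tables before the instances rows they reference, which is exactly the constraint the archive command was tripping over.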



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [neutron][sfc] How could an L2 agent extension access agent methods ?

2015-11-18 Thread Ihar Hrachyshka

Andreas Scheuring  wrote:


Hi all,
I wonder if this is somehow in conflict with the modular l2 agent
approach I'm currently following up for linuxbridge, macvtap and sriov?
- RFE: [1]
- Frist patchset [2]

I don't think so, but to be sure I wanted to raise it up.


I don’t believe it’s in conflict, though generally I suggest you move
extensions code into modular l2 agent pieces, if possible. We will have
extensions enabled for lb, ovs, and sr-iov at the least in Mitaka.


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] no tempest tests are being run on -sahara jobs due to bad devstack plugin

2015-11-18 Thread Sean Dague
When attempting to figure out why a jumbo tempest job with heat / trove
/ sahara was skipping sahara test jobs, I discovered that the sahara
devstack plugin is wrong, and doesn't implement the required
'is_sahara_enabled' function. So tempest is never configured for it.

It hasn't been running any tempest tests ... for possibly ever -
http://logs.openstack.org/03/246603/1/check/gate-tempest-dsvm-sahara/84a451a/console.html

A fix is up here - https://review.openstack.org/#/c/246880 - but seems
blocked on all the other issues in this plugin. It would be really good
if we could get this merged and have the gate-tempest-dsvm-sahara test
do something besides burn nodes. :)

-Sean

-- 
Sean Dague
http://dague.net



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovs-dpdk]

2015-11-18 Thread James Page
Hi Sean

On Wed, Nov 18, 2015 at 12:30 PM, Mooney, Sean K 
wrote:

> Hi james
>
> Yes we are planning on testing the packaged release to see if it is
> compatible with our ml2 driver and the
>
> Changes we are submitting upstream. If it is we will add a use binary flag
> to our devstack plugin to skip the
>
> Compilation step and use that instead on 15.10 or 14.04
> cloud-archive:liberty
>

Excellent.


> As part of your packaging did ye fix pciutils to correctly report the
> unused drivers when an interface is bound to
>
> the dpdk driver? Also does it support both igb_uio and/or vfio-pci drivers
> for dpdk interfaces?
>

Re pcituils, we've not done any work in that area - can you give an example
of what you would expect?

The dpdk package supports both driver types in /etc/dpdk/interfaces - when
you declare an adapter for use, you get to specify the module you want to
use as well; we're relying on the in-tree kernel drivers (uio_pci_generic and
vfio-pci) right now.


>
>
> Anyway yes I hope to check it out and seeing what ye have done. When
> ovs-dpdk starts getting packaged in more operating systems
>
> We will probably swap our default to the binary install though we will
> keep the source install option as it allows us to work on new features
>
> Before they are packaged and to have better performance.
>

That sounds sensible; re 'better performance' - yeah, we do have to baseline
the optimizations at compile time right now (ssse3 only), but I
really hope that does change so that we can move to a runtime CPU feature
detection model, allowing the best possible performance through the
packages we have in Ubuntu (or any other distribution for that matter).
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][upgrade] new 'all things upgrade' subteam

2015-11-18 Thread Ihar Hrachyshka

Thanks everyone for responses.

It seems like Mon 15:00 UTC works for all of us, so I pushed the following  
patch to book #openstack-meeting-2 room weekly:


https://review.openstack.org/246949

Ihar

Rossella Sblendido  wrote:


Hi Ihar,

same for me, all options are ok!

cheers,

Rossella

On 11/12/2015 11:00 PM, Martin Hickey wrote:

Hi Ihar,

Any of those options would suit me, thanks.

Cheers,
Martin




From:   Ihar Hrachyshka 
To: "OpenStack Development Mailing List (not for usage questions)"
 
Date:   12/11/2015 21:39
Subject:Re: [openstack-dev] [neutron][upgrade] new 'all things
 upgrade'   subteam



Artur  wrote:


My TZ is UTC +1:00.
Do we have any favorite day? Maybe Tuesday?


I believe Tue is already too packed with IRC meetings to be considered (we
have, at the least, the main neutron meetings and neutron-drivers meetings
there).

We have folks in the US and Central Europe and Russia and Japan… I believe the
best time would be somewhere around 13:00 to 15:00 UTC (that time would
still be ‘before midnight' for Japan; afternoon for Europe, and morning for
US East Coast).

I have checked neutron meetings at [1], and I see that we have 13:00 UTC
slots free for all days; 14:00 UTC slot available for Thu; and 15:00 UTC
slots for Mon and Fri (I don’t believe we want to have it on Fri though).
Also overall Mondays are all free.

Should I create a doodle for those options? Or are there any alternative
suggestions?

[1]:
http://git.openstack.org/cgit/openstack-infra/irc-meetings/tree/meetings

Ihar




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][security] what is OK to put in DEBUG logs?

2015-11-18 Thread Devananda van der Veen
On Wed, Nov 18, 2015 at 9:48 AM, Ruby Loo  wrote:

> Hi,
>
> I think we all agree that it isn't OK to log credentials (like passwords)
> in DEBUG logs. However, what about other information that might be
> sensitive? A patch was recently submitted to log (in debug) the SWIFT
> temporary URL [1]. I agree that it would be useful for debugging, but since
> that temporary URL could be used (by someone that has access to the logs
> but no admin access to ironic/glance) eg for fetching private images, is it
> OK?
>
> Even though we say that debug shouldn't be used in production, we can't
> enforce what folks choose to do. And we know of at least one company that
> runs their production environment with the debug setting. Which isn't to
> say we shouldn't put things in debug, but I think it would be useful to
> have some guidelines as to what we can safely expose or not.
>
> I took a quick look at the security web page [2] but nothing jumped out at
> me wrt this issue.
>
> Thoughts?
>
> --ruby
>
> [1] https://review.openstack.org/#/c/243141/
> [2] https://security.openstack.org
>
>
In this context, the URL is a time-limited access code being used in place
of a password or keystone auth token to allow an unprivileged client
temporary access to a specific privileged resource, without granting that
client access to any other resources. In some cases, that resource might be
a public Glance image and so one might say, "oh, it's not _that_
sensitive". However, the same module being affected by [1] is also used by
the iLO driver to upload a temporary image containing sensitive
instance-specific data.
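For context on why the URL behaves like an access token: a Swift temporary URL embeds an HMAC-SHA1 signature computed over the method, expiry time, and object path, and anyone holding the complete URL can use it until it expires. A sketch of the standard temp-URL signature scheme (host name and key are placeholders):

```python
import hmac
import time
from hashlib import sha1


def temp_url_signature(key, method, path, expires):
    """Compute the Swift temp-URL HMAC-SHA1 signature over method/expiry/path."""
    body = "%s\n%s\n%s" % (method, expires, path)
    return hmac.new(key.encode(), body.encode(), sha1).hexdigest()


expires = int(time.time()) + 300  # valid for five minutes
path = "/v1/AUTH_account/container/object"
sig = temp_url_signature("secret-temp-url-key", "GET", path, expires)
url = "https://swift.example.com%s?temp_url_sig=%s&temp_url_expires=%s" % (
    path, sig, expires)
# Writing `url` to a DEBUG log hands this capability to anyone who can
# read the log file, with no further authentication required.
```

Nothing about the URL is tied to the requester, which is why logging it is equivalent to logging a short-lived bearer credential.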

I agree that it's not the same risk as exposing a password, but I still
consider this an access token, and therefore don't think it should be
written to log files, even at DEBUG.

-Deva
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Mesos Conductor

2015-11-18 Thread Hongbin Lu
Hi Bharath,

I agree with the "container" part. We can implement "magnum container-create .." for 
mesos bay in the way you mentioned. Personally, I don't like to introduce 
"apps" and "appgroups" resources to Magnum, because they are already provided 
by the native tool [1]. I couldn't see the benefit of implementing a wrapper API to 
offer what the native tool already offers. However, if you can point out a valid 
use case for wrapping the API, I will give it more thought.

Best regards,
Hongbin

[1] https://docs.mesosphere.com/using/cli/marathonsyntax/

From: bharath thiruveedula [mailto:bharath_...@hotmail.com]
Sent: November-18-15 1:20 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [magnum] Mesos Conductor

Hi all,

I am working on the blueprint 
[1]. As per my 
understanding, we have two resources/objects in mesos+marathon:

1)Apps: combination of instances/containers running on multiple hosts 
representing a service.[2]
2)Application Groups: Group of apps, for example we can have database 
application group which consists mongoDB app and MySQL App.[3]

So I think we need to have two resources, 'apps' and 'appgroups', in the mesos 
conductor like we have pod and rc for k8s. And regarding the 'magnum container' 
command, we can create, delete and retrieve container details as part of a mesos 
app itself (container = app with 1 instance). Though I think in the mesos case 
'magnum app-create ...' and 'magnum container-create ...' will use the same 
REST API for both cases.
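The "container = app with 1 instance" mapping could translate a container-create request into a Marathon app definition along these lines. The JSON field names follow Marathon's app format, but the helper itself is hypothetical, not Magnum code:

```python
import json


def container_to_marathon_app(name, image, cpus=0.25, mem=64.0):
    """Map a single-container request onto a Marathon app with one instance."""
    return {
        "id": name,
        "instances": 1,  # one instance == one container
        "cpus": cpus,
        "mem": mem,
        "container": {
            "type": "DOCKER",
            "docker": {"image": image},
        },
    }


app = container_to_marathon_app("web", "nginx:latest")
payload = json.dumps(app)  # body that would be POSTed to Marathon's /v2/apps
```

Under this sketch, `magnum container-create` and a future `magnum app-create` really would converge on the same REST call, differing only in whether `instances` is pinned to 1.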

Let me know your opinion/comments on this and correct me if I am wrong

[1]https://blueprints.launchpad.net/magnum/+spec/mesos-conductor.
[2]https://mesosphere.github.io/marathon/docs/application-basics.html
[3]https://mesosphere.github.io/marathon/docs/application-groups.html


Regards
Bharath T
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Mesos Conductor

2015-11-18 Thread Jay Lau
Just want to add more input on this topic; I was also thinking about how we
can unify the client interface for Magnum.

Currently, Kubernetes uses "kubectl create" to create all of its
objects, including pod, rc, service, pv, pvc, hpa etc, from yaml, json,
or stdin input; Marathon also uses yaml/json files to create
applications. In my understanding, it is difficult to unify the
concepts of all COEs, but at least many COEs seem to be trying to unify the
input and output: all using the same file format as input and producing the
same format as output.

It is a good signal for Magnum, and Magnum can leverage those features
to unify the client interface for different COEs, i.e. we can use "magnum
app create" to create pod, rc, service, pv, pvc, even marathon services etc.
Just some early thinking from my side...
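One way to read the "unify the input" idea: the client accepts one definition file and dispatches on the bay's COE. The sketch below is only an illustration of that dispatch shape; the handler names and return values are made up, not Magnum code:

```python
def create_resource(coe, definition):
    """Dispatch one 'app create' definition to the handler for the bay's COE."""
    handlers = {
        # k8s-style definitions carry a 'kind' (Pod, ReplicationController, ...)
        "kubernetes": lambda d: ("kubectl-style", d.get("kind", "Pod")),
        # Marathon-style definitions carry an app 'id'
        "mesos": lambda d: ("marathon-style", d.get("id")),
    }
    try:
        handler = handlers[coe]
    except KeyError:
        raise ValueError("unsupported COE: %s" % coe)
    return handler(definition)


k8s_result = create_resource("kubernetes", {"kind": "ReplicationController"})
mesos_result = create_resource("mesos", {"id": "web", "instances": 1})
```

The user-facing command stays the same; only the translation layer behind it varies per COE, which is the property the "magnum app create" proposal is after.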

Thanks!

On Thu, Nov 19, 2015 at 10:01 AM, Jay Lau  wrote:

> +1.
>
> One problem I want to mention is that for mesos integration, we cannot
> limited to Marathon + Mesos as there are many frameworks can run on top of
> Mesos, such as Chronos, Kubernetes etc, we may need to consider more for
> Mesos integration as there is a huge eco-system build on top of Mesos.
>
> On Thu, Nov 19, 2015 at 8:26 AM, Adrian Otto 
> wrote:
>
>> Bharath,
>>
>> I agree with Hongbin on this. Let’s not expand magnum to deal with apps
>> or appgroups in the near term. If there is a strong desire to add these
>> things, we could allow it by having a plugin/extensions interface for the
>> Magnum API to allow additional COE specific features. Honestly, it’s just
>> going to be a nuisance to keep up with the various upstreams until they
>> become completely stable from an API perspective, and no additional changes
>> are likely. All of our COE’s still have plenty of maturation ahead of them,
>> so this is the wrong time to wrap them.
>>
>> If someone really wants apps and appgroups, (s)he could add that to an
>> experimental branch of the magnum client, and have it interact with the
>> marathon API directly rather than trying to represent those resources in
>> Magnum. If that tool became popular, then we could revisit this topic for
>> further consideration.
>>
>> Adrian
>>
>> > On Nov 18, 2015, at 3:21 PM, Hongbin Lu  wrote:
>> >
>> > Hi Bharath,
>> >
>> > I agree the “container” part. We can implement “magnum container-create
>> ..” for mesos bay in the way you mentioned. Personally, I don’t like to
>> introduce “apps” and “appgroups” resources to Magnum, because they are
>> already provided by native tool [1]. I couldn’t see the benefits to
>> implement a wrapper API to offer what native tool already offers. However,
>> if you can point out a valid use case to wrap the API, I will give it more
>> thoughts.
>> >
>> > Best regards,
>> > Hongbin
>> >
>> > [1] https://docs.mesosphere.com/using/cli/marathonsyntax/
>> >
>> > From: bharath thiruveedula [mailto:bharath_...@hotmail.com]
>> > Sent: November-18-15 1:20 PM
>> > To: openstack-dev@lists.openstack.org
>> > Subject: [openstack-dev] [magnum] Mesos Conductor
>> >
>> > Hi all,
>> >
>> > I am working on the blueprint [1]. As per my understanding, we have two
>> resources/objects in mesos+marathon:
>> >
>> > 1)Apps: combination of instances/containers running on multiple hosts
>> representing a service.[2]
>> > 2)Application Groups: Group of apps, for example we can have database
>> application group which consists mongoDB app and MySQL App.[3]
>> >
>> > So I think we need to have two resources 'apps' and 'appgroups' in
>> mesos conductor like we have pod and rc for k8s. And regarding 'magnum
>> container' command, we can create, delete and retrieve container details as
>> part of mesos app itself(container =  app with 1 instance). Though I think
>> in mesos case 'magnum app-create ..."  and 'magnum container-create ...'
>> will use the same REST API for both cases.
>> >
>> > Let me know your opinion/comments on this and correct me if I am wrong
>> >
>> > [1]https://blueprints.launchpad.net/magnum/+spec/mesos-conductor.
>> > [2]https://mesosphere.github.io/marathon/docs/application-basics.html
>> > [3]https://mesosphere.github.io/marathon/docs/application-groups.html
>> >
>> >
>> > Regards
>> > Bharath T
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Thanks,
>
> Jay Lau (Guangya Liu)
>



-- 
Thanks,

Jay Lau (Guangya Liu)

[openstack-dev] [Neutron] Call for review focus

2015-11-18 Thread Armando M.
Hi Neutrites,

We are nearly two weeks away from the end of Mitaka 1.

I am writing this email to invite you to be mindful to what you review,
especially in the next couple of weeks. Whenever you have the time to
review code, please consider giving priority to the following:

   - Patches that target blueprints targeted for Mitaka;
   - Patches that target bugs that are either critical or high;
   - Patches that target rfe-approved 'bugs';
   - Patches that target specs that have followed the most current
     submission process;

Everything else should come later, no matter how easy or interesting it is
to review; remember that as a community we have the collective duty to work
towards a common (set of) target(s), as being planned in collaboration with
the Neutron Drivers team and the larger core team.

I would invite submitters to ensure that the Launchpad resources
(blueprints, and bug report) capture the most updated view in terms of
patches etc. Work with your approver to help him/her be focussed where it
matters most.

Finally, we had plenty of discussions at the design summit, and some of
those discussions will have to be followed up with actions (aka code in
OpenStack lingo). Even though we no longer have deadlines for feature
submission, I strongly advise you not to leave it to the last minute. We
can only handle so much work for any given release, and past experience
tells us that we can easily hit a breaking point at around the ~30
blueprint mark.

Once we reach it, it's likely we'll have to start pushing work out of
Mitaka to allow ourselves some slack; things are fluid as we all know, and
random gate breakage is always lurking round the corner! :)

Happy hacking,
Armando
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Mesos Conductor

2015-11-18 Thread Adrian Otto
Bharath,

I agree with Hongbin on this. Let’s not expand magnum to deal with apps or 
appgroups in the near term. If there is a strong desire to add these things, we 
could allow it by having a plugin/extensions interface for the Magnum API to 
allow additional COE specific features. Honestly, it’s just going to be a 
nuisance to keep up with the various upstreams until they become completely 
stable from an API perspective, and no additional changes are likely. All of 
our COE’s still have plenty of maturation ahead of them, so this is the wrong 
time to wrap them.

If someone really wants apps and appgroups, (s)he could add that to an 
experimental branch of the magnum client, and have it interact with the 
marathon API directly rather than trying to represent those resources in 
Magnum. If that tool became popular, then we could revisit this topic for 
further consideration.

Adrian

> On Nov 18, 2015, at 3:21 PM, Hongbin Lu  wrote:
> 
> Hi Bharath,
>  
> I agree the “container” part. We can implement “magnum container-create ..” 
> for mesos bay in the way you mentioned. Personally, I don’t like to introduce 
> “apps” and “appgroups” resources to Magnum, because they are already provided 
> by native tool [1]. I couldn’t see the benefits to implement a wrapper API to 
> offer what native tool already offers. However, if you can point out a valid 
> use case to wrap the API, I will give it more thoughts.
>  
> Best regards,
> Hongbin
>  
> [1] https://docs.mesosphere.com/using/cli/marathonsyntax/
>  
> From: bharath thiruveedula [mailto:bharath_...@hotmail.com] 
> Sent: November-18-15 1:20 PM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [magnum] Mesos Conductor
>  
> Hi all,
>  
> I am working on the blueprint [1]. As per my understanding, we have two 
> resources/objects in mesos+marathon:
>  
> 1)Apps: combination of instances/containers running on multiple hosts 
> representing a service.[2]
> 2)Application Groups: Group of apps, for example we can have database 
> application group which consists mongoDB app and MySQL App.[3]
>  
> So I think we need to have two resources 'apps' and 'appgroups' in mesos 
> conductor like we have pod and rc for k8s. And regarding 'magnum container' 
> command, we can create, delete and retrieve container details as part of 
> mesos app itself(container =  app with 1 instance). Though I think in mesos 
> case 'magnum app-create ..."  and 'magnum container-create ...' will use the 
> same REST API for both cases. 
>  
> Let me know your opinion/comments on this and correct me if I am wrong
>  
> [1]https://blueprints.launchpad.net/magnum/+spec/mesos-conductor.
> [2]https://mesosphere.github.io/marathon/docs/application-basics.html
> [3]https://mesosphere.github.io/marathon/docs/application-groups.html
>  
>  
> Regards
> Bharath T 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Report from Gerrit User Summit

2015-11-18 Thread David Pursehouse
On Wed, Nov 18, 2015 at 3:33 AM James E. Blair  wrote:

> Sean Dague  writes:
>
> > Given that it's on track for getting accepted upstream, it's probably
> > something we could cherry pick into our deploy, as it won't be a
> > divergence issue.
>
> Once it merges upstream, yes.
>
>
I haven't merged it yet because I'm waiting for feedback from Dave
Borowitz.  He had comments on the original patch that Khai put up for the
master branch.

There are also a few other fixes up for review on stable-2.11 so I'll
probably be doing another release when they all get merged.

I'll hold off until after you've done your upgrade though, in case anything
else crops up that needs to be fixed.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Tempest][Keystone] Re: [openstack-qa] [tempest][keystone]Inherit API was not found in API tempest.

2015-11-18 Thread koshiya maho
Hi Igawa-san,

Thanks for your reply.

On Wed, 18 Nov 2015 09:00:03 +
Masayuki Igawa  wrote:

> Hi Koshiya-san,
> 
> Actually, we don't use openstack-qa ML for discussion now. So I added
> openstack-dev ML to CC.

Oh...sorry. I deleted openstack-qa ML from this mail.

> 2015年11月18日(水) 16:31 koshiya maho :
> >
> > Hi all,
> >
> > I'm creating a tempest that conforms to operational environment
> > of OpenStack that we have built.
> >
> > So far as I've seen the API tempest in the community,
> > Inherit API of Keystone was not found.
> >
> > Does this exist somewhere? Other task is moving in relation to this?
> >
> > If not, I want to challenge to post a patch of this.
> 
> Do you mean this API?
> http://developer.openstack.org/api-ref-identity-v3-ext.html#identity_v3_OS-INHERIT-ext
> 
> I've found a client for OS-TRUST extension api
>  in tempest/services/identity/v3/json/identity_client.py but not about
> OS-INHERIT.
> So I think you can add OS-INHERIT api client and test then post it :)

Yes, I meant "OS-INHERIT".
I will start this work immediately.
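For reference, the OS-INHERIT calls such a client would wrap follow the URL layout in the identity v3 extension doc linked above. A sketch of just the URL construction (the helper is illustrative, not tempest code; a real client would issue PUT to assign, HEAD to check, and DELETE to revoke):

```python
def inherited_role_url(domain_id, user_id, role_id):
    """Build the OS-INHERIT URL that grants a user a role on a domain,
    inherited by all current and future projects in that domain."""
    return ("/OS-INHERIT/domains/%s/users/%s/roles/%s/inherited_to_projects"
            % (domain_id, user_id, role_id))


url = inherited_role_url("d1", "u1", "r1")
```

The tempest client method would pair this with the expected 204 response codes, following the pattern of the existing OS-TRUST client in identity_client.py.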

Thanks,


--
Maho Koshiya
NTT Software Corporation
E-Mail : koshiya.m...@po.ntts.co.jp



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] How to add a periodic check for typos?

2015-11-18 Thread Gareth
Hi stacker,

We could use third-party tools like topy:
pip install topy
topy -a 
git commit & git review

Here is an example: https://review.openstack.org/#/c/247261/

Could we have a periodic job, like the Jenkins jobs that update our requirements.txt?

-- 
Gareth

Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball
OpenStack contributor, kun_huang@freenode
My promise: if you find any spelling or grammar mistakes in my email
from Mar 1 2013, notify me
and I'll donate $1 or ¥1 to an open organization you specify.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Mesos Conductor

2015-11-18 Thread bharath thiruveedula
@hongbin, @adrian I agree with you. So can we go ahead with magnum 
container-create(delete) ... for mesos bay (which actually creates a 
mesos (marathon) app internally)?
@jay, yes, we have multiple frameworks which use the mesos lib. But the mesos bay 
we are creating uses marathon. And we had a discussion in irc on this topic, and 
I was asked to implement an initial version for marathon. And I agree with you to 
have a unified client interface for creating pod, app.
Regards
Bharath T

Date: Thu, 19 Nov 2015 10:01:35 +0800
From: jay.lau@gmail.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] Mesos Conductor

+1.

One problem I want to mention is that for mesos integration, we cannot be limited 
to Marathon + Mesos as there are many frameworks that can run on top of Mesos, such 
as Chronos, Kubernetes etc; we may need to consider more for Mesos integration 
as there is a huge eco-system built on top of Mesos.

On Thu, Nov 19, 2015 at 8:26 AM, Adrian Otto  wrote:
Bharath,



I agree with Hongbin on this. Let’s not expand magnum to deal with apps or 
appgroups in the near term. If there is a strong desire to add these things, we 
could allow it by having a plugin/extensions interface for the Magnum API to 
allow additional COE specific features. Honestly, it’s just going to be a 
nuisance to keep up with the various upstreams until they become completely 
stable from an API perspective, and no additional changes are likely. All of 
our COE’s still have plenty of maturation ahead of them, so this is the wrong 
time to wrap them.



If someone really wants apps and appgroups, (s)he could add that to an 
experimental branch of the magnum client, and have it interact with the 
marathon API directly rather than trying to represent those resources in 
Magnum. If that tool became popular, then we could revisit this topic for 
further consideration.



Adrian



> On Nov 18, 2015, at 3:21 PM, Hongbin Lu  wrote:

>

> Hi Bharath,

>

> I agree the “container” part. We can implement “magnum container-create ..” 
> for mesos bay in the way you mentioned. Personally, I don’t like to introduce 
> “apps” and “appgroups” resources to Magnum, because they are already provided 
> by native tool [1]. I couldn’t see the benefits to implement a wrapper API to 
> offer what native tool already offers. However, if you can point out a valid 
> use case to wrap the API, I will give it more thoughts.

>

> Best regards,

> Hongbin

>

> [1] https://docs.mesosphere.com/using/cli/marathonsyntax/

>

> From: bharath thiruveedula [mailto:bharath_...@hotmail.com]

> Sent: November-18-15 1:20 PM

> To: openstack-dev@lists.openstack.org

> Subject: [openstack-dev] [magnum] Mesos Conductor

>

> Hi all,

>

> I am working on the blueprint [1]. As per my understanding, we have two 
> resources/objects in mesos+marathon:

>

> 1)Apps: combination of instances/containers running on multiple hosts 
> representing a service.[2]

> 2)Application Groups: Group of apps, for example we can have database 
> application group which consists mongoDB app and MySQL App.[3]

>

> So I think we need to have two resources 'apps' and 'appgroups' in mesos 
> conductor like we have pod and rc for k8s. And regarding 'magnum container' 
> command, we can create, delete and retrieve container details as part of 
> mesos app itself(container =  app with 1 instance). Though I think in mesos 
> case 'magnum app-create ..."  and 'magnum container-create ...' will use the 
> same REST API for both cases.

>

> Let me know your opinion/comments on this and correct me if I am wrong

>

> [1]https://blueprints.launchpad.net/magnum/+spec/mesos-conductor.

> [2]https://mesosphere.github.io/marathon/docs/application-basics.html

> [3]https://mesosphere.github.io/marathon/docs/application-groups.html

>

>

> Regards

> Bharath T

> __

> OpenStack Development Mailing List (not for usage questions)

> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






-- 
Thanks,

Jay Lau (Guangya Liu)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev   

Re: [openstack-dev] How to add a periodic check for typos?

2015-11-18 Thread Matt Riedemann



On 11/18/2015 8:00 PM, Gareth wrote:

Hi stacker,

We could use some 3rd tools like topy:
 pip install topy
 topy -a 
 git commit & git review

Here is an example: https://review.openstack.org/#/c/247261/

Could we have a periodic job like Jenkins users updating our requirement.txt?



Let the list trolling begin!

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Autoscaling both clusters and containers

2015-11-18 Thread Jay Lau
Cool, thanks Ton!

On Thu, Nov 19, 2015 at 7:07 AM, Ton Ngo  wrote:

> The slides for the Tokyo talk are available on slideshare:
>
> http://www.slideshare.net/huengo965921/exploring-magnum-and-senlin-integration-for-autoscaling-containers
>
> Ton,
>
>
>
> From: Jay Lau 
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: 11/17/2015 10:05 PM
> Subject: Re: [openstack-dev] [magnum] Autoscaling both clusters and
> containers
> --
>
>
>
> It's great that we discuss this in mail list, I filed a bp here
> *https://blueprints.launchpad.net/magnum/+spec/two-level-auto-scaling*
> 
> and planning a spec for this. You can get some early ideas from what Ton
> pointed here:
> *https://www.openstack.org/summit/tokyo-2015/videos/presentation/exploring-magnum-and-senlin-integration-for-autoscaling-containers*
> 
>
> *@Ton*, is it possible that we publish the slides to slideshare? ;-)
>
> Our thinking was to introduce an autoscaler service to Magnum, just like what
> GCE is doing now; we will have you updated when a spec is ready for review.
>
> On Wed, Nov 18, 2015 at 1:22 PM, Egor Guz <*e...@walmartlabs.com*
> > wrote:
>
>Ryan
>
I haven’t seen any proposals/implementations from Mesos/Swarm (but I
am not following the Mesos and Swarm communities very closely these days).
>But Kubernetes 1.1 has pod autoscaling (
>
> *https://github.com/kubernetes/kubernetes/blob/master/docs/design/horizontal-pod-autoscaler.md*
>
> 
>),
>which should cover containers auto-scaling. Also there is PR for
>cluster auto-scaling (
>*https://github.com/kubernetes/kubernetes/pull/15304*
>), which
>has implementation for GCE, but OpenStack support can be added as well.
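The horizontal-pod-autoscaler design doc linked above boils down to a simple target-ratio rule; the arithmetic can be sketched like this (a simplified illustration of the rule from the design doc, not the actual Kubernetes code):

```python
import math


def desired_replicas(current_replicas, current_metric, target_metric):
    """Scale replicas so per-pod load approaches the target.

    Simplified form of the k8s horizontal-pod-autoscaler rule:
    desired = ceil(current_replicas * current_metric / target_metric).
    """
    if current_replicas == 0 or target_metric <= 0:
        return current_replicas
    return max(1, math.ceil(current_replicas * current_metric / target_metric))


# e.g. 4 pods averaging 90% CPU against a 60% target scale up to 6 pods
```

The same ratio works for any per-pod metric (CPU, requests per second), which is why the design doc treats the metric source as pluggable.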
>
>—
>Egor
>
>    From: Ton Ngo <t...@us.ibm.com>
>    Reply-To: "OpenStack Development Mailing List (not for usage
>    questions)" <openstack-dev@lists.openstack.org>
>    Date: Tuesday, November 17, 2015 at 16:58
>    To: "OpenStack Development Mailing List (not for usage questions)" <
>    openstack-dev@lists.openstack.org>
>Subject: Re: [openstack-dev] [magnum] Autoscaling both clusters and
>containers
>
>
>Hi Ryan,
>There was a talk in the last Summit on this topics to explore the
>options with Magnum, Senlin, Heat, Kubernetes:
>
>
>    https://www.openstack.org/summit/tokyo-2015/videos/presentation/exploring-magnum-and-senlin-integration-for-autoscaling-containers
>A demo was shown with Senlin interfacing to Magnum to autoscale.
>There was also a Magnum design session to discuss this same topics.
>The use cases are similar to what you describe. Because the subject is
>complex, there are many moving parts, and multiple teams/projects are
>involved, one outcome of the design session is that we will write a spec on
>autoscaling containers and cluster. A patch should be coming soon, so it
>would be great to have your input on the spec.
>Ton,
>
>
>    From: Ryan Rossiter <rlros...@linux.vnet.ibm.com>
>    To: openstack-dev@lists.openstack.org
>Date: 11/17/2015 02:05 PM
>Subject: [openstack-dev] [magnum] Autoscaling both clusters and
>containers
>
>
>
>
>
>Hi all,
>
>I was having a discussion with a teammate with respect 

Re: [openstack-dev] [magnum] Mesos Conductor

2015-11-18 Thread Kai Qiang Wu
@bharath,

1) If you mean using container-create/delete on a mesos bay for apps, I am
not sure how different the docker interface and the mesos interface would be.
One point: when you introduce that feature, please do not make the docker
container interface more complicated than it is now. I worry it would confuse
end-users more than the unified interface benefits them. (Maybe add an
optional parameter to pass one JSON file to create containers in mesos.)

2) For the unified interface, I think it needs more thought. We should not
burden end-users with learning new concepts or interfaces unless we can offer
a clearly better interface, and different COEs vary a lot. It is very
challenging.



Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   bharath thiruveedula 
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date:   19/11/2015 10:31 am
Subject:Re: [openstack-dev] [magnum] Mesos Conductor



@hongbin, @adrian I agree with you. So can we go ahead with magnum
container-create/delete ... for mesos bay (which actually creates a mesos
(marathon) app internally)?
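For illustration, that mapping — one single-instance Marathon app per container — could look roughly like this (a sketch only; the field names follow Marathon's app JSON, but the helper and its defaults are assumptions, not Magnum code):

```python
def marathon_app_from_container(name, image, instances=1,
                                cpus=0.25, mem=128.0):
    """Build a Marathon /v2/apps payload for a single Docker container."""
    return {
        "id": "/" + name.lstrip("/"),   # Marathon app ids are path-like
        "instances": instances,
        "cpus": cpus,
        "mem": mem,
        "container": {
            "type": "DOCKER",
            "docker": {"image": image, "network": "BRIDGE"},
        },
    }


# A mesos conductor could then POST this dict to the bay's Marathon
# endpoint, e.g. requests.post(marathon_url + "/v2/apps", json=app).
```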

@jay, yes, we have multiple frameworks that use the mesos lib, but the mesos
bay we are creating uses marathon. We had a discussion on IRC about this
topic, and I was asked to implement an initial version for marathon. I also
agree with you on having a unified client interface for creating pods/apps.

Regards
Bharath T

Date: Thu, 19 Nov 2015 10:01:35 +0800
From: jay.lau@gmail.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] Mesos Conductor

+1.

One problem I want to mention is that for mesos integration, we cannot be
limited to Marathon + Mesos, as there are many frameworks that can run on top
of Mesos, such as Chronos, Kubernetes, etc. We may need to consider more for
Mesos integration, as there is a huge ecosystem built on top of Mesos.

On Thu, Nov 19, 2015 at 8:26 AM, Adrian Otto 
wrote:
  Bharath,

  I agree with Hongbin on this. Let’s not expand magnum to deal with
  apps or appgroups in the near term. If there is a strong desire to
  add these things, we could allow it by having a plugin/extensions
  interface for the Magnum API to allow additional COE specific
  features. Honestly, it’s just going to be a nuisance to keep up with
  the various upstreams until they become completely stable from an API
  perspective, and no additional changes are likely. All of our COE’s
  still have plenty of maturation ahead of them, so this is the wrong
  time to wrap them.

  If someone really wants apps and appgroups, (s)he could add that to
  an experimental branch of the magnum client, and have it interact
  with the marathon API directly rather than trying to represent those
  resources in Magnum. If that tool became popular, then we could
  revisit this topic for further consideration.

  Adrian

  > On Nov 18, 2015, at 3:21 PM, Hongbin Lu 
  wrote:
  >
  > Hi Bharath,
  >
  > I agree the “container” part. We can implement “magnum
  container-create ..” for mesos bay in the way you mentioned.
  Personally, I don’t like to introduce “apps” and “appgroups”
  resources to Magnum, because they are already provided by native tool
  [1]. I couldn’t see the benefits to implement a wrapper API to offer
  what native tool already offers. However, if you can point out a
  valid use case to wrap the API, I will give it more thoughts.
  >
  > Best regards,
  > Hongbin
  >
  > [1] https://docs.mesosphere.com/using/cli/marathonsyntax/
  >
  > From: bharath thiruveedula [mailto:bharath_...@hotmail.com]
  > Sent: November-18-15 1:20 PM
  > To: openstack-dev@lists.openstack.org
  > Subject: [openstack-dev] [magnum] Mesos Conductor
  >
  > Hi all,
  >
  > I am working on the blueprint [1]. As per my understanding, we have
  two resources/objects in mesos+marathon:
  >
  > 1)Apps: combination of instances/containers running on multiple
  hosts representing a service.[2]
  > 2)Application Groups: Group of apps, for example we can have
  database application group which consists mongoDB app and MySQL
  App.[3]
  >
  > So I think we need to have two resources 'apps' and 'appgroups' in
  mesos conductor like we have pod and rc for k8s. And regarding
  'magnum container' 

Re: [openstack-dev] [magnum] Mesos Conductor

2015-11-18 Thread Jay Lau
+1.

One problem I want to mention is that for mesos integration, we cannot be
limited to Marathon + Mesos, as there are many frameworks that can run on top
of Mesos, such as Chronos, Kubernetes, etc. We may need to consider more for
Mesos integration, as there is a huge ecosystem built on top of Mesos.

On Thu, Nov 19, 2015 at 8:26 AM, Adrian Otto 
wrote:

> Bharath,
>
> I agree with Hongbin on this. Let’s not expand magnum to deal with apps or
> appgroups in the near term. If there is a strong desire to add these
> things, we could allow it by having a plugin/extensions interface for the
> Magnum API to allow additional COE specific features. Honestly, it’s just
> going to be a nuisance to keep up with the various upstreams until they
> become completely stable from an API perspective, and no additional changes
> are likely. All of our COE’s still have plenty of maturation ahead of them,
> so this is the wrong time to wrap them.
>
> If someone really wants apps and appgroups, (s)he could add that to an
> experimental branch of the magnum client, and have it interact with the
> marathon API directly rather than trying to represent those resources in
> Magnum. If that tool became popular, then we could revisit this topic for
> further consideration.
>
> Adrian
>
> > On Nov 18, 2015, at 3:21 PM, Hongbin Lu  wrote:
> >
> > Hi Bharath,
> >
> > I agree the “container” part. We can implement “magnum container-create
> ..” for mesos bay in the way you mentioned. Personally, I don’t like to
> introduce “apps” and “appgroups” resources to Magnum, because they are
> already provided by native tool [1]. I couldn’t see the benefits to
> implement a wrapper API to offer what native tool already offers. However,
> if you can point out a valid use case to wrap the API, I will give it more
> thoughts.
> >
> > Best regards,
> > Hongbin
> >
> > [1] https://docs.mesosphere.com/using/cli/marathonsyntax/
> >
> > From: bharath thiruveedula [mailto:bharath_...@hotmail.com]
> > Sent: November-18-15 1:20 PM
> > To: openstack-dev@lists.openstack.org
> > Subject: [openstack-dev] [magnum] Mesos Conductor
> >
> > Hi all,
> >
> > I am working on the blueprint [1]. As per my understanding, we have two
> resources/objects in mesos+marathon:
> >
> > 1)Apps: combination of instances/containers running on multiple hosts
> representing a service.[2]
> > 2)Application Groups: Group of apps, for example we can have database
> application group which consists mongoDB app and MySQL App.[3]
> >
> > So I think we need to have two resources 'apps' and 'appgroups' in mesos
> conductor like we have pod and rc for k8s. And regarding 'magnum container'
> command, we can create, delete and retrieve container details as part of
> mesos app itself(container =  app with 1 instance). Though I think in mesos
> case 'magnum app-create ..."  and 'magnum container-create ...' will use
> the same REST API for both cases.
> >
> > Let me know your opinion/comments on this and correct me if I am wrong
> >
> > [1]https://blueprints.launchpad.net/magnum/+spec/mesos-conductor.
> > [2]https://mesosphere.github.io/marathon/docs/application-basics.html
> > [3]https://mesosphere.github.io/marathon/docs/application-groups.html
> >
> >
> > Regards
> > Bharath T
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks,

Jay Lau (Guangya Liu)


Re: [openstack-dev] Learning to Debug the Gate

2015-11-18 Thread Mikhail Medvedev
We had a mini tutorial today in #openstack-infra, where Clark Boylan
explained how one can bring up an environment to debug logstash-2.0.
This is tangential to "debugging the gate", but it could still be useful
for better understanding the logstash/elasticsearch pipeline.

I condensed the conversation into a doc; excuse any grammar or
punctuation mistakes I missed:

http://paste.openstack.org/show/479346/

-- 
Mikhail Medvedev

On Tue, Nov 10, 2015, at 10:21, Sean Dague wrote:
> There is also a neutron issue which is causing a 35% failure rate in
> neutron jobs -
> http://tinyurl.com/ne3ex4v
> 
> That one still needs resolution.
> 
> On 11/10/2015 10:54 AM, Davanum Srinivas wrote:
> > Took about 35 mins or so :)
> > 
> > -- Dims
> > 
> > On Tue, Nov 10, 2015 at 10:45 AM, Matt Riedemann
> >  wrote:
> >>
> >>
> >> On 11/9/2015 3:54 PM, Anita Kuno wrote:
> >>>
> >>> On 11/05/2015 07:45 PM, Anita Kuno wrote:
> 
>  On 11/03/2015 05:30 PM, Anita Kuno wrote:
> >
> > On 11/02/2015 12:39 PM, Anita Kuno wrote:
> >>
> >> On 10/29/2015 10:42 PM, Anita Kuno wrote:
> >>>
> >>> On 10/29/2015 08:27 AM, Anita Kuno wrote:
> 
>  On 10/28/2015 12:14 AM, Matt Riedemann wrote:
> >
> >
> >
> > On 10/27/2015 4:08 AM, Anita Kuno wrote:
> >>
> >> Learning how to debug the gate was identified as a theme at the
> >> "Establish Key Themes for the Mitaka Cycle" cross-project session:
> >> https://etherpad.openstack.org/p/mitaka-crossproject-themes
> >>
> >> I agreed to take on this item and facilitate the process.
> >>
> >> Part one of the conversation includes referencing this video
> >> created by
> >> Sean Dague and Dan Smith:
> >> https://www.youtube.com/watch?v=fowBDdLGBlU
> >>
> >> Please consume this as you are able.
> >>
> >> Other suggestions for how to build on this resource were mentioned
> >> and
> >> will be coming in the future but this was an easy, actionable first
> >> step.
> >>
> >> Thank you,
> >> Anita.
> >>
> >>
> >> __
> >>
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> > https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/tales-from-the-gate-how-debugging-the-gate-helps-your-enterprise
> >
> >
> 
>  The source for the definition of "the gate":
> 
>  http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul/layout.yaml#n34
> 
>  Thanks for following along,
>  Anita.
> 
> >>>
> >>> This is the status page showing the status of our running jobs,
> >>> including patches in the gate pipeline:
> >>> http://status.openstack.org/zuul/
> >>>
> >>> Thank you,
> >>> Anita.
> >>>
> >>
> >> This is a simulation of how the gate tests patches:
> >> http://docs.openstack.org/infra/publications/zuul/#%2818%29
> >>
> >> Click in the browser window to advance the simulation.
> >>
> >> Thank you,
> >> Anita.
> >>
> >
> > Here is a presentation that uses the slide deck linked above, I
> > recommend watching: https://www.youtube.com/watch?v=WDoSCGPiFDQ
> >
> > Thank you,
> > Anita.
> >
> 
>  Three links in this edition of Learning to Debug the Gate:
> 
>  The view that tracks our top bugs:
>  http://status.openstack.org/elastic-recheck/
> 
>  The logstash queries that create the above view:
> 
>  http://git.openstack.org/cgit/openstack-infra/elastic-recheck/tree/queries
> 
>  Logstash itself, where you too can practice creating queries:
>  http://logstash.openstack.org
> 
>  Note: in logstash the query is the transferable piece of information.
>  Filters can help you create a query, they do not populate a query. The
>  information that is in the query bar is what is important here.
> 
>  Practice making some queries of your own.
> 
>  Thanks for reading,
>  Anita.
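To make the "query is the transferable piece" point concrete: elastic-recheck-style queries are plain query strings over fields like message, tags, and build_status. This one is made up for illustration (it does not correspond to a tracked bug):

```
message:"Timed out waiting for reply" AND tags:"screen-n-cpu.txt" AND build_status:"FAILURE"
```

Pasting exactly that string into the logstash query bar is what makes it shareable; the interactive filters only help you arrive at it.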
> 
> >>>
> >>> Today the elastic search cluster was upgraded to 1.7.3:
> >>>
> >>> http://lists.openstack.org/pipermail/openstack-dev/2015-November/078314.html
> >>>
> >>> Go over the 1.7 elastic search docs:
> >>> https://www.elastic.co/guide/en/elasticsearch/reference/1.7/index.html
> >>> and try a few queries.
> >>>
> >>> Thanks for following along,
> >>> Anita.
> >>>
> >>> __
> >>> OpenStack Development 

[openstack-dev] [release] short pause/delay in new releases

2015-11-18 Thread Doug Hellmann
Most of the members of the release team are off or traveling over the
next couple of days, so anticipate delays with new releases until early
next week. Sorry for any inconvenience this causes.

Doug



Re: [openstack-dev] [lbaas] [octavia] German + Brandon stepping down, Call for Candidates, and No Meeting 11/25

2015-11-18 Thread Eichberger, German
All,

Brandon and I have decided to step down as Octavia Sub-Team-Leads (STL) [1] and 
we want to thank everybody who helped make Octavia the project it is today. 
Brandon will be dedicating more of his time to other parts of Neutron and I 
will be splitting my time between FWaaS, Octavia, and internal projects. 

Doug (cc’d )has graciously agreed to hold the election and will send out 
details on voting in due time. We are encouraging everyone who wants to be 
considered to submit his/her candidacy to the ML. The Octavia team has a deep 
talent pool and two great candidates Michael and Stephen already stepped 
forward [1]. We are excited to work with the new PTL in the future.

Lastly, due to the Thanksgiving Holidays we are skipping next week meeting.

Happy Holidays,
German + Brandon

[1] 
http://eavesdrop.openstack.org/meetings/octavia/2015/octavia.2015-11-18-20.00.log.html


Re: [openstack-dev] [Horizon] Redundant checks in form values

2015-11-18 Thread Akihiro Motoki
In the current AddRule form, validate_port_range is used as a validator,
so values from 1 to 65535 are accepted.
I think this is the reason _clean_rule_icmp checks the exact value range.
You can improve it as you see fit.

bug 1511748 is a regression from the validate_port_range change in
https://review.openstack.org/#/c/116508/.
Previously it accepted -1, which we could use both for TCP/UDP and ICMP,
but after that change it can no longer be applied to ICMP.

IMO we need to use separate validators for TCP/UDP and ICMP.
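For illustration, separate validators might look like this (a sketch in plain Python; Horizon's real validators raise Django's ValidationError rather than ValueError, and the -1 wildcard handling here follows the behaviour described above, not the current code):

```python
def validate_port_range(port):
    """TCP/UDP ports: 0..65535, with -1 historically meaning 'all'."""
    if not -1 <= port <= 65535:
        raise ValueError("port must be in range -1..65535, got %d" % port)


def validate_icmp_type_code(value):
    """ICMP type and code each fit in one octet: 0..255, -1 as wildcard."""
    if not -1 <= value <= 255:
        raise ValueError("ICMP type/code must be in range -1..255, got %d"
                         % value)
```

With two validators, a value like 256 is rejected for ICMP even though it is a perfectly valid TCP port, which is exactly the distinction bug 1511748 is about.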

Akihiro



2015-11-19 2:05 GMT+09:00 Suraj Deshmukh :
> In file [1], in `class AddRule`, in method `_clean_rule_icmp`, why are checks
> performed on whether `icmp_type` or `icmp_code` is `None` or in the range -1
> to 255?
>
> What I mean here is that in `class AddRule`, while values are being accepted
> from form data and stored, validators do their job of checking whether
> each field is valid, so why is there a redundant check in method
> `_clean_rule_icmp`?
>
> Please correct me if I am wrong in understanding anything. Currently I am
> working on Bug #1511748 [2]. Previously, while checking the validity of
> `icmp_type` and `icmp_code`, the tcp_ports functionality was used. This is
> wrong because TCP ports have a range of 0 to 65535, while `icmp_type` and
> `icmp_code` have a range of 0 to 255.
>
> So now oslo_utils.netutils has dedicated functionality to check if
> `icmp_type` and `icmp_code` are valid here is a recent code merger [3].
>
> So I was trying to add this newly added functionality into Horizon but the
> test cases run older code and hence needed help in getting my head around
> with the source.
>
>
> [1]
> openstack_dashboard/dashboards/project/access_and_security/security_groups/forms.py
> [2] https://bugs.launchpad.net/horizon/+bug/1511748
> [3] https://review.openstack.org/#/c/240661/
>
> Thanks and Regards
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



Re: [openstack-dev] [magnum] Autoscaling both clusters andcontainers

2015-11-18 Thread Ton Ngo
The slides for the Tokyo talk are available on slideshare:
http://www.slideshare.net/huengo965921/exploring-magnum-and-senlin-integration-for-autoscaling-containers

Ton,




From:   Jay Lau 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   11/17/2015 10:05 PM
Subject:Re: [openstack-dev] [magnum] Autoscaling both clusters and
containers



It's great that we discuss this on the mailing list. I filed a bp here
https://blueprints.launchpad.net/magnum/+spec/two-level-auto-scaling and
am planning a spec for this. You can get some early ideas from what Ton
pointed to here:
https://www.openstack.org/summit/tokyo-2015/videos/presentation/exploring-magnum-and-senlin-integration-for-autoscaling-containers


@Ton, is it possible that we publish the slides to slideshare? ;-)

Our thinking was to introduce an autoscaler service to Magnum just like what
GCE is doing now; I will keep you updated when a spec is ready for review.

On Wed, Nov 18, 2015 at 1:22 PM, Egor Guz  wrote:
  Ryan

  I haven’t seen any proposals/implementations from Mesos/Swarm (but I am
  not following the Mesos and Swarm communities very closely these days).
  But Kubernetes 1.1 has pod autoscaling (
  https://github.com/kubernetes/kubernetes/blob/master/docs/design/horizontal-pod-autoscaler.md
  ), which should cover containers auto-scaling. Also there is a PR for cluster
  auto-scaling (https://github.com/kubernetes/kubernetes/pull/15304), which
  has an implementation for GCE, but OpenStack support can be added as well.

  —
  Egor

  From: Ton Ngo <t...@us.ibm.com>
  Reply-To: "OpenStack Development Mailing List (not for usage questions)"
  <openstack-dev@lists.openstack.org>
  Date: Tuesday, November 17, 2015 at 16:58
  To: "OpenStack Development Mailing List (not for usage questions)" <
  openstack-dev@lists.openstack.org>
  Subject: Re: [openstack-dev] [magnum] Autoscaling both clusters and
  containers


  Hi Ryan,
  There was a talk in the last Summit on this topics to explore the options
  with Magnum, Senlin, Heat, Kubernetes:
  
https://www.openstack.org/summit/tokyo-2015/videos/presentation/exploring-magnum-and-senlin-integration-for-autoscaling-containers

  A demo was shown with Senlin interfacing to Magnum to autoscale.
  There was also a Magnum design session to discuss this same topics. The
  use cases are similar to what you describe. Because the subject is
  complex, there are many moving parts, and multiple teams/projects are
  involved, one outcome of the design session is that we will write a spec
  on autoscaling containers and cluster. A patch should be coming soon, so
  it would be great to have your input on the spec.
  Ton,


  From: Ryan Rossiter 
  To: openstack-dev@lists.openstack.org
  Date: 11/17/2015 02:05 PM
  Subject: [openstack-dev] [magnum] Autoscaling both clusters and
  containers

  



  Hi all,

  I was having a discussion with a teammate with respect to container
  scaling. He likes the aspect of nova-docker that allows you to scale
  (essentially) infinitely almost instantly, assuming you are using a
  large pool of compute hosts. In the case of Magnum, if I'm a container
  user, I don't want to be paying for a ton of vms that just sit idle, but
  I also want to have enough vms to handle my scale when I infrequently
  need it. But above all, when I need scale, I don't want to suddenly have
  to go boot vms and wait for them to start up when I really need it.

  I saw [1] which discusses container scaling, but I'm thinking we can
  take this one step further. If I don't want to pay for a lot of vms when
  I'm not using them, could I set up an autoscale policy that allows my
  cluster to expand when my container concentration gets too high on my
  existing cluster? It's kind of a case of nested autoscaling. The
  containers are scaled based on request demand, and the cluster vms are
  scaled based on container count.
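The nested policy sketched here reduces to a simple outer-loop rule; a back-of-the-envelope illustration (purely hypothetical — not Heat, Senlin, or Magnum code, and the names and bounds are assumptions):

```python
import math


def desired_cluster_size(container_count, containers_per_vm,
                         min_vms=1, max_vms=10):
    """Outer loop: size the bay's VM cluster from the container count.

    Containers scale on request demand (inner loop); the cluster of VMs
    then scales on container count (outer loop), clamped to a min/max.
    """
    need = int(math.ceil(container_count / float(containers_per_vm)))
    return max(min_vms, min(max_vms, need))


# 25 containers at 8 per VM -> 4 VMs; 0 containers -> stay at min_vms
```

A Heat autoscaling group could evaluate essentially this rule via an alarm on container density, which is why wiring it through the bay templates looks feasible.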

  I'm unsure of the details of Senlin, but at least looking at Heat
  autoscaling [2], this would not be very hard to add to the Magnum
  templates, and we would forward those on through the bay API. (I figure
  we would do this through the bay, not baymodel, because I can see
  similar clusters that would want to be scaled differently).

  Let me know if I'm totally crazy or if this is a good idea (or if you
  guys have already talked about this before). I would be interested in
  your feedback.

  [1]
  http://lists.openstack.org/pipermail/openstack-dev/2015-November/078628.html

  [2] 

Re: [openstack-dev] [stable][neutron] How we handle Kilo backports

2015-11-18 Thread Tony Breeds
On Wed, Nov 18, 2015 at 05:44:38PM +0100, Ihar Hrachyshka wrote:
> Hi all,
> 
> as per [1] I imply that all projects under stable-maint-core team
> supervision must abide the stable policy [2] which limits the types of
> backports for N-2 branches (now it’s stable/kilo) to "Only critical bugfixes
> and security patches”. With that, I remind all stable core members about the
> rule.
> 
> Since we are limited to ‘critical bugfixes’ only, and since there is no
> clear definition of what ‘critical’ means, I guess we should define it for
> ourselves.
> 
> In Neutron world, we usually use Critical importance for those bugs that
> break gate. High is used for those bugs that have high impact production
> wise. With that in mind, I suggest we define ‘critical’ bugfixes as Critical
> + High in LP. Comments on that?

So I'm not a core, but I check the severity of the bug and query the review
owner if it is below High.  My rationale is that sometimes bugs are
mis-classified; someone took the time to backport the fix, so it's critical
to that person if not to the project.

Note that this doesn't mean they'll get in, but it facilitates the discussion.

Anyway we can iterate on this: https://review.openstack.org/247229
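The definition proposed in this thread is mechanical enough to state directly; a toy sketch of the rule (the stable policy itself lives in [2] — this function is only an illustration of "critical = LP Critical or High" for N-2 branches):

```python
# N-1 branches take any appropriate bugfix; N-2 branches (stable/kilo
# at the time of this thread) take only critical bugfixes and security
# patches, with "critical" read as Launchpad Critical or High.
CRITICAL_IMPORTANCES = {"Critical", "High"}


def backport_allowed(branch_age, lp_importance, is_security_fix=False):
    """branch_age: 1 for stable/<N-1>, 2 for stable/<N-2>."""
    if is_security_fix:
        return True
    if branch_age <= 1:  # normal bugfix window
        return True
    return lp_importance in CRITICAL_IMPORTANCES
```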

Yours Tony.




Re: [openstack-dev] [ironic][tempest]Tempest test plugin and microversions

2015-11-18 Thread Ken'ichi Ohmichi
Hi,

Nice description of our situation.
Yes, the qa-spec for microversion testing is already approved.
But as you said, we need time for the implementation.

Your migration patch for ironic doesn't seem hard to merge, I feel. So
merging the migration patch first is also a nice option.
Technically, the microversion testing framework will be merged into
tempest-lib at an early stage, and then we can share it across many projects.
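In practice, pinning a microversion in a test request comes down to one header; a minimal sketch (the header name is Ironic's real API-version header, but the helper itself is hypothetical, not from the framework in [2]):

```python
def ironic_request_headers(microversion=None):
    """Build headers for an Ironic API call, optionally pinned.

    Without a pin, the server falls back to its default (minimum)
    microversion.
    """
    headers = {"Accept": "application/json"}
    if microversion:
        headers["X-OpenStack-Ironic-API-Version"] = microversion
    return headers


# A microversion-gated test would pin e.g. "1.6" and skip itself when
# the server's advertised min/max range does not include that version.
```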

Thanks

On Wed, Nov 18, 2015 at 6:54 PM, Yuiko Takada  wrote:

> Hi Ironic folks,
>
> As we discussed in Design Summit, we will move forward with tempest test
> tasks.
> I've posted a patch for tempest test plugin interface [1]
> (Now it fails because of flake8-ignore-difference, anyway).
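For context, the tempest plugin interface is wired up through a setuptools entry point in the project carrying the tests; roughly like this (the module path and class name are illustrative assumptions, not necessarily what [1] uses):

```ini
# setup.cfg fragment (sketch)
[entry_points]
tempest.test_plugins =
    ironic_tests = ironic_tempest_plugin.plugin:IronicTempestPlugin
```

Tempest discovers the plugin via that entry point, and the plugin class tells it where to load tests and config options from.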
>
> Then, I'd like to discuss about our tests migration procedure.
> As you know, we are also working for Tempest microversions, so
> our tests in Tempest need to be fixed for working with microversions.
> Its spec has been approved and now the microversion testing framework has
> been posted [2].
> IMO, tests in Tempest should be fixed before moving into Ironic tree,
> but maybe [2] will take long time to be merged.
> What do you think?
>
> [1] https://review.openstack.org/#/c/246161/
> [2] https://review.openstack.org/#/c/242346/
>
>
> Best Regards,
> Yuiko Takada
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


[openstack-dev] [openstack][openstack-operators] IRC meeting(s)

2015-11-18 Thread JJ Asghar

Hey everyone!

I've been seeing some interesting regressions in our community
recently. Our IRC meetings, or at least the ones I've been attending,
are extremely variable in quality. I believe this is a disservice to
our community as a whole, so I've offered to help.

I've written up a blog post[1] on some general characteristics of IRC
meetings that seem to come out with the most fruitful conversations.

I hope everyone finds something useful from this post, and hopefully
will help educate new comers to our community in some good working
practices.

I'd love to talk about this more, so please don't hesitate to reach out.

[1]:
http://jjasghar.github.io/blog/2015/11/18/characteristics-of-a-successful-chatroom-meeting/
-- 
Best Regards,
JJ Asghar
c: 512.619.0722 t: @jjasghar irc: j^2



Re: [openstack-dev] [nova][infra] Getting a bleeding edge libvirt gate job running

2015-11-18 Thread Tony Breeds
On Wed, Nov 18, 2015 at 11:46:40AM +, Daniel P. Berrange wrote:
> On Wed, Nov 18, 2015 at 05:18:28PM +1100, Tony Breeds wrote:
> > On Tue, Nov 17, 2015 at 03:32:45PM -0800, Jay Pipes wrote:
> > > On 11/17/2015 11:10 AM, Markus Zoeller wrote:
> > > >Background
> > > >==
> > > >The blueprint [1] wants to utilize the *virtlogd* logging deamon from
> > > >libvirt. Among others to solve bug [2], one of our oldest ones. The
> > > >funny part is, that this libvirt feature is still in development. This
> > > >was a trigger to see if we can create a gate job which utilizes the
> > > >latest, bleeding edge, version of libvirt to test such features. We
> > > >discussed it shortly in IRC [3] (tonyb, bauzas, markus_z) and wanted to
> > > >get some feedback here. The summary of the idea is:
> > > >* Create a custom repo which contains the latest libvirt version
> > > >* Enhance Devstack so that it can point to a custom repo to install
> > > >   the built libvirt packages
> > > >* Have a nodepool image which is compatible with the libvirt packages
> > > >* In case of [1]: check if tempest needs further/changed tests
> > > >
> > > >Open questions
> > > >==
> > > >* Is already someone working on something like that and I missed it?
> > > 
> > > Sean (cc'd) might have some information on what he's doing in the OVS w/
> > > DPDK build environment, which AFAIK requires a later build of libvirt than
> > > available in most distros.
> > > 
> > > >* If 'no', is there already a repo which contains the very latest
> > > >   libvirt builds which we can utilize?
> > > >
> > > >I haven't done anything with the gates before, which means there is a
> > > >very high chance I'm missing something or missunderstanding a concept.
> > > >Please let me know what you think.
> > > 
> > > A generic "build libvirt or OVS from this source repo" dsvm job would be
> > > great I think. That would allow overrides in ENV variables to point the job
> > > to a URI for grabbing sources of OVS (DPDK OVS, mainline OVS) or libvirt
> > > that would be built into the target nodepool images.
> > 
> > I was really hoping to decouple the build from the dsvm jobs.  My initial
> > thoughts were to add a devstack plugin that adds $repo and then upgrades
> > $packages.  I wanted to decouple the build from the install, as I assumed
> > that the delays in building libvirt (etc.) would be problematic *and* provide
> > another failure mode for devstack that we really don't want to deal with.
> > 
> > I was only thinking of having libvirt and qemu in there but if the plug-in 
> > was
> > abstract enough it could easily provide packages for other help utils (like 
> > OVS
> > and DPDK).
> > 
> > When I started looking at this Ubuntu was the likely candidate as Fedora in 
> > the gate
> > wasn't really a stable thing.  I see a little more fedora in nodepool so 
> > perhaps a
> > really quick win would be to just use the virt-preview repo on F22.
> 
> Trying to build from bleeding edge is just a can of worms as you'll need to
> have someone baby-sitting the job to fix it up on new releases when the
> list of build deps changes or build options alter. As an example, next
> QEMU release will require you to pull in 3rd party libcacard library
> for SPICE build, since it was split out, so there's already a build
> change pending that would cause a regression in the gate.

Right, that's why I wanted to decouple the build from the gate.  Releases don't
happen that often, so if virt-preview/UCA isn't appropriate for some reason I
can easily dedicate a day/project/release to build the packages.
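The plugin approach sketched above could be wired into a job via devstack's local.conf. A hypothetical fragment follows — the plugin name, repo URL, and variable names are invented for illustration; no such plugin exists yet:

```ini
[[local|localrc]]
# Hypothetical plugin that adds a package repo and upgrades the listed
# packages before the rest of the stack comes up.
enable_plugin libvirt-latest https://example.org/devstack-plugin-libvirt-latest
# Repo holding pre-built bleeding-edge libvirt/qemu packages, so no
# compilation happens inside the gate job itself.
LIBVIRT_LATEST_REPO=https://example.org/repos/libvirt-latest
LIBVIRT_LATEST_PACKAGES="libvirt-daemon qemu-system-x86"
```

Pre-building the packages outside the job keeps the build failure modes (changing build deps, new upstream releases) out of devstack, which is exactly the decoupling argued for above.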
 
> So, my recommendation would really be to just use Fedora with virt-preview
> for the bleeding edge and avoid trying to compile stuff in the gate. The
> virt-preview repository tracks upstream releases of QEMU+Libvirt+libguestfs
> with minimal delay and is built with the same configuration as future Fedora
> releases will use. So such testing is good evidence that Nova won't break on
> the next Fedora release.

Right that was more or less my motivation.

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][security] what is OK to put in DEBUG logs?

2015-11-18 Thread Morgan Fainberg
On Nov 18, 2015 13:52, "Devananda van der Veen" 
wrote:
>
>
> On Wed, Nov 18, 2015 at 9:48 AM, Ruby Loo  wrote:
>>
>> Hi,
>>
>> I think we all agree that it isn't OK to log credentials (like
passwords) in DEBUG logs. However, what about other information that might
be sensitive? A patch was recently submitted to log (in debug) the Swift
temporary URL [1]. I agree that it would be useful for debugging, but since
that temporary URL could be used (by someone who has access to the logs
but no admin access to ironic/glance), e.g. for fetching private images, is
it OK?
>>
>> Even though we say that debug shouldn't be used in production, we can't
enforce what folks choose to do. And we know of at least one company that
runs their production environment with the debug setting. Which isn't to
say we shouldn't put things in debug, but I think it would be useful to
have some guidelines as to what we can safely expose or not.
>>
>> I took a quick look at the security web page [2] but nothing jumped out
at me wrt this issue.
>>
>> Thoughts?
>>
>> --ruby
>>
>> [1] https://review.openstack.org/#/c/243141/
>> [2] https://security.openstack.org
>>
>
> In this context, the URL is a time-limited access code being used in
place of a password or keystone auth token to allow an unprivileged client
temporary access to a specific privileged resource, without granting that
client access to any other resources. In some cases, that resource might be
a public Glance image and so one might say, "oh, it's not _that_
sensitive". However, the same module being affected by [1] is also used by
the iLO driver to upload a temporary image containing sensitive
instance-specific data.
>
> I agree that it's not the same risk as exposing a password, but I still
consider this an access token, and therefore don't think it should be
written to log files, even at DEBUG.
>
> -Deva
>
>

Also keep in mind that DEBUG logging, because it is explicitly called out
(or should be) as not safe for production, can contain some "sensitive"
data, though it should still mask some of it. Credentials should still be
scrubbed, but I would say the Swift temp URL is something that may fit
this more flexible level of log filtering.

Now, if the service (and I don't think ironic suffers from this issue) is
only really runnable with debug on (because there is no useful information
otherwise) then I would aim to fix that before putting even potentially
sensitive data in DEBUG.

The simple choice is: if there is even a question, don't log it (or log it
in a way that obscures the data but still shows unique use).
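A minimal sketch of that "obscure but still unique" idea in Python: replace the sensitive signature value with a short digest so operators can still correlate log lines. The temp_url_sig/temp_url_expires parameter names follow Swift's TempURL convention; the mask format itself is an assumption, not an existing Ironic or oslo facility:

```python
import hashlib
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

def mask_temp_url(url):
    """Return the URL with its temp_url_sig replaced by a short digest."""
    parts = urlsplit(url)
    query = []
    for key, value in parse_qsl(parts.query, keep_blank_values=True):
        if key == "temp_url_sig":
            # Deterministic 8-char digest: unique per token for log
            # correlation, but useless for replaying the request.
            value = "sha256:" + hashlib.sha256(value.encode()).hexdigest()[:8]
        query.append((key, value))
    return urlunsplit(parts._replace(query=urlencode(query)))
```

Because the digest is deterministic, two log lines carrying the same token still match each other, while the logged value cannot be used to fetch the object.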

Just $0.02 on this.
--Morgan


[openstack-dev] [neutron] [networking-sfc] Service Chain project IRC meeting Topic for 11/19/2015

2015-11-18 Thread Cathy Zhang
Hi everyone,

I have added some topics to the following link for our project meeting 
discussion tomorrow. Feel free to add topics you would like to discuss to this 
link.

https://wiki.openstack.org/wiki/Meetings/ServiceFunctionChainingMeeting

Note that due to the daylight saving time change, the meeting time is now 9am Pacific time.

Thanks,
Cathy


Re: [openstack-dev] [Fuel] Master node upgrade

2015-11-18 Thread Oleg Gelbukh
We are going to address the problem using the following approach:

1. Back up settings from the Fuel 7.0 Master node, including configuration
files for the bootstrap script, data from the state database, security keys
and certs, etc.
2. Restore the settings on top of a freshly installed Fuel 8.0 Master.
3. Upload the database dump into the DB of the Fuel 8.0 Master.
4. Perform the required actions to apply migrations to the Fuel state DB.

In this case, rollback scenario is either revert to Fuel 7.0 Master (if it
wasn't reinstalled), or apply the same procedure to fresh Fuel 7.0 Master
installation.

This scenario introduces a different upgrade workflow than the one the
upgrade tarball used. We will update the user documentation with the new
workflow. Operators will have to consider changes to their processes in
accordance with the new workflow.

I will update this list once we have some progress on this task. You can
also track it in the following blueprint:

https://blueprints.launchpad.net/fuel/+spec/upgrade-master-node-centos7

--
Best regards,
Oleg Gelbukh

On Tue, Nov 10, 2015 at 8:52 AM, Vladimir Kuklin 
wrote:

> Evgeniy
>
> I am not sure you addressed me, but anyway - yes, we will have a
> situation with old containers on a new host node. This will be identical to
> the old host node from a database migration point of view.
>
> On Tue, Nov 10, 2015 at 7:38 PM, Evgeniy L  wrote:
>
>> Hi Vladimir,
>>
>> Just to make sure that we are on the same page. We'll have to use upgrade
>> script anyway, since you will need to run database migration and register
>> new releases.
>>
>> Thanks,
>>
>> On Monday, 9 November 2015, Vladimir Kozhukalov 
>> wrote:
>>
>>> Looks like most people think that the backup/re-install approach is
>>> more viable. So, we certainly need to invent a completely new upgrade flow,
>>> and thus my suggestion is to disable building/testing the upgrade tarball
>>> right now, because it no longer makes sense anyway.
>>>
>>>
>>> Vladimir Kozhukalov
>>>
>>> On Fri, Nov 6, 2015 at 8:21 PM, Vladimir Kuklin 
>>> wrote:
>>>
 Just my 2 cents here - let's do docker backup and roll it up onto brand
 new Fuel 8 node.

 On Fri, Nov 6, 2015 at 7:54 PM, Oleg Gelbukh 
 wrote:

> Matt,
>
> You are talking about this part of Operations guide [1], or you mean
> something else?
>
> If yes, then we still need to extract data from the backup containers. I'd
> prefer a backup of the DB in a simple plain-text file, since our DBs are
> not that big.
>
> [1]
> https://docs.mirantis.com/openstack/fuel/fuel-7.0/operations.html#howto-backup-and-restore-fuel-master
>
> --
> Best regards,
> Oleg Gelbukh
>
> On Fri, Nov 6, 2015 at 6:03 PM, Matthew Mosesohn <
> mmoses...@mirantis.com> wrote:
>
>> Oleg,
>>
>> All the volatile information, including a DB dump, is contained in
>> the small Fuel Master backup. There should be no information lost unless
>> there was manual customization done inside the containers (such as puppet
>> manifest changes). There shouldn't be a need to back up the entire
>> containers.
>>
>> The information we would lose would include the IP configuration of
>> interfaces besides the one used for the Fuel PXE network and any custom
>> configuration done on the Fuel Master.
>>
>> I want #1 to work smoothly, but #2 should also be a safe route.
>>
>> On Fri, Nov 6, 2015 at 5:39 PM, Oleg Gelbukh 
>> wrote:
>>
>>> Evgeniy,
>>>
>>> On Fri, Nov 6, 2015 at 3:27 PM, Evgeniy L  wrote:
>>>
 Also we should decide when to run containers
 upgrade + host upgrade? Before or after new CentOS is installed?
 Probably
 it should be done before we run backup, in order to get the latest
 scripts for
 backup/restore actions.

>>>
>>> We're working to determine if we need to backup/upgrade containers
>>> at all. My expectation is that we should be OK with just backup of DB, 
>>> IP
>>> addresses settings from astute.yaml for the master node, and credentials
>>> from configuration files for the services.
>>>
>>> --
>>> Best regards,
>>> Oleg Gelbukh
>>>
>>>

 Thanks,

 On Fri, Nov 6, 2015 at 1:29 PM, Vladimir Kozhukalov <
 vkozhuka...@mirantis.com> wrote:

> Dear colleagues,
>
> At the moment I'm working on deprecating Fuel upgrade tarball.
> Currently, it includes the following:
>
> * RPM repository (upstream + mos)
> * DEB repository (mos)
> * openstack.yaml
> * version.yaml
> * upgrade script itself (+ virtualenv)
>
> Apart from upgrading docker containers this upgrade script makes
> copies of the RPM/DEB 

Re: [openstack-dev] Future of telco working group and weekly meeting reminder

2015-11-18 Thread Barrett, Carol L
Steve - Thanks for the update. Look forward to seeing the User Stories in the 
Repo and coming up to speed on them. If you have priorities around them, that 
would be good to know too as we look for commonalities between these stories, the
existing user stories and new ones under development.
Carol

-Original Message-
From: Steve Gordon [mailto:sgor...@redhat.com] 
Sent: Wednesday, November 18, 2015 5:11 AM
To: Shamail
Cc: OpenStack Development Mailing List (not for usage questions); product-wg; 
openstack-operators
Subject: Re: [Product] [nfv][telco][product] Future of telco working group and 
weekly meeting reminder

- Original Message -
> From: "Shamail" 
> To: "Steve Gordon" 
> 
> Hi Steve,
> 
> > On Nov 18, 2015, at 6:39 AM, Steve Gordon  wrote:
> > 
> > Hi all,
> > 
> > Back in Vancouver [1] we began discussing the growing overlap 
> > between OPNFV requirements projects and the current mission of the telco 
> > working group.
> > During this period the product working group, also in part focused 
> > on recording and prioritizing user stories, has also been hitting its 
> > straps.
> > > As we have recently lost a couple of core members of the telco 
> > > working group, particularly on the technical side, due to role changes 
> > > etc., I think it is time to roll its activities into these efforts.
> > 
> > With that in mind I would like to propose:
> > 
> > * Submitting the existing telcowg-usecases to the 
> > openstack-user-stories repository
> > * Engaging within the product working group (assuming they will have 
> > us ;)) as owners of these user stories
> This is a very similar model to what the enterprise WG did recently as well.
> Please let me know if I can do anything to help with the transition of 
> the user stories.
> > 
> > There is of course still a need to actually *implement* the 
> > requirements exposed by these user stories but I am hopeful that 
> > together we can work through a common process for this rather than 
> > continuing to attack it separately. I would personally still like to 
> > see greater direct engagement from service providers, but it seems 
> > like OPNFV and/or the OpenStack User Committee [2] itself might be the 
> > right place for this going forward.
> > 
> > I'd like to discuss this proposal further in the weekly meeting. 
> > This week's Telco Working Group meeting is scheduled for Wednesday 
> > the 18th at
> > 1400 UTC. As always the agenda etherpad is here:
> > 
> >   https://etherpad.openstack.org/p/nfv-meeting-agenda
> Would it make sense for someone else (besides yourself :)) from the 
> Product WG to join this session for Q&A as well?

The more the merrier as they say... ;)

-Steve

___
Product-wg mailing list
product...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/product-wg



Re: [openstack-dev] [ironic][tempest]Tempest test plugin and microversions

2015-11-18 Thread GHANSHYAM MANN
Hi,

Yes, IMO we could also merge the Ironic patch into Tempest first, but I do
not have a strong opinion. The Ironic patch can be updated on top of the
microversion testing framework base patch [1], and we can test it while it
resides in Tempest, which avoids updates in the Ironic repo (if we moved
the Ironic tests first) whenever anything needs to be updated.

As a sample, we are going to implement Nova 2.2 microversion tests [2]. The
Ironic tests can be updated along the same lines.

[1] https://review.openstack.org/#/c/242296/
[2] https://review.openstack.org/#/c/244996/

Thanks
Ghanshyam

On Thu, Nov 19, 2015 at 8:48 AM, Ken'ichi Ohmichi  wrote:
> Hi,
>
> Nice description for our situation.
> Yes, qa-spec of microversions testing is already approved.
> But as you said, we need time for the implementation.
>
> Your migration patch for ironic doesn't seem hard to merge, I feel. So
> it is also a nice option to merge the migration patch first.
> Technically, the microversions testing framework will be merged into
> tempest-lib at an early stage, and we can share it across many projects.
>
> Thanks
>
>
On Wed, Nov 18, 2015 at 6:54 PM Yuiko Takada :
>>
>> Hi Ironic folks,
>>
>> As we discussed in Design Summit, we will move forward with tempest test
>> tasks.
>> I've posted a patch for tempest test plugin interface [1]
>> (Now it fails because of flake8-ignore-difference, anyway).
>>
>> Then, I'd like to discuss our test migration procedure.
>> As you know, we are also working for Tempest microversions, so
>> our tests in Tempest need to be fixed for working with microversions.
>> Its spec has been approved and now the microversion testing framework has
>> been posted [2].
>> IMO, tests in Tempest should be fixed before moving into Ironic tree,
>> but maybe [2] will take long time to be merged.
>> What do you think?
>>
>> [1] https://review.openstack.org/#/c/246161/
>> [2] https://review.openstack.org/#/c/242346/
>>
>>
>> Best Regards,
>> Yuiko Takada
>
>
>



-- 
Regards
Ghanshyam Mann
+81-8084200646



Re: [openstack-dev] [releases] Release countdown for week R-19, Nov 23-27

2015-11-18 Thread Matt Riedemann



On 11/18/2015 4:09 PM, Doug Hellmann wrote:

Focus
-

We are currently working towards the Mitaka 1 milestone scheduled
for Dec 1-3. Teams should be focusing on wrapping up incomplete
work left over from the end of the Liberty cycle, finalizing and
announcing plans from the summit, and completing specs and blueprints.

Keep in mind that the US Thanksgiving holiday is Nov 26-27, and
that may affect review times for work being completed before the
milestone.

Release Actions
---

We will be using the openstack/releases repository to manage the
Mitaka 1 milestone tags. I will send out a separate email with more
detailed instructions the week before the milestone.

We will be using reno instead of launchpad for tracking completed
work, so please make sure any release notes for work already done
this cycle are committed to your master branches before proposing
the milestone tag.

All deliverables should have reno configured before Mitaka 1. See
http://lists.openstack.org/pipermail/openstack-dev/2015-November/078301.html
for details, and follow up on that thread with questions. We're
making good progress on this, but have a ways to go.

Review stable/liberty branches for patches that have landed since
the last release and determine if your deliverables need new tags.


What do you mean about new tags for stable/liberty? Release notes or 
actual git tags? I'm not sure how stable/liberty is related to the 
mitaka-1 milestone.




Important Dates
---

US Holiday: Nov 26-27 (the end of week R-19)

Mitaka 1: Dec 1-3 (1 week later)

Mitaka release schedule: https://wiki.openstack.org/wiki/Mitaka_Release_Schedule




--

Thanks,

Matt Riedemann




[openstack-dev] [neutron][taas] proposal: dedicated tunnel for carrying mirrored traffic

2015-11-18 Thread Soichi Shigeta


 Hi,

As we decided in the last weekly meeting,
  I'd like to use this mailing list to discuss
  a proposal about creating a dedicated tunnel for
  carrying mirrored traffic between hosts.

  link: 
https://wiki.openstack.org/w/images/7/78/TrafficIsolation_20151116-01.pdf


  Best Regards,
  Soichi Shigeta





Re: [openstack-dev] [Horizon] Redundant checks in form values

2015-11-18 Thread Suraj Deshmukh
On Thu, Nov 19, 2015 at 3:10 AM, Akihiro Motoki  wrote:
> In the current AddRule form, validate_port_range is used as a validator,
> so a value from 1 to 65535 is accepted.
> I think this is the reason that _clean_rule_icmp checks the exact value range.
> You can improve it as you think.
>
> bug 1511748 is a regression of validate_port_range change in
> https://review.openstack.org/#/c/116508/.
> Previously it accepted -1 and we could use it both for TCP/UDP and ICMP,
> but after that change it can no longer be applied to ICMP.
>
> IMO we need to use separate validators for TCP/UDP and ICMP.
>
> Akihiro
>

Yes, we need separate validators for ICMP's `icmp_type` and
`icmp_code`; I figured that out earlier and hence wrote validator code
in `oslo_utils.netutils` [1] and [2].

So `icmp_type` can be any integer between 0 to 255 and `icmp_code` can
be either None or integer between 0 to 255, so code at [2] does take
care of that. More information on ICMP can be found at [3].

So the `_clean_rule_icmp` checks related to `icmp_type` and `icmp_code`
can go away, since during validation Django itself sets the data to
None if `ValidationError` is raised by the validator.
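A self-contained sketch of what such separate validators could look like, following the ranges described above (`icmp_type` an integer 0-255, `icmp_code` either None or an integer 0-255). The `ValidationError` class here stands in for Django's `django.core.exceptions.ValidationError` so the example runs on its own; function names are illustrative, not the actual `oslo_utils.netutils` API:

```python
class ValidationError(Exception):
    """Stand-in for django.core.exceptions.ValidationError."""


def validate_icmp_type(value):
    # ICMP type is a single octet: any integer in [0, 255].
    if not isinstance(value, int) or not 0 <= value <= 255:
        raise ValidationError("ICMP type must be an integer in [0, 255]")


def validate_icmp_code(value):
    # ICMP code is optional; when present it is also a single octet.
    if value is None:
        return
    if not isinstance(value, int) or not 0 <= value <= 255:
        raise ValidationError("ICMP code must be None or an integer in [0, 255]")
```

With validators shaped like these, a value such as 65535 (valid for TCP/UDP ports) is rejected for ICMP, and the form layer no longer needs its own range checks.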


[1] https://review.openstack.org/#/c/240661/
[2] 
https://github.com/openstack/oslo.utils/blob/master/oslo_utils/netutils.py#L207
[3] http://www.nthelp.com/icmp.html



Re: [openstack-dev] [stable][neutron] How we handle Kilo backports

2015-11-18 Thread Kevin Benton
+1. Anything that lands in the high category is usually something that will
have a big operational impact.

On Wed, Nov 18, 2015 at 8:44 AM, Ihar Hrachyshka 
wrote:

> Hi all,
>
> as per [1] I infer that all projects under stable-maint-core team
> supervision must abide by the stable policy [2], which limits the types of
> backports for N-2 branches (now it’s stable/kilo) to "Only critical
> bugfixes and security patches”. With that, I remind all stable core members
> about the rule.
>
> Since we are limited to ‘critical bugfixes’ only, and since there is no
> clear definition of what ‘critical’ means, I guess we should define it for
> ourselves.
>
> In Neutron world, we usually use Critical importance for those bugs that
> break gate. High is used for those bugs that have high impact production
> wise. With that in mind, I suggest we define ‘critical’ bugfixes as
> Critical + High in LP. Comments on that?
>
> (My understanding is that we can also advocate for the change in the
> global policy if we think the ‘critical only’ rule should be relaxed, but
> till then it makes sense to stick to what policy says.)
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-November/078649.html
> [2] http://docs.openstack.org/project-team-guide/stable-branches.html
>
> Ihar
>
>



-- 
Kevin Benton


Re: [openstack-dev] [stable][neutron] upper constraints for stable/liberty

2015-11-18 Thread Sachi King
On Wed, 18 Nov 2015 02:07:45 PM Ihar Hrachyshka wrote:
> Robert Collins  wrote:
> > On 14 November 2015 at 02:53, Ihar Hrachyshka  wrote:
> >> I was recently looking into how stable/liberty branches are set for
> >> neutron
> >> in terms of requirements caps, and I realized that we have neither
> >> version caps nor upper constraints applied to unit test jobs in
> >> stable/liberty gate. We have -constraints targets defined in tox.ini, but
> >> they are not running in gate.
> >> 
> >> I believe this situation leaves us open to breakages by any random
> >> library
> >> releases out there. Am I right? If so, I would like to close the breakage
> >> vector for projects I care (all neutron stadium).
> >> 
> >> I suggest we do the following:
> >> 
> >> - unless there is some specific reason for that, stop running
> >> unconstrained
> >> jobs in neutron/master;
> > 
> > Sachi King is working up a bit of data mining to confirm that the
> > constraints jobs are only failing when unconstrained jobs fail - then
> > we're going to propose the change to project-config to switch around
> > which vote.
> 
> From what I saw in neutron, it never fails unless there is an actual
> constraint that wasn't bumped.
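For context, the -constraints tox targets discussed in this thread follow a common pattern: a parallel testenv whose install_command pins every dependency to the versions in the global upper-constraints file. A hypothetical fragment — env names and the exact constraints URL vary per project:

```ini
# Parallel unit-test env that installs requirements through the
# upper-constraints file instead of unconstrained pip resolution.
[testenv:py27-constraints]
install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
commands = python setup.py testr --slowest --testr-args='{posargs}'
```

Flipping such a job to voting only changes which gate result blocks a change; the tox target itself stays the same.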

Scraping before flipping the switch was just to be really sure we were not 
going to break anything.  After scraping master and 
stable/liberty everything does indeed look good.  The script found some 
issues, but they were all caused by a bug in a tox release. 

If anyone is interested in pulling the stats out to verify I have uploaded my 
scraper to [0].  It's rough, but it got me the data.

> >> - enable voting for constraints jobs in neutron/liberty; once proved to
> >> work
> >> fine, stop running unconstrained jobs in neutron/liberty;
> > 
> > I expect the same query can answer this as well.
> > 
> >> - for neutron-*aas, introduce constraints targets in tox.ini, enable
> >> jobs in
> >> gate; make them vote there/remove old jobs;
> >> - after that, backport constraints targets to stable/liberty; make them
> >> vote
> >> there/remove old jobs.
> > 
> > We're going to advocate widespread adoption once the neutron master
> > ones are voting
> > 
> >> Does the plan make sense?
> > 
> > Totally :) As non-Neutron-contributors we've just been conservative in
> > our recommendations; if Neutron wants to move a little faster by
> > taking on a little risk, thats *totally cool* IMO.
> 
> I believe there is general understanding that it’s the way to go, and we
> were already happy to be guinea pigs for initial data mining, so I don’t
> expect problems getting the core team onboard.
> 
> My question was more about what we do with stable/liberty branches. Is it
> part of the plan that we backport the constraint jobs there?

Yes, the plan is to switch the -constraints jobs to voting in liberty as well.
We've got -constraints operating in a non-voting fashion there just as in
master, and it looks good to flip over as well.

I've pushed [1] up for review to flip neutron's -constraints to voting on both
master and liberty.  I could definitely use some eyes on it, as well as a voice
from the neutron team to signal that it has the go-ahead.

[0] https://github.com/nakato/checkconstraints
[1] https://review.openstack.org/#/c/247306/

Cheers,
Sachi





  1   2   >