Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-10-01 Thread Bence Romsics
Hi All,

I'm quite late to the discussion, because I'm on vacation and I missed
the beginning of this thread, but let me share a few thoughts.

On Fri, Sep 28, 2018 at 6:13 PM Jay Pipes  wrote:
> * Does the provider belong to physical network "corpnet" and also
> support creation of virtual NICs of type either "DIRECT" or "NORMAL"?

I'd like to split this question into two, because I think modeling
vnic_types as traits and modeling physnets as traits are different
matters. I'll start with the simpler one: vnic_types.

I may have missed some of the arguments in this very long thread, but
I honestly do not see the problem with vnic_type traits. These are
true capabilities of the backend (though not binary ones). When it
comes to DIRECT and NORMAL, the difference is basically whether the
backend can do SR-IOV or not.

On the other hand, I have my reservations about physnet traits. I have
an item on my todo list to look into Placement aggregates and explore
whether those represent a physnet better. Before committing to using
aggregates for physnets I know I should fully learn the aggregates
API though. Let me also mention one concern that could lead to a
usability problem today: aggregates seem to have no names. I think
they should have them; the operator is helpless without names.
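For what it's worth, the Placement API can already filter resource
providers by aggregate membership without encoding anything into a trait
name. A rough sketch of the kind of query I have in mind (the aggregate
UUID standing in for a physnet is of course made up):

```python
import uuid
from urllib.parse import urlencode

# Hypothetical aggregate UUID standing in for physnet "corpnet".
corpnet_agg = str(uuid.uuid4())

# Placement filters resource providers by aggregate membership via the
# member_of query parameter ("in:" matches any of the listed aggregates).
query = urlencode({"member_of": "in:" + corpnet_agg})
print("/resource_providers?" + query)
```

Note that without names the operator still has to keep a mapping from
aggregate UUIDs to physnets somewhere out of band.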

On Fri, Sep 28, 2018 at 11:51 PM Jay Pipes  wrote:
> That's correct, because you're encoding >1 piece of information into the
> single string (the fact that it's a temperature *and* the value of that
> temperature are the two pieces of information encoded into the single
> string).
>
> Now that there's multiple pieces of information encoded in the string
> the reader of the trait string needs to know how to decode those bits of
> information, which is exactly what we're trying to avoid doing [...].

Technically, Placement traits today can be used as a covert
communication channel, and doing that is tempting. One component
encodes information into a trait name; another reads it (i.e. the trait
on the allocated RP) and decodes it. Maybe that trait wasn't
influencing placement at all. This is the metadata use case (if it is
a use case at all). I think the most problematic case is when we
unknowingly mix placement-influencing information and effect-free
metadata into a single blob (a trait name). One good way to avoid this
is to fully and actively discourage the use of traits as a covert
communication channel. I can totally support that.
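To make the distinction concrete, here is a tiny illustrative sketch (not
project code; the trait names are made up) of the difference between a
trait used as an opaque capability and a trait abused as a data channel:

```python
# Anti-pattern: two pieces of information packed into one trait name.
# Every consumer must know the encoding scheme to recover the value.
packed = "CUSTOM_PHYSNET_CORPNET"
physnet = packed[len("CUSTOM_PHYSNET_"):].lower()
assert physnet == "corpnet"

# Capability-style use: the trait is an opaque boolean fact about the
# provider; consumers only test membership and never parse the name.
provider_traits = {"CUSTOM_VNIC_DIRECT", "CUSTOM_VNIC_NORMAL"}
assert "CUSTOM_VNIC_DIRECT" in provider_traits
```

The first form silently couples every consumer to a string-encoding
convention; the second keeps trait names opaque end to end.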

I want to mention that in the work-in-progress implementation of
minimum guaranteed bandwidth we considered and then consciously avoided
using this covert communication channel. Neutron agents and servers
use their usual communication channels to share resource information
between them. None of them ever decodes a trait name. All we ever ask
of them after allocation is this: are you responsible for this RP
UUID? (For example, see https://review.openstack.org/574783.)
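As a sketch of what I mean (the helper name and the UUIDs are purely
illustrative, not the actual Neutron code):

```python
import uuid

# RP UUIDs this hypothetical agent registered for its own devices;
# uuid5 is used here only to get deterministic example values.
my_rp_uuids = {
    uuid.uuid5(uuid.NAMESPACE_DNS, "host1:br-physnet0"),
    uuid.uuid5(uuid.NAMESPACE_DNS, "host1:br-physnet1"),
}

def responsible_for(rp_uuid):
    # A pure membership test: trait names stay opaque end to end.
    return rp_uuid in my_rp_uuids

assert responsible_for(uuid.uuid5(uuid.NAMESPACE_DNS, "host1:br-physnet0"))
assert not responsible_for(uuid.uuid4())
```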

Cheers,
Bence

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [I18n] No Office Hour this week

2018-10-01 Thread Frank Kloeker

Hello,

due to the national holiday in Germany tomorrow and the long weekend 
after that, there will be no I18n Office Hour this week. The next session 
will be held on 2018/10/11 at 13:00 UTC [1].


kind regards

Frank

[1] https://wiki.openstack.org/wiki/Meetings/I18nTeamMeeting



[openstack-dev] [vitrage] I have some problems with Prometheus alarms in vitrage.

2018-10-01 Thread Won
I have some problems with Prometheus alarms in Vitrage.
I receive the list of alarms from the Prometheus Alertmanager fine, but an
alarm does not disappear when the problem behind it is resolved. Once an
alarm has appeared in both the alarm list and the entity graph, it never
disappears from Vitrage. An alarm sent by Zabbix disappears when it is
resolved, so I wonder how to clear a Prometheus alarm from Vitrage and how
to have the alarms update automatically, like they do with Zabbix.
Thank you.


Re: [openstack-dev] [Horizon] Horizon tutorial didn`t work

2018-10-01 Thread Jea-Min Lim
Thanks for the reply.

If you need any detailed information, let me know.

Regards,

2018년 10월 1일 (월) 오후 6:53, Ivan Kolodyazhny 님이 작성:

> Hi  Jea-Min,
>
> Thank you for your report. I'll check the manual and fix it asap.
>
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
>
>
> On Mon, Oct 1, 2018 at 9:38 AM Jea-Min Lim  wrote:
>
>> Hello everyone,
>>
>> I'm following the tutorial on Building a Dashboard using Horizon.
>> (link:
>> https://docs.openstack.org/horizon/latest/contributor/tutorials/dashboard.html#tutorials-dashboard
>> )
>>
>> However, the provided custom management command doesn't create the
>> boilerplate code.
>>
>> I typed tox -e manage -- startdash mydashboard --target
>> openstack_dashboard/dashboards/mydashboard
>>
>> and the attached screenshot file is the execution result.
>>
>> Are there any recommendations to solve this problem?
>>
>> Regards.
>>
>> [image: result_jmlim.PNG]


Re: [openstack-dev] [kolla][octavia] Containerize the amphora-agent

2018-10-01 Thread Hongbin Lu
On Sun, Sep 30, 2018 at 8:47 PM Adam Harwell  wrote:

> I was coming to the same conclusion for a completely different goal --
> baking lighter weight VMs (and eliminating a number of compatibility
> issues) by putting exactly what we need in containers and making the base
> OS irrelevant. So, I am interested in helping to do this in a way that will
> work well for both goals.
> My thought is that by containerizing the agent AND using (existing?)
> containerized haproxy distributions, we can better standardize things
> between different amphora base OSes at the same time as setting up for full
> containerization.
> We should discuss further on IRC next week maybe, if we can find a good
> time.
>

Sure. Feel free to ping me if you see me online. My IRC nick is hongbin and
you will find me in #openstack-zun or #openstack-dev.


>
>--Adam
>
> On Sun, Sep 30, 2018, 11:56 Hongbin Lu  wrote:
>
>> Hi all,
>>
>> I am working on the Zun integration for Octavia. I did some preliminary
>> research and it seems what we need to do is to containerize the amphora
>> agent that is currently packaged and shipped in a VM image. I wonder if
>> anyone already has such a container image that I can leverage. If not, I
>> will create one.
>>
>> Best regards,
>> Hongbin


Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-10-01 Thread Julia Kreger
On Mon, Oct 1, 2018 at 3:37 PM Jay Pipes  wrote:

> On 10/01/2018 06:04 PM, Julia Kreger wrote:
> > On Mon, Oct 1, 2018 at 2:41 PM Eric Fried  wrote:
> >
> >
> >  > So say the user requests a node that supports UEFI because their
> > image
> >  > needs UEFI. Which workflow would you want here?
> >  >
> >  > 1) The operator (or ironic?) has already configured the node to
> > boot in
> >  > UEFI mode. Only pre-configured nodes advertise the "supports
> > UEFI" trait.
> >  >
> >  > 2) Any node that supports UEFI mode advertises the trait. Ironic
> > ensures
> >  > that UEFI mode is enabled before provisioning the machine.
> >  >
> >  > I imagine doing #2 by passing the traits which were specifically
> >  > requested by the user, from Nova to Ironic, so that Ironic can do
> the
> >  > right thing for the user.
> >  >
> >  > Your proposal suggests that the user request the "supports UEFI"
> > trait,
> >  > and *also* pass some glance UUID which the user understands will
> make
> >  > sure the node actually boots in UEFI mode. Something like:
> >  >
> >  > openstack server create --flavor METAL_12CPU_128G --trait SUPPORTS_UEFI
> >  > --config-data $TURN_ON_UEFI_UUID
> >  >
> >  > Note that I pass --trait because I hope that will one day be
> > supported
> >  > and we can slow down the flavor explosion.
> >
> > IMO --trait would be making things worse (but see below). I think
> UEFI
> > with Jay's model would be more like:
> >
> >   openstack server create --flavor METAL_12CPU_128G --config-data $UEFI
> >
> > where the UEFI profile would be pretty trivial, consisting of
> > placement.traits.required = ["BOOT_MODE_UEFI"] and object.boot_mode =
> > "uefi".
> >
> > I agree that this seems kind of heavy, and that it would be nice to
> be
> > able to say "boot mode is UEFI" just once. OTOH I get Jay's point
> that
> > we need to separate the placement decision from the instance
> > configuration.
> >
> > That said, what if it was:
> >
> >   openstack config-profile create --name BOOT_MODE_UEFI --json -
> >   {
> >    "type": "boot_mode_scheme",
> >    "version": 123,
> >    "object": {
> >        "boot_mode": "uefi"
> >    },
> >    "placement": {
> >     "traits": {
> >      "required": [
> >       "BOOT_MODE_UEFI"
> >      ]
> >     }
> >    }
> >   }
> >   ^D
> >
> > And now you could in fact say
> >
> >   openstack server create --flavor foo --config-profile
> BOOT_MODE_UEFI
> >
> > using the profile name, which happens to be the same as the trait
> name
> > because you made it so. Does that satisfy the yen for saying it
> once? (I
> > mean, despite the fact that you first had to say it three times to
> get
> > it set up.)
> >
> > 
> >
> > I do want to zoom out a bit and point out that we're talking about
> > implementing a new framework of substantial size and impact when the
> > original proposal - using the trait for both - would just work out of
> > the box today with no changes in either API. Is it really worth it?
> >
> >
> > +1000. Reading both of these threads, it feels like we're basically
> > trying to make something perfect. I think that is a fine goal, except it
> > is unrealistic because the enemy of good is perfection.
> >
> > 
> >
> > By the way, with Jim's --trait suggestion, this:
> >
> >  > ...dozens of flavors that look like this:
> >  > - 12CPU_128G_RAID10_DRIVE_LAYOUT_X
> >  > - 12CPU_128G_RAID5_DRIVE_LAYOUT_X
> >  > - 12CPU_128G_RAID01_DRIVE_LAYOUT_X
> >  > - 12CPU_128G_RAID10_DRIVE_LAYOUT_Y
> >  > - 12CPU_128G_RAID5_DRIVE_LAYOUT_Y
> >  > - 12CPU_128G_RAID01_DRIVE_LAYOUT_Y
> >
> > ...could actually become:
> >
> >   openstack server create --flavor 12CPU_128G --trait $WHICH_RAID --trait $WHICH_LAYOUT
> >
> > No flavor explosion.
> >
> >
> > ++ I believe this was where this discussion kind of ended up in..
> ?Dublin?
> >
> > The desire and discussion that led us into complex configuration
> > templates and profiles being submitted were for highly complex scenarios
> > where users wanted to assert detailed raid configurations to disk.
> > Naturally, there are many issues there. The ability to provide such
> > detail would be awesome for those 10% of operators that need such
> > functionality. Of course, if that is the only path forward, then we
> > delay the 90% from getting the minimum viable feature they need.
> >
> >
> > (Maybe if we called it something other than --trait, like maybe
> > --config-option, it would let us pretend we're not really
> overloading a
> > trait to do config - it's just a coincidence that the config option
> has
> > the same name as the trait it causes to be required.)
> >
> >
> > I feel like it might be confusing, but totally +1 to matching required
> > trait name being a thing. That way scheduling is completely decoupled
> > and if everything was correct then the request should already be
> > scheduled properly.

Re: [openstack-dev] Berlin Community Contributor Awards

2018-10-01 Thread Kendall Nelson
Hello :)

I wanted to bring this to the top of people's inboxes as we have three
weeks left to submit community members[1].

I can think of a dozen people right now who deserve an award, and I am sure
you all could do the same. It only takes a few minutes and it's an easy way
to make sure they get the recognition they deserve. Show your appreciation
and nominate one person.

-Kendall (diablo_rojo)

[1] https://openstackfoundation.formstack.com/forms/berlin_stein_ccas

On Fri, Aug 24, 2018 at 11:15 AM Kendall Nelson 
wrote:

> Hello Everyone!
>
> As we approach the Summit (still a ways away thankfully), I thought I
> would kick off the Community Contributor Award nominations early this
> round.
>
> For those of you that already know what they are, here is the form[1].
>
> For those of you that have never heard of the CCA, I'll briefly explain
> what they are :) We all know people in the community that do the dirty
> jobs, we all know people that will bend over backwards trying to help
> someone new, we all know someone that is a savant in some area of the code
> we could never hope to understand. These people rarely get the thanks they
> deserve and the Community Contributor Awards are a chance to make sure they
> know that they are appreciated for the amazing work they do and skills they
> have.
>
> So go forth and nominate these amazing community members[1]! Nominations
> will close on October 21st at 7:00 UTC and winners will be announced at the
> OpenStack Summit in Berlin.
>
> -Kendall (diablo_rojo)
>
> [1] https://openstackfoundation.formstack.com/forms/berlin_stein_ccas
>


Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-10-01 Thread Jay Pipes

On 10/01/2018 06:04 PM, Julia Kreger wrote:

On Mon, Oct 1, 2018 at 2:41 PM Eric Fried  wrote:


 > So say the user requests a node that supports UEFI because their
image
 > needs UEFI. Which workflow would you want here?
 >
 > 1) The operator (or ironic?) has already configured the node to
boot in
 > UEFI mode. Only pre-configured nodes advertise the "supports
UEFI" trait.
 >
 > 2) Any node that supports UEFI mode advertises the trait. Ironic
ensures
 > that UEFI mode is enabled before provisioning the machine.
 >
 > I imagine doing #2 by passing the traits which were specifically
 > requested by the user, from Nova to Ironic, so that Ironic can do the
 > right thing for the user.
 >
 > Your proposal suggests that the user request the "supports UEFI"
trait,
 > and *also* pass some glance UUID which the user understands will make
 > sure the node actually boots in UEFI mode. Something like:
 >
 > openstack server create --flavor METAL_12CPU_128G --trait
SUPPORTS_UEFI
 > --config-data $TURN_ON_UEFI_UUID
 >
 > Note that I pass --trait because I hope that will one day be
supported
 > and we can slow down the flavor explosion.

IMO --trait would be making things worse (but see below). I think UEFI
with Jay's model would be more like:

   openstack server create --flavor METAL_12CPU_128G --config-data $UEFI

where the UEFI profile would be pretty trivial, consisting of
placement.traits.required = ["BOOT_MODE_UEFI"] and object.boot_mode =
"uefi".

I agree that this seems kind of heavy, and that it would be nice to be
able to say "boot mode is UEFI" just once. OTOH I get Jay's point that
we need to separate the placement decision from the instance
configuration.

That said, what if it was:

  openstack config-profile create --name BOOT_MODE_UEFI --json -
  {
   "type": "boot_mode_scheme",
   "version": 123,
   "object": {
       "boot_mode": "uefi"
   },
   "placement": {
    "traits": {
     "required": [
      "BOOT_MODE_UEFI"
     ]
    }
   }
  }
  ^D

And now you could in fact say

  openstack server create --flavor foo --config-profile BOOT_MODE_UEFI

using the profile name, which happens to be the same as the trait name
because you made it so. Does that satisfy the yen for saying it once? (I
mean, despite the fact that you first had to say it three times to get
it set up.)



I do want to zoom out a bit and point out that we're talking about
implementing a new framework of substantial size and impact when the
original proposal - using the trait for both - would just work out of
the box today with no changes in either API. Is it really worth it?


+1000. Reading both of these threads, it feels like we're basically 
trying to make something perfect. I think that is a fine goal, except it 
is unrealistic because the enemy of good is perfection.




By the way, with Jim's --trait suggestion, this:

 > ...dozens of flavors that look like this:
 > - 12CPU_128G_RAID10_DRIVE_LAYOUT_X
 > - 12CPU_128G_RAID5_DRIVE_LAYOUT_X
 > - 12CPU_128G_RAID01_DRIVE_LAYOUT_X
 > - 12CPU_128G_RAID10_DRIVE_LAYOUT_Y
 > - 12CPU_128G_RAID5_DRIVE_LAYOUT_Y
 > - 12CPU_128G_RAID01_DRIVE_LAYOUT_Y

...could actually become:

  openstack server create --flavor 12CPU_128G --trait $WHICH_RAID --trait $WHICH_LAYOUT

No flavor explosion.


++ I believe this was where this discussion kind of ended up in.. ?Dublin?

The desire and discussion that led us into complex configuration 
templates and profiles being submitted were for highly complex scenarios 
where users wanted to assert detailed raid configurations to disk. 
Naturally, there are many issues there. The ability to provide such 
detail would be awesome for those 10% of operators that need such 
functionality. Of course, if that is the only path forward, then we 
delay the 90% from getting the minimum viable feature they need.



(Maybe if we called it something other than --trait, like maybe
--config-option, it would let us pretend we're not really overloading a
trait to do config - it's just a coincidence that the config option has
the same name as the trait it causes to be required.)


I feel like it might be confusing, but totally +1 to matching required 
trait name being a thing. That way scheduling is completely decoupled 
and if everything was correct then the request should already be 
scheduled properly.


I guess I'll just drop the idea of doing this properly then. It's true 
that the placement traits concept can be hacked up and the virt driver 
can just pass a list of trait strings to the Ironic API and that's the 
most expedient way to get what the 90% of people apparently want. It's 
also true that it will add a bunch of 

Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-10-01 Thread Julia Kreger
On Mon, Oct 1, 2018 at 2:41 PM Eric Fried  wrote:

>
> > So say the user requests a node that supports UEFI because their image
> > needs UEFI. Which workflow would you want here?
> >
> > 1) The operator (or ironic?) has already configured the node to boot in
> > UEFI mode. Only pre-configured nodes advertise the "supports UEFI" trait.
> >
> > 2) Any node that supports UEFI mode advertises the trait. Ironic ensures
> > that UEFI mode is enabled before provisioning the machine.
> >
> > I imagine doing #2 by passing the traits which were specifically
> > requested by the user, from Nova to Ironic, so that Ironic can do the
> > right thing for the user.
> >
> > Your proposal suggests that the user request the "supports UEFI" trait,
> > and *also* pass some glance UUID which the user understands will make
> > sure the node actually boots in UEFI mode. Something like:
> >
> > openstack server create --flavor METAL_12CPU_128G --trait SUPPORTS_UEFI
> > --config-data $TURN_ON_UEFI_UUID
> >
> > Note that I pass --trait because I hope that will one day be supported
> > and we can slow down the flavor explosion.
>
> IMO --trait would be making things worse (but see below). I think UEFI
> with Jay's model would be more like:
>
>   openstack server create --flavor METAL_12CPU_128G --config-data $UEFI
>
> where the UEFI profile would be pretty trivial, consisting of
> placement.traits.required = ["BOOT_MODE_UEFI"] and object.boot_mode =
> "uefi".
>
> I agree that this seems kind of heavy, and that it would be nice to be
> able to say "boot mode is UEFI" just once. OTOH I get Jay's point that
> we need to separate the placement decision from the instance configuration.
>
> That said, what if it was:
>
>  openstack config-profile create --name BOOT_MODE_UEFI --json -
>  {
>   "type": "boot_mode_scheme",
>   "version": 123,
>   "object": {
>       "boot_mode": "uefi"
>   },
>   "placement": {
>    "traits": {
>     "required": [
>      "BOOT_MODE_UEFI"
>     ]
>    }
>   }
>  }
>  ^D
>
> And now you could in fact say
>
>  openstack server create --flavor foo --config-profile BOOT_MODE_UEFI
>
> using the profile name, which happens to be the same as the trait name
> because you made it so. Does that satisfy the yen for saying it once? (I
> mean, despite the fact that you first had to say it three times to get
> it set up.)
>
> 
>
> I do want to zoom out a bit and point out that we're talking about
> implementing a new framework of substantial size and impact when the
> original proposal - using the trait for both - would just work out of
> the box today with no changes in either API. Is it really worth it?
>
>
+1000. Reading both of these threads, it feels like we're basically trying
to make something perfect. I think that is a fine goal, except it is
unrealistic because the enemy of good is perfection.


>
> By the way, with Jim's --trait suggestion, this:
>
> > ...dozens of flavors that look like this:
> > - 12CPU_128G_RAID10_DRIVE_LAYOUT_X
> > - 12CPU_128G_RAID5_DRIVE_LAYOUT_X
> > - 12CPU_128G_RAID01_DRIVE_LAYOUT_X
> > - 12CPU_128G_RAID10_DRIVE_LAYOUT_Y
> > - 12CPU_128G_RAID5_DRIVE_LAYOUT_Y
> > - 12CPU_128G_RAID01_DRIVE_LAYOUT_Y
>
> ...could actually become:
>
>  openstack server create --flavor 12CPU_128G --trait $WHICH_RAID --trait $WHICH_LAYOUT
>
> No flavor explosion.
>

++ I believe this was where this discussion kind of ended up in.. ?Dublin?

The desire and discussion that led us into complex configuration templates
and profiles being submitted were for highly complex scenarios where users
wanted to assert detailed raid configurations to disk. Naturally, there are
many issues there. The ability to provide such detail would be awesome for
those 10% of operators that need such functionality. Of course, if that is
the only path forward, then we delay the 90% from getting the minimum
viable feature they need.

>
> (Maybe if we called it something other than --trait, like maybe
> --config-option, it would let us pretend we're not really overloading a
> trait to do config - it's just a coincidence that the config option has
> the same name as the trait it causes to be required.)
>

I feel like it might be confusing, but totally +1 to matching required
trait name being a thing. That way scheduling is completely decoupled and
if everything was correct then the request should already be scheduled
properly.


> -efried
> .
>


Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-10-01 Thread Eric Fried

> So say the user requests a node that supports UEFI because their image
> needs UEFI. Which workflow would you want here?
> 
> 1) The operator (or ironic?) has already configured the node to boot in
> UEFI mode. Only pre-configured nodes advertise the "supports UEFI" trait.
> 
> 2) Any node that supports UEFI mode advertises the trait. Ironic ensures
> that UEFI mode is enabled before provisioning the machine.
> 
> I imagine doing #2 by passing the traits which were specifically
> requested by the user, from Nova to Ironic, so that Ironic can do the
> right thing for the user.
> 
> Your proposal suggests that the user request the "supports UEFI" trait,
> and *also* pass some glance UUID which the user understands will make
> sure the node actually boots in UEFI mode. Something like:
> 
> openstack server create --flavor METAL_12CPU_128G --trait SUPPORTS_UEFI
> --config-data $TURN_ON_UEFI_UUID
> 
> Note that I pass --trait because I hope that will one day be supported
> and we can slow down the flavor explosion.

IMO --trait would be making things worse (but see below). I think UEFI
with Jay's model would be more like:

  openstack server create --flavor METAL_12CPU_128G --config-data $UEFI

where the UEFI profile would be pretty trivial, consisting of
placement.traits.required = ["BOOT_MODE_UEFI"] and object.boot_mode =
"uefi".

I agree that this seems kind of heavy, and that it would be nice to be
able to say "boot mode is UEFI" just once. OTOH I get Jay's point that
we need to separate the placement decision from the instance configuration.

That said, what if it was:

 openstack config-profile create --name BOOT_MODE_UEFI --json -
 {
  "type": "boot_mode_scheme",
  "version": 123,
  "object": {
      "boot_mode": "uefi"
  },
  "placement": {
   "traits": {
    "required": [
     "BOOT_MODE_UEFI"
    ]
   }
  }
 }
 ^D

And now you could in fact say

 openstack server create --flavor foo --config-profile BOOT_MODE_UEFI

using the profile name, which happens to be the same as the trait name
because you made it so. Does that satisfy the yen for saying it once? (I
mean, despite the fact that you first had to say it three times to get
it set up.)
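For what it's worth, the profile's placement half maps naturally onto the
existing allocation-candidates query. A rough sketch (the resource amounts
and the profile-to-flavor mapping are made up, and real code would go
through a client library rather than building URLs by hand):

```python
from urllib.parse import urlencode

# Hypothetical config profile mirroring the JSON above; only the
# placement half is consumed when building the scheduling query.
profile = {
    "object": {"boot_mode": "uefi"},
    "placement": {"traits": {"required": ["BOOT_MODE_UEFI"]}},
}

query = urlencode({
    "resources": "VCPU:12,MEMORY_MB:131072",
    "required": ",".join(profile["placement"]["traits"]["required"]),
})
print("/allocation_candidates?" + query)
```

The object half would then travel separately to the service doing the
instance configuration, keeping the two concerns decoupled.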



I do want to zoom out a bit and point out that we're talking about
implementing a new framework of substantial size and impact when the
original proposal - using the trait for both - would just work out of
the box today with no changes in either API. Is it really worth it?



By the way, with Jim's --trait suggestion, this:

> ...dozens of flavors that look like this:
> - 12CPU_128G_RAID10_DRIVE_LAYOUT_X
> - 12CPU_128G_RAID5_DRIVE_LAYOUT_X
> - 12CPU_128G_RAID01_DRIVE_LAYOUT_X
> - 12CPU_128G_RAID10_DRIVE_LAYOUT_Y
> - 12CPU_128G_RAID5_DRIVE_LAYOUT_Y
> - 12CPU_128G_RAID01_DRIVE_LAYOUT_Y

...could actually become:

 openstack server create --flavor 12CPU_128G --trait $WHICH_RAID --trait $WHICH_LAYOUT

No flavor explosion.
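The arithmetic behind the explosion is easy to see in a throwaway sketch
(flavor and trait names are only illustrative):

```python
from itertools import product

raids = ["RAID10", "RAID5", "RAID01"]
layouts = ["DRIVE_LAYOUT_X", "DRIVE_LAYOUT_Y"]

# Baking every combination into flavor names grows multiplicatively:
flavors = ["12CPU_128G_%s_%s" % combo for combo in product(raids, layouts)]
assert len(flavors) == len(raids) * len(layouts)  # 6 flavors already

# Requesting the axes as independent traits grows only additively:
assert len(raids) + len(layouts) == 5  # one flavor plus five traits
```

Add a third axis with three options and the flavor count triples again,
while the trait count only gains three entries.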

(Maybe if we called it something other than --trait, like maybe
--config-option, it would let us pretend we're not really overloading a
trait to do config - it's just a coincidence that the config option has
the same name as the trait it causes to be required.)

-efried
.



[openstack-dev] [neutron] Bug deputy report week of Sept 24

2018-10-01 Thread Nate Johnston
All,

Here is my Bug Deputy report for the week of 9/24 - 10/1. It was a busy
week! To my recollection, pretty much anything that doesn't have a
patchset next to it is unowned and available to work on.

High priority
==
* https://bugs.launchpad.net/bugs/1794406 - neutron.objects lost PortForwarding in 
setup.cfg
- Fix released: https://review.openstack.org/605302

* https://bugs.launchpad.net/bugs/1794545 - 
PlacementAPIClient.update_resource_class wrong client call, missing argument
- Fix in progress: https://review.openstack.org/605455

* https://bugs.launchpad.net/bugs/1795280 - netns deletion on newer kernels 
fails with errno 16
- May or may not be a Neutron issue, applies at kernel version 4.18 but not 
3.10.  Left unconfirmed but checking it out within Red Hat.

* https://bugs.launchpad.net/bugs/1795482 - Deleting network namespaces 
sometimes fails in check/gate queue with ENOENT
- Fix in progress: https://review.openstack.org/607009

Medium priority
==
* https://bugs.launchpad.net/bugs/1794305 - [dvr_no_external][ha] centralized 
fip shows up in the backup snat-namespace on the restored host (restarting or 
down/up)
- Fix in progress: https://review.openstack.org/605359  
https://review.openstack.org/606384

* https://bugs.launchpad.net/bugs/1794809 - Gateway ports are down after reboot 
of control plane nodes
- Fix in progress: https://review.openstack.org/606085

* https://bugs.launchpad.net/bugs/1794865 - Failed to check policy on listing 
loggable resources

* https://bugs.launchpad.net/bugs/1794870 - NetworkNotFound failures on network 
test teardown because of retries due to the initial request taking >60 seconds

* https://bugs.launchpad.net/bugs/1794809 - Gateway ports are down after reboot 
of control plane nodes
- Similar to https://bugs.launchpad.net/neutron/+bug/1793529 but not quite 
the same
- Fix in progress: https://review.openstack.org/606085

* https://bugs.launchpad.net/bugs/1794259 - rocky upgrade path broken 
requirements pecan too low
- Fix released: https://review.openstack.org/605027

* https://bugs.launchpad.net/bugs/1794535 - Consider all router ports for dvr 
arp updates
- Fix in progress: https://review.openstack.org/605434

* https://bugs.launchpad.net/bugs/1795126 - Race condition of DHCP agent 
updating port after ownership removed and given to another agent will cause 
extra port creation
* Fix in progress: https://review.openstack.org/606383

* https://bugs.launchpad.net/bugs/1794424 - trunk: can not delete bound trunk 
for agent which allow create trunk on bound port
- Fix in progress: https://review.openstack.org/605589 

Low priority
==
* https://bugs.launchpad.net/bugs/1795127 - [dvr][ha] router state change cause 
unnecessary router_update

Invalid/Won't Fix
==
* https://bugs.launchpad.net/bugs/1794569 - DVR with static routes may cause 
routed traffic to be dropped
- Marked invalid since it was filed against Newton (neutron 9.4.1)

* https://bugs.launchpad.net/bugs/1794695 - resources can't be filtered by 
tag-related parameters
- Marked 'Won't Fix' because neutronclient is deprecated, but the 
functionality works correctly in openstackclient.

* https://bugs.launchpad.net/bugs/1794919 - [RFE] To decide create port with 
specific IP version
- Marked 'Opinion'; can accomplish the same goal with current parameters 
for creating a port.

RFE/Wishlist
==
* https://bugs.launchpad.net/bugs/1795212 - [RFE] Prevent DHCP agent from 
processing stale RPC messages when restarting

* https://bugs.launchpad.net/bugs/1794771 - SRIOV trunk port - multiple vlans 
on same VF

Still Under Discussion
==
* https://bugs.launchpad.net/bugs/1794450 - When creating a server instance 
with an IPv4 and an IPv6 addresses, the IPv6 is not assigned

* https://bugs.launchpad.net/bugs/1794991 - Inconsistent flows with DVR l2pop 
VxLAN on br-tun

* https://bugs.launchpad.net/bugs/1795432 - neutron does not create the 
necessary iptables rules for dhcp agents when linuxbridge is used
- Reporter notes that this is a variant scenario of 
https://bugs.launchpad.net/neutron/+bug/1720205, which was fixed in June

Thanks,

Nate

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][neutron] SmartNics with Ironic

2018-10-01 Thread Julia Kreger
Greetings. Comments are in-line.

Thanks,

-Julia

On Sat, Sep 29, 2018 at 11:27 PM Moshe Levi  wrote:

> Hi Julia,
>
>
>
> I don't mind updating the ironic spec [1]. Unfortunately, I wasn't at the
> PTG, but I had a sync meeting with Isaku.
>
>
>
> As I see it there are 2 use-cases:
>
>1. Running the neutron ovs agent in the smartnic
>2. Running the neutron super ovs agent which manages the ovs running on
>the smartnic.
>
>
My takeaway from the meeting with neutron is that there would not be a
neutron ovs agent running on the smartnic; the configuration would
need to be pushed at all times. That is ultimately better security-wise:
if the tenant NIC is somehow compromised, it reduces the control plane exposure.


>
> It seems that most of the discussion was around the second use-case.
>

By the time Ironic and Neutron met together, it seemed like the first use
case was no longer under consideration. I may be wrong, but a very strong
preference existed for the second scenario when we met the next day.


>
> This is my understanding on the ironic neutron PTG meeting:
>
>1. Ironic cores don't want to change the deployment interface as
>proposed in [1].
>2. We should add a new network_interface for use case 2. But what about
>the first use case? Should it be a new network_interface as well?
>3. We should delay the port binding until the baremetal is powered on
>and the ovs is running.
>   1. For the first use case I was thinking to change the neutron
>   server to just keep the port binding information in the neutron DB. Then
>   when the neutron ovs agent is alive it will retrieve all the baremetal
>   ports, add them to the ovsdb and start the neutron ovs agent fullsync.
>   2. For the second use case the agent is alive so the agent itself
>   can monitor the ovsdb of the baremetal and configure it when it is up
>4. How to notify that the neutron agent successfully/unsuccessfully binds
>the port?
>   1. In both use-cases we should use neutron-ironic notification to
>   make sure the port binding was done successfully.
>
>
>
> Is my understanding correct?
>
>
Not quite.

1) We as in Ironic recognize that there would need to be changes; it is the
method as to how that we would prefer to be explicit about and have chosen
by the interface. The underlying behavior needs to be different, and the new
network_interface should support both cases 1 and 2, because that interface
contains the needed logic for the conductor to determine the appropriate path
forward. We should likely also put some guards in to prevent non-smart
interfaces from being used in the same configuration, due to the security
issues that creates.
3) I believe this would be more of a matter of the network_interface
knowing that the machine is powered up, and attempting to assert
configuration through Neutron to push configuration to the smartnic.
3a) The consensus is that the information to access the smartnic is
hardware configuration metadata and that ironic should be the source of
truth for information about that hardware. The discussion was to push that as
needed into neutron to help enable the attachment. I proposed just
including it in the binding profile as a possibility, since it is transient
information.
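As a purely hypothetical illustration of that binding-profile idea (every key name under binding:profile below is invented, not an agreed-upon format), the transient access information might ride along in a port update like this:

```python
# Hypothetical port-update body: ironic, as the source of truth for the
# hardware, passes transient smartnic access info to neutron inside the
# binding profile. The smartnic_* key names are made up for illustration.
port_update = {
    'port': {
        'binding:host_id': 'ironic-node-0001',
        'binding:profile': {
            'smartnic_ip': '192.0.2.10',       # management address of the NIC
            'smartnic_ovsdb_port': 6640,       # where the agent would connect
            # credentials / certificates would also need to travel here
        },
    }
}
print(sorted(port_update['port']['binding:profile']))
```

Because the profile is sent on every port action, nothing here needs to persist in neutron beyond the lifetime of the binding, which matches the "transient information" point above.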
3b) As I understood it, this would ultimately be the default operating
behavior.
4) Was not discussed, but something along the path is going to have to
check and retry as necessary. That item could be in the network_interface
code.
4a) This doesn't exist yet.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][neutron] SmartNics with Ironic

2018-10-01 Thread Isaku Yamahata
Hello. I'm willing to help with this.

For detailed tech discussion, gerrit with an updated spec would be better, though.
I'd like to add some detail on Neutron port binding.

On Sun, Sep 30, 2018 at 06:25:58AM +,
Moshe Levi  wrote:

> Hi Julia,
> 
> I don't mind updating the ironic spec [1]. Unfortunately, I wasn't at the 
> PTG, but I had a sync meeting with Isaku.
> 
> As I see it there are 2 use-cases:
> 
>   1.  Running the neutron ovs agent in the smartnic
>   2.  Running the neutron super ovs agent which manages the ovs running on the 
> smartnic.
> 
> It seems that most of the discussion was around the second use-case.
> 
> This is my understanding on the ironic neutron PTG meeting:
> 
>   1.  Ironic cores don't want to change the deployment interface as proposed 
> in [1].
>   2.  We should add a new network_interface for use case 2. But what about the 
> first use case? Should it be a new network_interface as well?

  * Which component, Ironic or Neutron, should take care that the
SmartNIC is up/ready?
  * How is the up/readiness of the smartnic defined?
In a common way or in a device-dependent way? For example:
- able to connect via ovsdb/openflow
- agent responsible for smartNIC has reported to Neutron agentdb.
- able to ssh into smartnic
- device specific way with driver for each device.
- etc.
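To make the first option above concrete, a readiness probe could be as small as a TCP reachability check against the OVSDB port. This is only a sketch under assumptions: the function name is invented, and 6640 is merely a commonly used OVSDB manager port, not necessarily what a given smartnic exposes.

```python
import socket


def ovsdb_reachable(host, port=6640, timeout=2.0):
    """Return True if a TCP connection to the (assumed) OVSDB port succeeds.

    This only proves network reachability; a real readiness check would
    likely also speak the OVSDB protocol or consult the agent DB.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A device-specific driver could wrap several such probes (ovsdb, ssh, agent heartbeat) and report readiness only when all of them pass.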


>   3.  We should delay the port binding until the baremetal is powered on and the 
> ovs is running.
>  *   For the first use case I was thinking to change the neutron server 
> to just keep the port binding information in the neutron DB. Then when the 
> neutron ovs agent is alive it will retrieve all the baremetal ports, add them 
> to the ovsdb and start the neutron ovs agent fullsync.
>  *   For the second use case the agent is alive so the agent itself can 
> monitor the ovsdb of the baremetal and configure it when it is up

Can you please elaborate on why port binding for the smartnic should be delayed?
I'm failing to see the necessity. Probably I'm missing something.
Since we can skip the check of agent liveness, on the assumption that
the hostid given by Ironic is correct, we don't have to delay the port binding
in either case, 1 or 2 as above.


>   4.  How to notify that the neutron agent successfully/unsuccessfully binds the 
> port?
>  *   In both use-cases we should use neutron-ironic notification to make 
> sure the port binding was done successfully.

I agree that neutron-ironic notification is necessary.

There seems to be some confusion between port-binding and ovs-programming. The
success/failure of port-binding is the result of a neutron port-update.
The current code synchronously checks the ovs-agent liveness on the host and
parameter validity. Port-binding doesn't directly/synchronously program ovs.
It only triggers ovs programming to start eventually.

On the other hand, the success/failure of ovs programming is
asynchronous with regard to the neutron REST API, because the ovs-agent does it
asynchronously on behalf of the neutron server.
So here neutron-ironic notification is necessary.
When the ovs programming is done, the agent sets port::status = UP from DOWN.
(and nova is notified that the port is ready to use.)
In the case of failure, port::status is set to ERROR.
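The consequence for a consumer such as ironic is that a successful port-update only means binding succeeded; it still has to wait for the asynchronous status transition (or a notification) to learn the ovs-programming result. A minimal polling sketch, where get_status stands in for a real neutron client call and is an assumption of this sketch:

```python
import time


def wait_for_port_status(get_status, timeout=60.0, interval=2.0):
    """Poll until the port leaves DOWN.

    UP means the asynchronous ovs programming succeeded; ERROR means it
    failed. `get_status` is a callable returning the current port status.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == 'UP':
            return True
        if status == 'ERROR':
            raise RuntimeError('ovs programming failed')
        time.sleep(interval)
    raise TimeoutError('port stayed DOWN past the deadline')


# Example with canned statuses instead of a live neutron client:
statuses = iter(['DOWN', 'DOWN', 'UP'])
print(wait_for_port_status(lambda: next(statuses), interval=0.0))  # True
```

A notification-based design would replace the polling loop, but the UP/ERROR terminal states stay the same.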

Thanks,

>
> Is my understanding correct?
> 
> 
> 
> [1] - https://review.openstack.org/#/c/582767/
> 
> From: Miguel Lavalle 
> Sent: Sunday, September 30, 2018 3:20 AM
> To: OpenStack Development Mailing List 
> Subject: Re: [openstack-dev] [ironic][neutron] SmartNics with Ironic
> 
> Hi,
> 
> Yes, this also matches the recollection of the joint conversation in Denver. 
> Please look at the "Ironic x-project discussion - Smartnics" section in 
> http://lists.openstack.org/pipermail/openstack-dev/2018-September/135032.html
> 
> Regards
> 
> Miguel
> 
> 
> 
> On Thu, Sep 27, 2018 at 1:31 PM Julia Kreger wrote:
> Greetings everyone,
> 
> Now that the PTG is over, I would like to go ahead and get the
> specification that was proposed to ironic-specs updated to represent
> the discussions that took place at the PTG.
> 
> A few highlights from my recollection:
> 
> * Ironic being the source of truth for the hardware configuration for
> the neutron agent to determine where to push configuration to. This
> would include the address and credential information (certificates
> right?).
> * The information required is somehow sent to neutron (possibly as
> part of the binding profile, which we could send at each time port
> actions are requested by Ironic.)
> * The Neutron agent running on the control plane connects outbound to
> the smartnic, using information supplied to perform the appropriate
> 

Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-10-01 Thread Dan Smith
>> I still want to use something like "Is capable of RAID5" and/or "Has
>> RAID5 already configured" as part of a scheduling and placement
>> decision. Being able to have the GET /a_c response filtered down to
>> providers with those, ahem, traits is the exact purpose of that operation.
>
> And yep, I have zero problem with this either, as I've noted. This is
> precisely what placement and traits were designed for.

Same.

>> While we're in the neighborhood, we agreed in Denver to use a trait to
>> indicate which service "owns" a provider [1], so we can eventually
>> coordinate a smooth handoff of e.g. a device provider from nova to
>> cyborg. This is certainly not a capability (but it is a trait), and it
>> can certainly be construed as a key/value (owning_service=cyborg). Are
>> we rescinding that decision?
>
> Unfortunately I have zero recollection of a conversation about using
> traits for indicating who "owns" a provider. :(

I definitely do.

> I don't think I would support such a thing -- rather, I would support
> adding an attribute to the provider model itself for an owning service
> or such thing.
>
> That's a great example of where the attribute has specific conceptual
> meaning to placement (the concept of ownership) and should definitely
> not be tucked away, encoded into a trait string.

No, as I recall it means nothing to placement - it means something to
the consumers. A gentleperson's agreement for identifying who owns what
if we're going to, say, remove things that might be stale from placement
when updating the provider tree.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][qa] Should we enable multiattach in tempest-full?

2018-10-01 Thread Jay S Bryant



On 10/1/2018 10:28 AM, Matt Riedemann wrote:

On 10/1/2018 8:37 AM, Ghanshyam Mann wrote:
+1 on adding multiattach to the integrated job. It is always good to 
cover more features in the integrated gate instead of separate jobs. These 
tests do not take much time, so it should be ok to add them to tempest-full 
[1]. We should mark only really slow tests as 'slow'; otherwise it 
should be fine to run them in tempest-full.


I thought adding tempest-slow on cinder was merged, but it is not [2]

[1]http://logs.openstack.org/80/606880/2/check/nova-multiattach/7f8681e/job-output.txt.gz#_2018-10-01_10_12_55_482653 


[2]https://review.openstack.org/#/c/591354/2


Sean and I are working on getting those changes merged.  So, that will 
be good.
Actually it will be enabled in both tempest-full and tempest-slow, 
because there is also a multiattach test marked as 'slow': 
TestMultiAttachVolumeSwap.


I'll push patches today.


Thank you!  I think this is the right way to go.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-10-01 Thread Jay Pipes

On 10/01/2018 01:20 PM, Eric Fried wrote:

I agree that we should not overload placement as a mechanism to pass
configuration information ("set up RAID5 on my storage, please") to the
driver. So let's put that aside. (Looking forward to that spec.)


ack.


I still want to use something like "Is capable of RAID5" and/or "Has
RAID5 already configured" as part of a scheduling and placement
decision. Being able to have the GET /a_c response filtered down to
providers with those, ahem, traits is the exact purpose of that operation.


And yep, I have zero problem with this either, as I've noted. This is 
precisely what placement and traits were designed for.



While we're in the neighborhood, we agreed in Denver to use a trait to
indicate which service "owns" a provider [1], so we can eventually
coordinate a smooth handoff of e.g. a device provider from nova to
cyborg. This is certainly not a capability (but it is a trait), and it
can certainly be construed as a key/value (owning_service=cyborg). Are
we rescinding that decision?


Unfortunately I have zero recollection of a conversation about using 
traits for indicating who "owns" a provider. :(


I don't think I would support such a thing -- rather, I would support 
adding an attribute to the provider model itself for an owning service 
or such thing.


That's a great example of where the attribute has specific conceptual 
meaning to placement (the concept of ownership) and should definitely 
not be tucked away, encoded into a trait string.


OK, I'll get back to that spec now... :)

Best,
-jay


[1] https://review.openstack.org/#/c/602160/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-10-01 Thread Eric Fried


On 09/29/2018 10:40 AM, Jay Pipes wrote:
> On 09/28/2018 04:36 PM, Eric Fried wrote:
>> So here it is. Two of the top influencers in placement, one saying we
>> shouldn't overload traits, the other saying we shouldn't add a primitive
>> that would obviate the need for that. Historically, this kind of
>> disagreement seems to result in an impasse: neither thing happens and
>> those who would benefit are forced to find a workaround or punt.
>> Frankly, I don't particularly care which way we go; I just want to be
>> able to do the things.
> 
> I don't think that's a fair statement. You absolutely *do* care which
> way we go. You want to encode multiple bits of information into a trait
> string -- such as "PCI_ADDRESS_01_AB_23_CD" -- and leave it up to the
> caller to have to understand that this trait string has multiple bits of
> information encoded in it (the fact that it's a PCI device and that the
> PCI device is at 01_AB_23_CD).

It was an oversimplification to say I don't care. I would like, ideally,
long-term, to see a true key/value primitive, because I think it's much
more powerful and less hacky. But am sympathetic to what Chris brought
up about full plate and timeline. So while we're waiting for that to fit
into the schedule, I wouldn't mind the ability to use encoded traits to
some extent to satisfy the use cases.

> You don't see a problem encoding these variants inside a string. Chris
> doesn't either.

Yeah, I see the problem, and I don't love the idea - as I say, I would
prefer a true key/value primitive. But I would rather use encoded traits
as a temporary measure to get stuff done than a) work around things with
a mess of extra specs and/or b) wait, potentially until the heat death
of the universe if we remain deadlocked on whether a key/value primitive
should happen.

> 
> I *do* see a problem with it, based on my experience in Nova where this
> kind of thing leads to ugly, unmaintainable, and incomprehensible code
> as I have pointed to in previous responses.
> 
> Furthermore, your point isn't that "you just want to be able to do the
> things". Your point (and the point of others, from Cyborg and Ironic) is
> that you want to be able to use placement to pass various bits of
> information to an instance, and placement wasn't designed for that
> purpose. Nova was.
>
> So, instead of working out a solution with the Nova team for passing
> configuration data about an instance, the proposed solution is instead
> to hack/encode multiple bits of information into a trait string. This
> proposed solution is seen as a way around having to work out a more
> appropriate solution that has Nova pass that configuration data (as is
> appropriate, since nova is the project that manages instances) to the
> virt driver or generic device manager (i.e. Cyborg) before the instance
> spawns.

I agree that we should not overload placement as a mechanism to pass
configuration information ("set up RAID5 on my storage, please") to the
driver. So let's put that aside. (Looking forward to that spec.)

I still want to use something like "Is capable of RAID5" and/or "Has
RAID5 already configured" as part of a scheduling and placement
decision. Being able to have the GET /a_c response filtered down to
providers with those, ahem, traits is the exact purpose of that operation.

While we're in the neighborhood, we agreed in Denver to use a trait to
indicate which service "owns" a provider [1], so we can eventually
coordinate a smooth handoff of e.g. a device provider from nova to
cyborg. This is certainly not a capability (but it is a trait), and it
can certainly be construed as a key/value (owning_service=cyborg). Are
we rescinding that decision?

[1] https://review.openstack.org/#/c/602160/

> I'm working on a spec that will describe a way for the user to instruct
> Nova to pass configuration data to the virt driver (or device manager)
> before instance spawn. This will have nothing to do with placement or
> traits, since this configuration data is not modeling scheduling and
> placement decisions.
> 
> I hope to have that spec done by Monday so we can discuss on the spec.
> 
> Best,
> -jay
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-10-01 Thread Dan Smith
> It sounds like you might be saying, "I would rather not see encoded
> trait names OR a new key/value primitive; but if the alternative is
> ending up with 'a much larger mess', I would accept..." ...which?
>
> Or is it, "We should not implement a key/value primitive, nor should we
> implement restrictions on trait names; but we should continue to
> discourage (ab)use of trait names by steering placement consumers to..."
> ...do what?

The second one.

> The restriction is real, not perceived. Without key/value (either
> encoded or explicit), how should we steer placement consumers to satisfy
> e.g., "Give me disk from a provider with RAID5"?

Sure, I'm not doubting the need to find providers with certain
abilities. What I'm saying (and I assume Jay is as well), is that
finding things with more domain-specific attributes is the job of the
domain controller (i.e. nova). Placement's strength, IMHO, is the
unified and extremely simple data model and consistency guarantees that
it provides. It takes a lot of the work of searching and atomic
accounting of enumerable and qualitative things out of the scheduler of
the consumer. IMHO, it doesn't (i.e. won't ever) and shouldn't replace
all the things that nova's scheduler needs to do. I think it's useful to
draw the line in front of a full-blown key=value store and DSL grammar
for querying everything with all the operations anyone could ever need.

Unifying the simpler and more common bits into placement and keeping the
domain-specific consideration and advanced filtering of the results in
nova/ironic/etc is the right separation of responsibilities, IMHO. RAID
level is, of course, an overly simplistic example to use, which makes
the problem seem small, but we know more complicated examples exist.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-10-01 Thread Eric Fried
Dan-

On 10/01/2018 10:06 AM, Dan Smith wrote:
> I was out when much of this conversation happened, so I'm going to
> summarize my opinion here.
> 
>> So from a code perspective _placement_ is completely agnostic to
>> whether a trait is "PCI_ADDRESS_01_AB_23_CD", "STORAGE_DISK_SSD", or
>> "JAY_LIKES_CRUNCHIE_BARS".
>>
>> However, things which are using traits (e.g., nova, ironic) need to
>> make their own decisions about how the value of traits are
>> interpreted. I don't have a strong position on that except to say
>> that _if_ we end up in a position of there being lots of traits
>> willy nilly, people who have chosen to do that need to know that the
>> contract presented by traits right now (present or not present, no
>> value comprehension) is fixed.
> 
> I agree with what Chris holds sacred here, which is that placement
> shouldn't ever care about what the trait names are or what they mean to
> someone else. That also extends to me hoping we never implement a
> generic key=value store on resource providers in placement.
> 
>>> I *do* see a problem with it, based on my experience in Nova where
>>> this kind of thing leads to ugly, unmaintainable, and
>>> incomprehensible code as I have pointed to in previous responses.
> 
> I definitely agree with what Jay holds sacred here, which is that
> abusing the data model to encode key=value information into single trait
> strings is bad (which is what you're doing with something like
> PCI_ADDRESS_01_AB_23_CD).
> 
> I don't want placement (the code) to try to put any technical
> restrictions on the meaning of trait names, in an attempt to try to
> prevent the above abuse. I agree that means people _can_ abuse it if
> they wish, which I think is Chris' point. However, I think it _is_
> important for the placement team (the people) to care about how
> consumers (nova, etc) use traits, and thus provide guidance on that as
> necessary. Not everyone will follow that guidance, but we should provide
> it. Projects with history-revering developers on both sides of the fence
> can help this effort if they lead by example.
> 
> If everyone goes off and implements their way around the perceived
> restriction of not being able to ask placement for RAID_LEVEL>=5, we're
> going to have a much larger mess than the steaming pile of extra specs
> in nova that we're trying to avoid.

Sorry, I'm having a hard time understanding where you're landing here.

It sounds like you might be saying, "I would rather not see encoded
trait names OR a new key/value primitive; but if the alternative is
ending up with 'a much larger mess', I would accept..." ...which?

Or is it, "We should not implement a key/value primitive, nor should we
implement restrictions on trait names; but we should continue to
discourage (ab)use of trait names by steering placement consumers to..."
...do what?

The restriction is real, not perceived. Without key/value (either
encoded or explicit), how should we steer placement consumers to satisfy
e.g., "Give me disk from a provider with RAID5"?

(Put aside the ability to do comparisons other than straight equality -
so limiting the discussion to RAID_LEVEL=5, ignoring RAID_LEVEL>=5. Also
limiting the discussion to what we want out of GET /a_c - so this
excludes, "And then go configure RAID5 on my storage.")

> 
> --Dan
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][plugins] npm jobs fail due to new XStatic-jQuery release (was: Horizon gates are broken)

2018-10-01 Thread Duc Truong
Hi Shu,

Thanks for proposing your fix.  It looks good to me.  I have submitted
a similar patch for senlin-dashboard to unblock the broken gate test [1].

[1] https://review.openstack.org/#/c/607003/

Regards,

Duc (dtruong)
On Fri, Sep 28, 2018 at 2:24 AM Shu M.  wrote:
>
> Hi Ivan,
>
> Thank you for your help with our plugins, and sorry for bothering you.
> I found a problem with installing horizon in "post-install"; i.e., we should 
> install horizon with upper-constraints.txt in "post-install".
> I proposed a patch [1] in zun-ui, please check it. If we can merge this, I will 
> extend it to the other remaining plugins.
>
> [1] https://review.openstack.org/#/c/606010/
>
> Thanks,
> Shu Muto
>
> 2018年9月28日(金) 3:34 Ivan Kolodyazhny :
>>
>> Hi,
>>
>> Unfortunately, this issue affects some of the plugins too :(. At least the 
>> gates for magnum-ui, senlin-dashboard, zaqar-ui and zun-ui are broken now. I'm 
>> working with the project teams to fix it asap. Let's see if [5] helps for 
>> senlin-dashboard and then fix all the rest of the plugins.
>>
>>
>> [5] https://review.openstack.org/#/c/605826/
>>
>> Regards,
>> Ivan Kolodyazhny,
>> http://blog.e0ne.info/
>>
>>
>> On Wed, Sep 26, 2018 at 4:50 PM Ivan Kolodyazhny  wrote:
>>>
>>> Hi all,
>>>
>>> Patch [1] is merged and our gates are un-blocked now. I went through the 
>>> review list and posted 'recheck' where it was needed.
>>>
>>> We need to cherry-pick this fix to stable releases too. I'll do it asap
>>>
>>> Regards,
>>> Ivan Kolodyazhny,
>>> http://blog.e0ne.info/
>>>
>>>
>>> On Mon, Sep 24, 2018 at 11:18 AM Ivan Kolodyazhny  wrote:

 Hi team,

 Unfortunately, horizon gates are broken now. We can't merge any patch due 
 to the -1 from CI.
 I don't want to disable tests now, that's why I proposed a fix [1].

 Some of the XStatic-* packages were released last week. At least the new 
 XStatic-jQuery [2] breaks horizon [3]. I'm working on a new job for 
 the requirements repo [4] to prevent such issues in the future.

 Please do not try 'recheck' until [1] is merged.

 [1] https://review.openstack.org/#/c/604611/
 [2] https://pypi.org/project/XStatic-jQuery/#history
 [3] https://bugs.launchpad.net/horizon/+bug/1794028
 [4] https://review.openstack.org/#/c/604613/

 Regards,
 Ivan Kolodyazhny,
 http://blog.e0ne.info/
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Release-job-failures] Release of openstack/os-log-merger failed

2018-10-01 Thread Doug Hellmann
Miguel Angel Ajo Pelayo  writes:

> Thank you for the guidance and ping Doug.
>
> Was this triggered by [1] ? or By the 1.1.0 tag pushed to gerrit?

The release jobs are always triggered by the git tagging event. The
patches in openstack/releases run a job that adds tags, but the patch
you linked to hasn't been merged yet, so it looks like it was caused by
pushing the tag manually.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python3-first] support in stable branches

2018-10-01 Thread Doug Hellmann
Dariusz Krol  writes:

> Hello Doug,
>
> thanks for your explanation. I was a little bit confused by changes to 
> stable branches with the python3-first topic, as I thought it had 
> something to do with adding new test configurations for python3.
>
> But as you explained, this is about moving the zuul-related configuration, 
> which is part of the python3-first goal (but it is not related to 
> projects' support for python3, IMHO :) )
>
> Anyway, it is now clear to me and sorry for making this confusion.

Thanks for asking for clarification!

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stadium][networking] Seeking proposals for non-voting Stadium projects in Neutron check queue

2018-10-01 Thread Miguel Lavalle
Hi Takashi,

We are open to your suggestion. What job do you think will be helpful in
minimizing the possibility of Neutron patches breaking your project?

Best regards

On Sun, Sep 30, 2018 at 11:25 PM Takashi Yamamoto 
wrote:

> hi,
>
> what kind of jobs should they be?  e.g. unit tests, tempest, etc.
> On Sun, Sep 30, 2018 at 9:43 AM Miguel Lavalle 
> wrote:
> >
> > Dear networking Stackers,
> >
> > During the recent PTG in Denver, we discussed measures to prevent
> patches merged in the Neutron repo breaking Stadium and related networking
> projects in general. We decided to implement the following:
> >
> > 1) For Stadium projects, we want to add non-voting jobs to the Neutron
> check queue
> > 2) For non stadium projects, we are inviting them to add 3rd party CI
> jobs
> >
> > The next step is for each project to propose the jobs that they want to
> run against Neutron patches.
> >
> > Best regards
> >
> > Miguel
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-10-01 Thread Dan Smith
I was out when much of this conversation happened, so I'm going to
summarize my opinion here.

> So from a code perspective _placement_ is completely agnostic to
> whether a trait is "PCI_ADDRESS_01_AB_23_CD", "STORAGE_DISK_SSD", or
> "JAY_LIKES_CRUNCHIE_BARS".
>
> However, things which are using traits (e.g., nova, ironic) need to
> make their own decisions about how the value of traits are
> interpreted. I don't have a strong position on that except to say
> that _if_ we end up in a position of there being lots of traits
> willy nilly, people who have chosen to do that need to know that the
> contract presented by traits right now (present or not present, no
> value comprehension) is fixed.

I agree with what Chris holds sacred here, which is that placement
shouldn't ever care about what the trait names are or what they mean to
someone else. That also extends to me hoping we never implement a
generic key=value store on resource providers in placement.

>> I *do* see a problem with it, based on my experience in Nova where
>> this kind of thing leads to ugly, unmaintainable, and
>> incomprehensible code as I have pointed to in previous responses.

I definitely agree with what Jay holds sacred here, which is that
abusing the data model to encode key=value information into single trait
strings is bad (which is what you're doing with something like
PCI_ADDRESS_01_AB_23_CD).

I don't want placement (the code) to try to put any technical
restrictions on the meaning of trait names, in an attempt to try to
prevent the above abuse. I agree that means people _can_ abuse it if
they wish, which I think is Chris' point. However, I think it _is_
important for the placement team (the people) to care about how
consumers (nova, etc) use traits, and thus provide guidance where
necessary. Not everyone will follow that guidance, but we should provide
it. Projects with history-revering developers on both sides of the fence
can help this effort if they lead by example.

If everyone goes off and implements their way around the perceived
restriction of not being able to ask placement for RAID_LEVEL>=5, we're
going to have a much larger mess than the steaming pile of extra specs
in nova that we're trying to avoid.
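As an illustration of why that restriction bites: since traits carry no values, a client that wants something like "RAID level >= 5" has to enumerate the qualifying trait names itself. A minimal sketch, using invented CUSTOM_* trait names (this is not a placement API):

```python
# Sketch only: trait names below are invented for illustration.
# Because placement treats traits as opaque booleans, a ">= 5"
# request must be expanded client-side into an any-of set.
KNOWN_RAID_TRAITS = {
    'CUSTOM_RAID0': 0,
    'CUSTOM_RAID1': 1,
    'CUSTOM_RAID5': 5,
    'CUSTOM_RAID6': 6,
    'CUSTOM_RAID10': 10,
}

def raid_at_least(minimum):
    """Expand a numeric constraint into an enumerated set of boolean traits."""
    return {t for t, level in KNOWN_RAID_TRAITS.items() if level >= minimum}

print(sorted(raid_at_least(5)))
```

Every consumer that wants value comparisons ends up re-implementing a table like this, which is exactly the duplicated mess being warned about here.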

--Dan



Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-10-01 Thread Artom Lifshitz
> So from a code perspective _placement_ is completely agnostic to
> whether a trait is "PCI_ADDRESS_01_AB_23_CD", "STORAGE_DISK_SSD", or
> "JAY_LIKES_CRUNCHIE_BARS".

Right, but words have meanings, and everyone is better off if that
meaning is common amongst everyone doing the talking. So if placement
understands traits as "a unitary piece of information that is either
true or false" (ex: HAS_SSD), but nova understands it as "multiple
pieces of information, all of which are either true or false" (ex:
HAS_PCI_DE_AD_BE_EF), then that's asking for trouble. Can it work out?
Probably, but it'll be more by accident than by design, sort of like
French and Spanish sharing certain words, but then having some similar
sounding words mean something completely different.

> However, things which are using traits (e.g., nova, ironic) need to
> make their own decisions about how the value of traits are
> interpreted.

Well... if placement is saying "here's the primitives I can work with
and can expose to my users", but nova is saying "well, we like this
one primitive, but what we really need is this other primitive, and
you don't have it, but we can totally hack this first primitive that
you do have to do what we want"... That's ugly. From what I
understand, Nova needs *resources* (not resource providers) to have
*quantities* of things, and this is not something that placement can
currently work with, which is why we're having this flamewar ;)

> I don't have a strong position on that except to say
> that _if_ we end up in a position of there being lots of traits
> willy nilly, people who have chosen to do that need to know that the
> contract presented by traits right now (present or not present, no
> value comprehension) is fixed.
>
> > I *do* see a problem with it, based on my experience in Nova where this kind
> > of thing leads to ugly, unmaintainable, and incomprehensible code as I have
> > pointed to in previous responses.
>
> I think there are many factors that have led to nova being
> incomprehensible and indeed bad representations is one of them, but
> I think reasonable people can disagree on which factors are the most
> important and with sufficient discussion come to some reasonable
> compromises. I personally feel that while the bad representations
> (encoding stuff in strings or json blobs) thing is a big deal,
> another major factor is a predilection to make new apis, new
> abstractions, and new representations rather than working with and
> adhering to the constraints of the existing ones. This leads to a
> lot of code that encodes business logic in itself (e.g., several
> different ways and layers of indirection to think about allocation
> ratios) rather than working within strong and constraining
> contracts.
>
> From my standpoint there isn't much to talk about here from a
> placement code standpoint. We should clearly document the functional
> contract (and stick to it) and we should come up with exemplars
> for how to make the best use of traits.
>
> I think this conversation could allow us to find those examples.
>
> I don't, however, want placement to be a traffic officer for how
> people do things. In the context of the orchestration between nova
> and ironic and how that interaction happens, nova has every right to
> set some guidelines if it needs to.
>
> --
> Chris Dent   ٩◔̯◔۶   https://anticdent.org/
> freenode: cdent tw: @anticdent



-- 
--
Artom Lifshitz
Software Engineer, OpenStack Compute DFG



Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-10-01 Thread Jim Rollenhagen
On Mon, Oct 1, 2018 at 10:13 AM Jay Pipes  wrote:

> On 10/01/2018 09:01 AM, Jim Rollenhagen wrote:
> > On Mon, Oct 1, 2018 at 8:03 AM Jay Pipes  > > wrote:
> >
> > On 10/01/2018 04:36 AM, John Garbutt wrote:
> >  > On Fri, 28 Sep 2018 at 00:46, Jay Pipes  > 
> >  > >> wrote:
> >  >
> >  > On 09/27/2018 06:23 PM, Matt Riedemann wrote:
> >  >  > On 9/27/2018 3:02 PM, Jay Pipes wrote:
> >  >  >> A great example of this would be the proposed "deploy
> > template"
> >  > from
> >  >  >> [2]. This is nothing more than abusing the placement
> > traits API in
> >  >  >> order to allow passthrough of instance configuration data
> > from the
> >  >  >> nova flavor extra spec directly into the
> nodes.instance_info
> >  > field in
> >  >  >> the Ironic database. It's a hack that is abusing the
> entire
> >  > concept of
> >  >  >> the placement traits concept, IMHO.
> >  >  >>
> >  >  >> We should have a way *in Nova* of allowing instance
> > configuration
> >  >  >> key/value information to be passed through to the virt
> > driver's
> >  >  >> spawn() method, much the same way we provide for
> > user_data that
> >  > gets
> >  >  >> exposed after boot to the guest instance via configdrive
> > or the
> >  >  >> metadata service API. What this deploy template thing is
> > is just a
> >  >  >> hack to get around the fact that nova doesn't have a
> > basic way of
> >  >  >> passing through some collated instance configuration
> > key/value
> >  >  >> information, which is a darn shame and I'm really kind of
> >  > annoyed with
> >  >  >> myself for not noticing this sooner. :(
> >  >  >
> >  >  > We talked about this in Dublin through right? We said a
> good
> >  > thing to do
> >  >  > would be to have some kind of
> template/profile/config/whatever
> >  > stored
> >  >  > off in glare where schema could be registered on that
> > thing, and
> >  > then
> >  >  > you pass a handle (ID reference) to that to nova when
> > creating the
> >  >  > (baremetal) server, nova pulls it down from glare and
> hands it
> >  > off to
> >  >  > the virt driver. It's just that no one is doing that work.
> >  >
> >  > No, nobody is doing that work.
> >  >
> >  > I will if need be if it means not hacking the placement API
> > to serve
> >  > this purpose (for which it wasn't intended).
> >  >
> >  >
> >  > Going back to the point Mark Goddard made, there are two things
> here:
> >  >
> >  > 1) Picking the correct resource provider
> >  > 2) Telling Ironic to transform the picked node in some way
> >  >
> >  > Today we allow the use of Capabilities for both.
> >  >
> >  > I am suggesting we move to using Traits only for (1), leaving (2)
> in
> >  > place for now, while we decide what to do (i.e. future of "deploy
> >  > template" concept).
> >  >
> >  > It feels like Ironic's plan to define the "deploy templates" in
> > Ironic
> >  > should replace the dependency on Glare for this use case, largely
> >  > because the definition of the deploy template (in my mind) is very
> >  > heavily related to inspector and driver properties, etc. Mark is
> > looking
> >  > at moving that forward at the moment.
> >
> > That won't do anything about the flavor explosion problem, though,
> > right
> > John?
> >
> >
> > Does nova still plan to allow passing additional desired traits into the
> > server create request?
> > I (we?) was kind of banking on that to solve the Baskin Robbins thing.
>
> That's precisely what I've been looking into.


Right, well aware of that.


> From what I can tell,
> Ironic was planning on using these CUSTOM_DEPLOY_TEMPLATE_XXX traits in
> two ways:
>
> 1) To tell Nova what scheduling constraints the instance needed -- e.g.
> "hey Nova, make sure I land on a node that supports UEFI boot mode
> because my boot image relies on that".
>
> 2) As a convenient (because it would require no changes to Nova) way of
> passing instance pre-spawn configuration data to the Ironic virt driver
> -- e.g. pass the entire set of traits that are in the RequestSpec's
> flavor and image extra specs to Ironic before calling the Ironic node
> provision API.
>
> #1 is fine IMHO, since it (mostly) represents a "capability" that the
> resource provider (the Ironic baremetal node) must support in order for
> the instance to successfully boot.
>

This is the sort of thing I want to initially target. I understand the
deploy templates thing proposes solving both #1 and #2, but I want to back

Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-10-01 Thread Zane Bitter

On 28/09/18 1:19 PM, Chris Dent wrote:
They aren't arbitrary. They are there for a reason: a trait is a 
boolean capability. It describes something that either a provider is 
capable of supporting or it isn't.


This is somewhat (maybe even only slightly) different from what I
think the definition of a trait is, and that nuance may be relevant.

I describe a trait as a "quality that a resource provider has" (the
car is blue). This contrasts with a resource class which is a
"quantity that a resource provider has" (the car has 4 doors).


I'm not sure that quality vs. quantity is actually the right distinction 
here... someone could equally argue that having 4 doors is itself a 
quality[1] of a car, and they could certainly come up with a formulation 
that obscures the role of quantity at all (the car is a sedan).


I think the actual distinction you're describing is between discrete (or 
perhaps just enumerable) and continuous (or at least innumerable) values.


What that misses is that if the car is blue, it cannot also be green. 
Since placement absolutely should not know anything at all about the 
meaning of traits, this means that clients will be required to implement 
a bunch of business logic to maintain consistency. Furthermore, should 
the colour of the car change from blue to green at some point in the 
future[2], I am assuming that placement will not offer an API that 
allows both traits to be updated atomically. Those are problems that 
key-value solves.
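To make that concrete, here is a minimal sketch of the client-side business logic being described; the trait names are invented and the "atomicity" only holds within the caller's own process:

```python
# Sketch: placement stores opaque trait strings, so mutual exclusion
# within a group (a car cannot be both blue and green) must be
# enforced by every client. Trait names are illustrative.
CAR_COLOR = frozenset({'CUSTOM_CAR_BLUE', 'CUSTOM_CAR_GREEN'})

def set_exclusive_trait(traits, group, new_trait):
    """Return a new trait set with new_trait replacing any other group member."""
    if new_trait not in group:
        raise ValueError('%s is not a member of the group' % new_trait)
    return (set(traits) - group) | {new_trait}

traits = {'CUSTOM_CAR_BLUE', 'STORAGE_DISK_SSD'}
traits = set_exclusive_trait(traits, CAR_COLOR, 'CUSTOM_CAR_GREEN')
print(sorted(traits))  # ['CUSTOM_CAR_GREEN', 'STORAGE_DISK_SSD']
```

Each client has to carry (and agree on) this grouping logic, since the server cannot.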


It could be the case that those problems are not considered important in 
this context; if so I'd expect to see the reasons explained as part of 
this discussion.


cheers,
Zane.

[1] Resisting the urge to quote Stalin here.
[2] https://en.wikipedia.org/wiki/New_riddle_of_induction



Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-10-01 Thread Jay Pipes

On 10/01/2018 09:01 AM, Jim Rollenhagen wrote:
On Mon, Oct 1, 2018 at 8:03 AM Jay Pipes > wrote:


On 10/01/2018 04:36 AM, John Garbutt wrote:
 > On Fri, 28 Sep 2018 at 00:46, Jay Pipes mailto:jaypi...@gmail.com>
 > >> wrote:
 >
 >     On 09/27/2018 06:23 PM, Matt Riedemann wrote:
 >      > On 9/27/2018 3:02 PM, Jay Pipes wrote:
 >      >> A great example of this would be the proposed "deploy
template"
 >     from
 >      >> [2]. This is nothing more than abusing the placement
traits API in
 >      >> order to allow passthrough of instance configuration data
from the
 >      >> nova flavor extra spec directly into the nodes.instance_info
 >     field in
 >      >> the Ironic database. It's a hack that is abusing the entire
 >     concept of
 >      >> the placement traits concept, IMHO.
 >      >>
 >      >> We should have a way *in Nova* of allowing instance
configuration
 >      >> key/value information to be passed through to the virt
driver's
 >      >> spawn() method, much the same way we provide for
user_data that
 >     gets
 >      >> exposed after boot to the guest instance via configdrive
or the
 >      >> metadata service API. What this deploy template thing is
is just a
 >      >> hack to get around the fact that nova doesn't have a
basic way of
 >      >> passing through some collated instance configuration
key/value
 >      >> information, which is a darn shame and I'm really kind of
 >     annoyed with
 >      >> myself for not noticing this sooner. :(
 >      >
 >      > We talked about this in Dublin through right? We said a good
 >     thing to do
 >      > would be to have some kind of template/profile/config/whatever
 >     stored
 >      > off in glare where schema could be registered on that
thing, and
 >     then
 >      > you pass a handle (ID reference) to that to nova when
creating the
 >      > (baremetal) server, nova pulls it down from glare and hands it
 >     off to
 >      > the virt driver. It's just that no one is doing that work.
 >
 >     No, nobody is doing that work.
 >
 >     I will if need be if it means not hacking the placement API
to serve
 >     this purpose (for which it wasn't intended).
 >
 >
 > Going back to the point Mark Goddard made, there are two things here:
 >
 > 1) Picking the correct resource provider
 > 2) Telling Ironic to transform the picked node in some way
 >
 >  > Today we allow the use of Capabilities for both.
 >
 > I am suggesting we move to using Traits only for (1), leaving (2) in
 > place for now, while we decide what to do (i.e. future of "deploy
 > template" concept).
 >
 > It feels like Ironic's plan to define the "deploy templates" in
Ironic
 > should replace the dependency on Glare for this use case, largely
 > because the definition of the deploy template (in my mind) is very
 > heavily related to inspector and driver properties, etc. Mark is
looking
 > at moving that forward at the moment.

That won't do anything about the flavor explosion problem, though,
right
John?


Does nova still plan to allow passing additional desired traits into the 
server create request?

I (we?) was kind of banking on that to solve the Baskin Robbins thing.


That's precisely what I've been looking into. From what I can tell, 
Ironic was planning on using these CUSTOM_DEPLOY_TEMPLATE_XXX traits in 
two ways:


1) To tell Nova what scheduling constraints the instance needed -- e.g. 
"hey Nova, make sure I land on a node that supports UEFI boot mode 
because my boot image relies on that".


2) As a convenient (because it would require no changes to Nova) way of 
passing instance pre-spawn configuration data to the Ironic virt driver 
-- e.g. pass the entire set of traits that are in the RequestSpec's 
flavor and image extra specs to Ironic before calling the Ironic node 
provision API.


#1 is fine IMHO, since it (mostly) represents a "capability" that the 
resource provider (the Ironic baremetal node) must support in order for 
the instance to successfully boot.


#2 is a problem, though, because it *doesn't* represent a capability. In 
fact, it can represent any and all sorts of key/value, JSON/dict or 
other information and this information is not intended to be passed to 
the placement/scheduler service as a constraint. It is this data, also, 
that tends to create the flavor explosion problem because it means that 
Ironic deployers need to create dozens of flavors that vary only 
slightly based on a user's desired deployment configuration.
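The arithmetic behind that flavor explosion is easy to sketch (option names invented for illustration): every deploy-time option baked into flavors multiplies the flavor count.

```python
import itertools

# Illustrative only: three deploy-time options baked into flavors
# yield 3 * 2 * 2 = 12 near-identical flavors.
options = {
    'raid': ['raid1', 'raid5', 'raid10'],
    'boot_mode': ['bios', 'uefi'],
    'hyperthreading': ['on', 'off'],
}
flavors = ['-'.join(combo) for combo in itertools.product(*options.values())]
print(len(flavors))  # 12
```

Passing the configuration at request time instead keeps one flavor per actual resource shape.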


So, deployers end up needing to create dozens of flavors varying only 
slightly by things like RAID level or some pre-defined 

Re: [openstack-dev] [nova][cinder][qa] Should we enable multiattach in tempest-full?

2018-10-01 Thread Ghanshyam Mann
  On Mon, 01 Oct 2018 21:22:46 +0900 Erlon Cruz  wrote 
 
 > 
 > 
 > On Mon, Oct 1, 2018 at 05:26, Balázs Gibizer
 >  wrote:
 > 
 >  
 >  On Sat, Sep 29, 2018 at 10:35 PM, Matt Riedemann  
 >  wrote:
 >  > Nova, cinder and tempest run the nova-multiattach job in their check 
 >  > and gate queues. The job was added in Queens and was a specific job 
 >  > because we had to change the ubuntu cloud archive we used in Queens 
 >  > to get multiattach working. Since Rocky, devstack defaults to a 
 >  > version of the UCA that works for multiattach, so there isn't really 
 >  > anything preventing us from running the tempest multiattach tests in 
 >  > the integrated gate. The job tries to be as minimal as possible by 
 >  > only running tempest.api.compute.* tests, but it still means spinning 
 >  > up a new node and devstack for testing.
 >  > 
 >  > Given the state of the gate recently, I'm thinking it would be good 
 >  > if we dropped the nova-multiattach job in Stein and just enable the 
 >  > multiattach tests in one of the other integrated gate jobs.
 >  
 >  +1
 >  
 >  > I initially was just going to enable it in the nova-next job, but we 
 >  > don't run that on cinder or tempest changes. I'm not sure if 
 >  > tempest-full is a good place for this though since that job already 
 >  > runs a lot of tests and has been timing out a lot lately [1][2].
 >  > 
 >  > The tempest-slow job is another option, but cinder doesn't currently 
 >  > run that job (it probably should since it runs volume-related tests, 
 >  > including the only tempest tests that use encrypted volumes).
 >  
 >  If the multiattach test qualifies as a slow test then I'm in favor of 
 >  adding it to the tempest-slow and not lengthening the tempest-full 
 >  further.
 >  
 > +1 on having this in tempest-slow and adding it to Cinder, provided that it
 > would also cover encryption.

+1 on adding multiattach to an integrated job. It is always good to cover more 
features in the integrated gate instead of separate jobs. These tests do not take 
much time, so it should be OK to add them in tempest-full [1]. We should mark only 
really slow tests as 'slow'; otherwise it should be fine to run in tempest-full.

I thought adding tempest-slow to cinder was merged, but it is not [2].

[1]  
http://logs.openstack.org/80/606880/2/check/nova-multiattach/7f8681e/job-output.txt.gz#_2018-10-01_10_12_55_482653
[2] https://review.openstack.org/#/c/591354/2

-gmann

 >   gibi
 >  
 >  > 
 >  > Are there other ideas/options for enabling multiattach in another job 
 >  > that nova/cinder/tempest already use so we can drop the now mostly 
 >  > redundant nova-multiattach job?
 >  > 
 >  > [1] http://status.openstack.org/elastic-recheck/#1686542
 >  > [2] http://status.openstack.org/elastic-recheck/#1783405
 >  > 
 >  > --
 >  > 
 >  > Thanks,
 >  > 
 >  > Matt
 >  > 





Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-10-01 Thread Chris Dent

On Sat, 29 Sep 2018, Jay Pipes wrote:
I don't think that's a fair statement. You absolutely *do* care which way we 
go. You want to encode multiple bits of information into a trait string -- 
such as "PCI_ADDRESS_01_AB_23_CD" -- and leave it up to the caller to have to 
understand that this trait string has multiple bits of information encoded in 
it (the fact that it's a PCI device and that the PCI device is at 
01_AB_23_CD).


You don't see a problem encoding these variants inside a string. Chris 
doesn't either.


Lest I be misconstrued, I'd like to clarify: What I was trying to
say elsewhere in the thread was that placement should never be aware
of _anything_ that is in the trait string (except CUSTOM_* when
validating ones that are added, and MISC_SHARES[...] for sharing
providers). On the placement server side, input is compared solely
for equality with stored data and nothing else, and we should never
allow value comparisons, string fragments, regex, etc.

So from a code perspective _placement_ is completely agnostic to
whether a trait is "PCI_ADDRESS_01_AB_23_CD", "STORAGE_DISK_SSD", or
"JAY_LIKES_CRUNCHIE_BARS".

However, things which are using traits (e.g., nova, ironic) need to
make their own decisions about how the value of traits are
interpreted. I don't have a strong position on that except to say
that _if_ we end up in a position of there being lots of traits
willy nilly, people who have chosen to do that need to know that the
contract presented by traits right now (present or not present, no
value comprehension) is fixed.

I *do* see a problem with it, based on my experience in Nova where this kind 
of thing leads to ugly, unmaintainable, and incomprehensible code as I have 
pointed to in previous responses.


I think there are many factors that have led to nova being
incomprehensible and indeed bad representations is one of them, but
I think reasonable people can disagree on which factors are the most
important and with sufficient discussion come to some reasonable
compromises. I personally feel that while the bad representations
(encoding stuff in strings or json blobs) thing is a big deal,
another major factor is a predilection to make new apis, new
abstractions, and new representations rather than working with and
adhering to the constraints of the existing ones. This leads to a
lot of code that encodes business logic in itself (e.g., several
different ways and layers of indirection to think about allocation
ratios) rather than working within strong and constraining
contracts.


From my standpoint there isn't much to talk about here from a
placement code standpoint. We should clearly document the functional
contract (and stick to it) and we should come up with exemplars
for how to make the best use of traits.

I think this conversation could allow us to find those examples.

I don't, however, want placement to be a traffic officer for how
people do things. In the context of the orchestration between nova
and ironic and how that interaction happens, nova has every right to
set some guidelines if it needs to.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-10-01 Thread Ghanshyam Mann
  On Sat, 29 Sep 2018 07:23:30 +0900 Lance Bragstad  
wrote  
 > Alright - I've worked up the majority of what we have in this thread and 
 > proposed a documentation patch for oslo.policy [0].
 > I think we're at the point where we can finish the rest of this discussion 
 > in gerrit if folks are ok with that.
 > [0] https://review.openstack.org/#/c/606214/

+1, thanks for that. let's start the discussion there.

-gmann

 > On Fri, Sep 28, 2018 at 3:33 PM Sean McGinnis  wrote:
 > On Fri, Sep 28, 2018 at 01:54:01PM -0500, Lance Bragstad wrote:
 >  > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki  wrote:
 >  > 
 >  > > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg
 >  > >  wrote:
 >  > > >
 >  > > > Ideally I would like to see it in the form of least specific to most
 >  > > specific. But more importantly in a way that there is no additional
 >  > > delimiters between the service type and the resource. Finally, I do not
 >  > > like the change of plurality depending on action type.
 >  > > >
 >  > > > I propose we consider
 >  > > >
 >  > > > <service-type>:<resource>:<action>[:<subaction>]
 >  > > >
 >  > > > Example for keystone (note, action names below are strictly examples I
 >  > > am fine with whatever form those actions take):
 >  > > > identity:projects:create
 >  > > > identity:projects:delete
 >  > > > identity:projects:list
 >  > > > identity:projects:get
 >  > > >
 >  > > > It keeps things simple and consistent when you're looking through
 >  > > overrides / defaults.
 >  > > > --Morgan
 >  > > +1 -- I think the ordering with `resource` coming before
 >  > > `action|subaction` will be cleaner.
 >  > >
 >  > 
 >  
 >  Great idea. This is looking better and better.





Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-10-01 Thread Ghanshyam Mann
  On Sat, 29 Sep 2018 03:54:01 +0900 Lance Bragstad  
wrote  
 > 
 > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki  wrote:
 > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg
 >   wrote:
 >  >
 >  > Ideally I would like to see it in the form of least specific to most 
 > specific. But more importantly in a way that there is no additional 
 > delimiters between the service type and the resource. Finally, I do not like 
 > the change of plurality depending on action type.
 >  >
 >  > I propose we consider
 >  >
 >  > <service-type>:<resource>:<action>[:<subaction>]
 >  >
 >  > Example for keystone (note, action names below are strictly examples I am 
 > fine with whatever form those actions take):
 >  > identity:projects:create
 >  > identity:projects:delete
 >  > identity:projects:list
 >  > identity:projects:get
 >  >
 >  > It keeps things simple and consistent when you're looking through 
 > overrides / defaults.
 >  > --Morgan
 >  +1 -- I think the ordering with `resource` coming before
 >  `action|subaction` will be cleaner.
 > 
 > ++
 > These are excellent points. I especially like being able to omit the 
 > convention about plurality. Furthermore, I'd like to add that I think we 
 > should make the resource singular (e.g., project instead of projects). For 
 > example:
 > compute:server:list
 > compute:server:update
 > compute:server:create
 > compute:server:delete
 > compute:server:action:reboot
 > compute:server:action:confirm_resize (or confirm-resize)

Do we need the "action" word there? I think the action name itself should convey the 
operation. IMO the notation below without the "action" word looks clear enough. What 
do you say?

compute:server:reboot
compute:server:confirm_resize
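A convention like this is easy to check mechanically. A hedged sketch (the pattern is my reading of the proposal, not anything oslo.policy enforces):

```python
import re

# Validates names of the form <service-type>:<resource>:<action>[:<subaction>]
# as discussed in this thread; the exact character classes are assumptions.
POLICY_NAME = re.compile(
    r'^[a-z][a-z-]*'        # service type, e.g. "compute", "identity"
    r':[a-z][a-z_]*'        # resource, e.g. "server"
    r'(:[a-z][a-z_]*)+$'    # action and optional subactions
)

def is_valid_policy_name(name):
    return POLICY_NAME.match(name) is not None

print(is_valid_policy_name('compute:server:reboot'))          # True
print(is_valid_policy_name('os_compute_api:servers:create'))  # False (legacy style)
```

Such a check could live in a linter or test so new policies cannot drift from the convention.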

-gmann

 > 
 > Otherwise, someone might mistake compute:servers:get as "list". This is 
 > ultra-nit-picky, but something I thought of when seeing the usage of 
 > "get_all" in policy names in favor of "list."
 > In summary, the new convention based on the most recent feedback should be:
 > <service-type>:<resource>:<action>[:<subaction>]
 > Rules:
 > - service-type is always defined in the service types authority
 > - resources are always singular
 > Thanks to all for sticking through this tedious discussion. I appreciate it. 
 >  
 >  /R
 >  
 >  Harry
 >  >
 >  > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad  
 > wrote:
 >  >>
 >  >> Bumping this thread again and proposing two conventions based on the 
 > discussion here. I propose we decide on one of the two following conventions:
 >  >>
 >  >> <service-type>:<action>:<resource>
 >  >>
 >  >> or
 >  >>
 >  >> <service-type>:<action>_<resource>
 >  >>
 >  >> Where <service-type> is the corresponding service type of the project 
 > [0], and <action> is either create, get, list, update, or delete. I think 
 > decoupling the method from the policy name should aid in consistency, 
 > regardless of the underlying implementation. The HTTP method specifics can 
 > still be relayed using oslo.policy's DocumentedRuleDefault object [1].
 >  >>
 >  >> I think the plurality of the resource should default to what makes sense 
 > for the operation being carried out (e.g., list:foobars, create:foobar).
 >  >>
 >  >> I don't mind the first one because it's clear about what the delimiter 
 > is and it doesn't look weird when projects have something like:
 >  >>
 >  >> <service-type>:<resource>:<sub-resource>:<action>
 >  >>
 >  >> If folks are ok with this, I can start working on some documentation 
 > that explains the motivation for this. Afterward, we can figure out how we 
 > want to track this work.
 >  >>
 >  >> What color do you want the shed to be?
 >  >>
 >  >> [0] https://service-types.openstack.org/service-types.json
 >  >> [1] 
 > https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule
 >  >>
 >  >> On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad  
 > wrote:
 >  >>>
 >  >>>
 >  >>> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann 
 >  wrote:
 >  
 >     On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt 
 >  wrote 
 >    > tl;dr: +1 consistent names
 >    > I would make the names mirror the API... because the Operator 
 > setting them knows the API, not the code. Ignore the crazy names in Nova, I 
 > certainly hate them.
 >  
 >   Big +1 on consistent naming, which will help operators as well as 
 > developers to maintain those.
 >  
 >    >
 >    > Lance Bragstad  wrote:
 >    > > I'm curious if anyone has context on the "os-" part of the format?
 >    >
 >    > My memory of the Nova policy mess...
 >    > * Nova's policy rules traditionally followed the patterns of the code
 >    >   ** Yes, horrible, but it happened.
 >    > * The code used to have the OpenStack API and the EC2 API, hence the "os"
 >    > * API used to expand with extensions, so the policy name is often based on extensions
 >    >   ** note most of the extension code has now gone, including lots of related policies
 >    > * Policy in code was focused on getting us to a place where we could rename policy
 >    >   ** Whoop whoop by the way, it feels like we are really close to something sensible now!
 >    > Lance Bragstad  wrote:
 >    > Thoughts on using create, list, 

Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-10-01 Thread Jim Rollenhagen
On Mon, Oct 1, 2018 at 8:03 AM Jay Pipes  wrote:

> On 10/01/2018 04:36 AM, John Garbutt wrote:
> > On Fri, 28 Sep 2018 at 00:46, Jay Pipes wrote:
> >
> > On 09/27/2018 06:23 PM, Matt Riedemann wrote:
> >  > On 9/27/2018 3:02 PM, Jay Pipes wrote:
> >  >> A great example of this would be the proposed "deploy template"
> > from
> >  >> [2]. This is nothing more than abusing the placement traits API
> in
> >  >> order to allow passthrough of instance configuration data from
> the
> >  >> nova flavor extra spec directly into the nodes.instance_info
> > field in
> >  >> the Ironic database. It's a hack that is abusing the entire
> > concept of
> >  >> the placement traits concept, IMHO.
> >  >>
> >  >> We should have a way *in Nova* of allowing instance configuration
> >  >> key/value information to be passed through to the virt driver's
> >  >> spawn() method, much the same way we provide for user_data that
> > gets
> >  >> exposed after boot to the guest instance via configdrive or the
> >  >> metadata service API. What this deploy template thing is is just
> a
> >  >> hack to get around the fact that nova doesn't have a basic way of
> >  >> passing through some collated instance configuration key/value
> >  >> information, which is a darn shame and I'm really kind of
> > annoyed with
> >  >> myself for not noticing this sooner. :(
> >  >
> >  > We talked about this in Dublin through right? We said a good
> > thing to do
> >  > would be to have some kind of template/profile/config/whatever
> > stored
> >  > off in glare where schema could be registered on that thing, and
> > then
> >  > you pass a handle (ID reference) to that to nova when creating the
> >  > (baremetal) server, nova pulls it down from glare and hands it
> > off to
> >  > the virt driver. It's just that no one is doing that work.
> >
> > No, nobody is doing that work.
> >
> > I will if need be if it means not hacking the placement API to serve
> > this purpose (for which it wasn't intended).
> >
> >
> > Going back to the point Mark Goddard made, there are two things here:
> >
> > 1) Picking the correct resource provider
> > 2) Telling Ironic to transform the picked node in some way
> >
> > Today we allow the use of Capabilities for both.
> >
> > I am suggesting we move to using Traits only for (1), leaving (2) in
> > place for now, while we decide what to do (i.e. future of "deploy
> > template" concept).
> >
> > It feels like Ironic's plan to define the "deploy templates" in Ironic
> > should replace the dependency on Glare for this use case, largely
> > because the definition of the deploy template (in my mind) is very
> > heavily related to inspector and driver properties, etc. Mark is looking
> > at moving that forward at the moment.
>
> That won't do anything about the flavor explosion problem, though, right
> John?
>

Does nova still plan to allow passing additional desired traits into the
server create request?
I (we?) was kind of banking on that to solve the Baskin Robbins thing.

// jim


>
> -jay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-10-01 Thread Ghanshyam Mann
 On Fri, 21 Sep 2018 23:13:02 +0900 Lance Bragstad  
wrote  
 > 
 > On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann  
 > wrote:
 >   On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt  
 > wrote  
 >   > tl;dr: +1 consistent names
 >   > I would make the names mirror the API... because the Operator setting 
 > them knows the API, not the code. Ignore the crazy names in Nova; I certainly 
 > hate them.
 > 
 >  Big +1 on consistent naming, which will help operators as well as developers 
 > maintain those. 
 > 
 >   > 
 >   > Lance Bragstad  wrote:
 >   > > I'm curious if anyone has context on the "os-" part of the format?
 >   > 
 >   > My memory of the Nova policy mess...
 >   > * Nova's policy rules traditionally followed the patterns of the code
 >   > ** Yes, horrible, but it happened.
 >   > * The code used to have the OpenStack API and the EC2 API, hence the "os"
 >   > * API used to expand with extensions, so the policy name is often based on extensions
 >   > ** note most of the extension code has now gone, including lots of related policies
 >   > * Policy in code was focused on getting us to a place where we could rename policy
 >   > ** Whoop whoop by the way, it feels like we are really close to something sensible now!
 >   > Lance Bragstad  wrote:
 >   > Thoughts on using create, list, update, and delete as opposed to post, 
 > get, put, patch, and delete in the naming convention?
 >   > I could go either way as I think about "list servers" in the API. But my 
 > preference is for the URL stub and POST, GET, etc.
 >   >  On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad  wrote:
 >   > If we consider dropping "os", should we entertain dropping "api", too? 
 > Do we have a good reason to keep "api"? I wouldn't be opposed to simple 
 > service types (e.g. "compute" or "loadbalancer").
 >   > +1. The API is known as "compute" in api-ref, so the policy should be for 
 > "compute", etc.
 > 
 >  Agree on mapping the policy name to api-ref as much as possible. Other 
 > than the policy name having 'os-', we also have 'os-' in resource names in the 
 > nova API URLs, like /os-agents, /os-aggregates, etc. (almost every resource 
 > except servers and flavors). As we cannot get rid of those from the API URLs, 
 > do we need to keep the same in policy naming too? Or we can have a policy name 
 > like compute:agents:create/post, but that mismatches api-ref, where the agents 
 > resource URL is os-agents.
 > 
 > Good question. I think this depends on how the service does policy 
 > enforcement.
 > I know we did something like this in keystone, which required policy names 
 > and method names to be the same:
 >   "identity:list_users": "..."
 > Because the initial implementation of policy enforcement used a decorator 
 > like this:
 >   from keystone import controller
 >   @controller.protected
 >   def list_users(self):
 >       ...
 > Having the policy name the same as the method name made it easier for the 
 > decorator implementation to resolve the policy needed to protect the API 
 > because it just looked at the name of the wrapped method. The advantage was 
 > that it was easy to implement new APIs because you only needed to add a 
 > policy, implement the method, and make sure you decorate the implementation.
 > While this worked, we are moving away from it entirely. The decorator 
 > implementation was ridiculously complicated. Only a handful of keystone 
 > developers understood it. With the addition of system-scope, it would have 
 > only become more convoluted. It also enables a much more copy-paste pattern 
 > (e.g., so long as I wrap my method with this decorator implementation, 
 > things should work right?). Instead, we're calling enforcement within the 
 > controller implementation to ensure things are easier to understand. It 
 > requires developers to be cognizant of how different token types affect the 
 > resources within an API. That said, coupling the policy name to the method 
 > name is no longer a requirement for keystone.
 > Hopefully, that helps explain why we needed them to match. 
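The name-to-method coupling described above can be sketched roughly as follows; this is a simplified illustration, not keystone's actual code, and the `identity` prefix and `enforce` hook are assumptions for the sketch:

```python
import functools


def protected(method):
    """Resolve the policy name from the wrapped method's name.

    A method named list_users maps to the policy "identity:list_users",
    which is why the policy name had to match the method name.
    """
    policy_name = 'identity:%s' % method.__name__

    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        # self.enforce stands in for the real policy enforcement call.
        self.enforce(policy_name)
        return method(self, *args, **kwargs)
    return wrapper


class UserController:
    def __init__(self):
        self.enforced = []

    def enforce(self, policy_name):
        # Record the policy that would be checked against the token context.
        self.enforced.append(policy_name)

    @protected
    def list_users(self):
        return ['alice', 'bob']
```

Calling `UserController().list_users()` enforces "identity:list_users" before the method body runs, which shows both the convenience and the copy-paste pattern discussed above.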
 >  Also we have action APIs (I know from nova, not sure about other services) 
 > like POST /servers/{server_id}/action {addSecurityGroup}, and their current 
 > policy names are all inconsistent. A few have a policy name including their 
 > resource name, like "os_compute_api:os-flavor-access:add_tenant_access", a few 
 > have 'action' in the policy name, like 
 > "os_compute_api:os-admin-actions:reset_state", and a few have the direct 
 > action name, like "os_compute_api:os-console-output".
 > 
 > Since the actions API relies on the request body and uses a single HTTP 
 > method, does it make sense to have the HTTP method in the policy name? It 
 > feels redundant, and we might be able to establish a convention that's more 
 > meaningful for things like action APIs. It looks like cinder has a similar 
 > pattern [0].

Yes, the HTTP method does not need to be in the action policy name; the action 
name itself should be self-explanatory. 

 > [0] 
 > 

Re: [openstack-dev] [qa] [infra] [placement] tempest plugins virtualenv

2018-10-01 Thread Ghanshyam Mann



  On Fri, 28 Sep 2018 23:10:06 +0900 Matthew Treinish 
 wrote  
 > On Fri, Sep 28, 2018 at 02:39:24PM +0100, Chris Dent wrote: 
 > >  
 > > I'm still trying to figure out how to properly create a "modern" (as 
 > > in zuul v3 oriented) integration test for placement using gabbi and 
 > > tempest. That work is happening at 
 > > https://review.openstack.org/#/c/601614/ 
 > >  
 > > There was lots of progress made after the last message on this 
 > > topic 
 > > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134837.html
 > >  
 > > but I've reached another interesting impasse. 
 > >  
 > > From devstack's standpoint, the way to say "I want to use a tempest 
 > > plugin" is to set TEMPEST_PLUGINS to a list of where the plugins are. 
 > > devstack:lib/tempest then does a: 
 > >  
 > > tox -evenv-tempest -- pip install -c 
 > > $REQUIREMENTS_DIR/upper-constraints.txt $TEMPEST_PLUGINS 
 > >  
 > > http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_12_58_138163
 > >  
 > >  
 > > I have this part working as expected. 
 > >  
 > > However, 
 > >  
 > > The advice is then to create a new job that has a parent of 
 > > devstack-tempest. That zuul job runs a variety of tox environments, 
 > > depending on the setting of the `tox_envlist` var. If you wish to 
 > > use a `tempest_test_regex` (I do) the preferred tox environment is 
 > > 'all'. 
 > >  
 > > That venv doesn't have the plugin installed, thus no gabbi tests are 
 > > found: 
 > >  
 > > http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_13_25_798683
 > >  
 >  
 > Right above this line it shows that the gabbi-tempest plugin is installed in 
 > the venv: 
 >  
 > http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_13_25_650661
 >  
 >  
 > at version 0.1.1. It's a bit weird because it's line wrapped in my browser. 
 > The devstack logs also shows the plugin: 
 >  
 > http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/controller/logs/devstacklog.txt.gz#_2018-09-28_11_13_13_076
 >  
 >  
 > All the tempest tox jobs that run tempest (and the tempest-venv command used 
 > by 
 > devstack) run inside the same tox venv: 
 >  
 > https://github.com/openstack/tempest/blob/master/tox.ini#L52 
 >  
 > My guess is that the plugin isn't returning any tests that match the regex. 
 >  
 > I'm also a bit alarmed that tempest run is returning 0 there when no tests 
 > are 
 > being run. That's definitely a bug because things should fail with no tests 
 > being successfully run. 

Tempest run does fail when no tests are run [1]:

.. [1] 
https://github.com/openstack/tempest/blob/807f0dec66689aced05c2bb970f344cbb8a3c6a3/tempest/cmd/run.py#L182
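The behavior referenced above can be illustrated with a simplified sketch (this is not tempest's actual code, just the shape of the check):

```python
def run_return_code(tests_run, failures):
    """Return a process exit code for a test run.

    A run that matched zero tests is treated as a failure, so a bad
    test regex cannot masquerade as a passing job.
    """
    if tests_run == 0:
        return 1
    return 0 if failures == 0 else 1
```

With this check in place, the "no gabbi tests found" case above should produce a non-zero exit code rather than silently succeeding.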

-gmann

 >  
 > -Matt Treinish 
 >  
 > >  
 > > How do I get my plugin installed into the right venv while still 
 > > following the guidelines for good zuul behavior? 
 > >  





Re: [openstack-dev] [nova][cinder][qa] Should we enable multiattach in tempest-full?

2018-10-01 Thread Erlon Cruz
On Mon, Oct 1, 2018 at 05:26, Balázs Gibizer <
balazs.gibi...@ericsson.com> wrote:

>
>
> On Sat, Sep 29, 2018 at 10:35 PM, Matt Riedemann 
> wrote:
> > Nova, cinder and tempest run the nova-multiattach job in their check
> > and gate queues. The job was added in Queens and was a specific job
> > because we had to change the ubuntu cloud archive we used in Queens
> > to get multiattach working. Since Rocky, devstack defaults to a
> > version of the UCA that works for multiattach, so there isn't really
> > anything preventing us from running the tempest multiattach tests in
> > the integrated gate. The job tries to be as minimal as possible by
> > only running tempest.api.compute.* tests, but it still means spinning
> > up a new node and devstack for testing.
> >
> > Given the state of the gate recently, I'm thinking it would be good
> > if we dropped the nova-multiattach job in Stein and just enable the
> > multiattach tests in one of the other integrated gate jobs.
>
> +1
>
> > I initially was just going to enable it in the nova-next job, but we
> > don't run that on cinder or tempest changes. I'm not sure if
> > tempest-full is a good place for this though since that job already
> > runs a lot of tests and has been timing out a lot lately [1][2].
> >
> > The tempest-slow job is another option, but cinder doesn't currently
> > run that job (it probably should since it runs volume-related tests,
> > including the only tempest tests that use encrypted volumes).
>
> If the multiattach test qualifies as a slow test then I'm in favor of
> adding it to the tempest-slow and not lengthening the tempest-full
> further.
>
+1 on having this in tempest-slow and adding it to Cinder, provided that it
would also cover encryption.


> gibi
>
> >
> > Are there other ideas/options for enabling multiattach in another job
> > that nova/cinder/tempest already use so we can drop the now mostly
> > redundant nova-multiattach job?
> >
> > [1] http://status.openstack.org/elastic-recheck/#1686542
> > [2] http://status.openstack.org/elastic-recheck/#1783405
> >
> > --
> >
> > Thanks,
> >
> > Matt
> >
> >
>
>


Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-10-01 Thread Jay Pipes

On 10/01/2018 04:36 AM, John Garbutt wrote:
On Fri, 28 Sep 2018 at 00:46, Jay Pipes wrote:


On 09/27/2018 06:23 PM, Matt Riedemann wrote:
 > On 9/27/2018 3:02 PM, Jay Pipes wrote:
 >> A great example of this would be the proposed "deploy template"
from
 >> [2]. This is nothing more than abusing the placement traits API in
 >> order to allow passthrough of instance configuration data from the
 >> nova flavor extra spec directly into the nodes.instance_info
field in
 >> the Ironic database. It's a hack that is abusing the entire
concept of
 >> the placement traits concept, IMHO.
 >>
 >> We should have a way *in Nova* of allowing instance configuration
 >> key/value information to be passed through to the virt driver's
 >> spawn() method, much the same way we provide for user_data that
gets
 >> exposed after boot to the guest instance via configdrive or the
 >> metadata service API. What this deploy template thing is is just a
 >> hack to get around the fact that nova doesn't have a basic way of
 >> passing through some collated instance configuration key/value
 >> information, which is a darn shame and I'm really kind of
annoyed with
 >> myself for not noticing this sooner. :(
 >
 > We talked about this in Dublin through right? We said a good
thing to do
 > would be to have some kind of template/profile/config/whatever
stored
 > off in glare where schema could be registered on that thing, and
then
 > you pass a handle (ID reference) to that to nova when creating the
 > (baremetal) server, nova pulls it down from glare and hands it
off to
 > the virt driver. It's just that no one is doing that work.

No, nobody is doing that work.

I will if need be if it means not hacking the placement API to serve
this purpose (for which it wasn't intended).


Going back to the point Mark Goddard made, there are two things here:

1) Picking the correct resource provider
2) Telling Ironic to transform the picked node in some way

Today we allow the use of Capabilities for both.

I am suggesting we move to using Traits only for (1), leaving (2) in 
place for now, while we decide what to do (i.e. future of "deploy 
template" concept).


It feels like Ironic's plan to define the "deploy templates" in Ironic 
should replace the dependency on Glare for this use case, largely 
because the definition of the deploy template (in my mind) is very 
heavily related to inspector and driver properties, etc. Mark is looking 
at moving that forward at the moment.


That won't do anything about the flavor explosion problem, though, right 
John?


-jay



Re: [openstack-dev] [Horizon] Horizon tutorial didn`t work

2018-10-01 Thread Ivan Kolodyazhny
Hi Jea-Min,

Thank you for your report. I'll check the manual and fix it asap.


Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/


On Mon, Oct 1, 2018 at 9:38 AM Jea-Min Lim  wrote:

> Hello everyone,
>
> I`m following a tutorial of Building a Dashboard using Horizon.
> (link:
> https://docs.openstack.org/horizon/latest/contributor/tutorials/dashboard.html#tutorials-dashboard
> )
>
> However, the provided custom management command doesn't create the
> boilerplate code.
>
> I typed tox -e manage -- startdash mydashboard --target
> openstack_dashboard/dashboards/mydashboard
>
> and the attached screenshot file is the execution result.
>
> Are there any recommendations to solve this problem?
>
> Regards.
>
> [image: result_jmlim.PNG]


Re: [openstack-dev] [mistral] Extend created(updated)_at by started(finished)_at to clarify the duration of the task

2018-10-01 Thread Dougal Matthews
On Wed, 26 Sep 2018 at 12:03, Олег Овчарук  wrote:

> Hi everyone! Please take a look at the blueprint that I've just created:
> https://blueprints.launchpad.net/mistral/+spec/mistral-add-started-finished-at
>
> I'd like to implement this feature; I also want to update CloudFlow when
> this is done. Please let me know in the blueprint if I can start
> implementing.
>

I agree with Renat, this sounds like a useful addition to me. I have added
it to the Stein launchpad milestone.
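The motivation for separate started_at/finished_at fields can be sketched like this; the class and field names here are illustrative, not Mistral's actual model (created_at/updated_at track DB row changes, while started_at/finished_at record actual execution, making duration unambiguous):

```python
from datetime import datetime, timezone


class Task:
    """Sketch of a task record carrying explicit execution timestamps."""

    def __init__(self):
        # DB bookkeeping timestamp, set when the row is created.
        self.created_at = datetime.now(timezone.utc)
        self.started_at = None
        self.finished_at = None

    def start(self):
        self.started_at = datetime.now(timezone.utc)

    def finish(self):
        self.finished_at = datetime.now(timezone.utc)

    @property
    def duration(self):
        """Seconds the task actually ran, or None if not yet finished."""
        if self.started_at and self.finished_at:
            return (self.finished_at - self.started_at).total_seconds()
        return None
```

A tool like CloudFlow could then display `task.duration` directly instead of approximating it from created_at/updated_at.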




Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-10-01 Thread John Garbutt
On Fri, 28 Sep 2018 at 00:46, Jay Pipes  wrote:

> On 09/27/2018 06:23 PM, Matt Riedemann wrote:
> > On 9/27/2018 3:02 PM, Jay Pipes wrote:
> >> A great example of this would be the proposed "deploy template" from
> >> [2]. This is nothing more than abusing the placement traits API in
> >> order to allow passthrough of instance configuration data from the
> >> nova flavor extra spec directly into the nodes.instance_info field in
> >> the Ironic database. It's a hack that is abusing the entire concept of
> >> the placement traits concept, IMHO.
> >>
> >> We should have a way *in Nova* of allowing instance configuration
> >> key/value information to be passed through to the virt driver's
> >> spawn() method, much the same way we provide for user_data that gets
> >> exposed after boot to the guest instance via configdrive or the
> >> metadata service API. What this deploy template thing is is just a
> >> hack to get around the fact that nova doesn't have a basic way of
> >> passing through some collated instance configuration key/value
> >> information, which is a darn shame and I'm really kind of annoyed with
> >> myself for not noticing this sooner. :(
> >
> > We talked about this in Dublin through right? We said a good thing to do
> > would be to have some kind of template/profile/config/whatever stored
> > off in glare where schema could be registered on that thing, and then
> > you pass a handle (ID reference) to that to nova when creating the
> > (baremetal) server, nova pulls it down from glare and hands it off to
> > the virt driver. It's just that no one is doing that work.
>
> No, nobody is doing that work.
>
> I will if need be if it means not hacking the placement API to serve
> this purpose (for which it wasn't intended).
>

Going back to the point Mark Goddard made, there are two things here:

1) Picking the correct resource provider
2) Telling Ironic to transform the picked node in some way

Today we allow the use of Capabilities for both.

I am suggesting we move to using Traits only for (1), leaving (2) in place
for now, while we decide what to do (i.e. future of "deploy template"
concept).

It feels like Ironic's plan to define the "deploy templates" in Ironic
should replace the dependency on Glare for this use case, largely because
the definition of the deploy template (in my mind) is very heavily related
to inspector and driver properties, etc. Mark is looking at moving that
forward at the moment.

Thanks,
John


Re: [openstack-dev] [python3-first] support in stable branches

2018-10-01 Thread Dariusz Krol
Hello Doug,

thanks for your explanation. I was a little bit confused by the changes to 
stable branches under the python3-first topic, as I thought they had 
something to do with adding new test configuration for python3.

But as you explained, this is about moving zuul-related configuration, 
which is part of the python3-first goal (though it is not related to 
projects supporting python3, IMHO :) ).

Anyway, it is now clear to me, and sorry for the confusion.


Best,

Dariusz Krol

On 09/28/2018 06:05 PM, Doug Hellmann wrote:
> Dariusz Krol  writes:
>
>> Hello,
>>
>>
>> I'm specifically referring to branches mentioned in:
>> https://github.com/openstack/goal-tools/blob/4125c31e74776a7dc6a15d2276ab51ff3e73cd16/goal_tools/python3_first/jobs.py#L54
> I'm still not entirely sure what you're saying is happening that you do
> not expect to have happening, but I'll take a guess.
>
> The zuul migration portion of the goal work needs to move *all* of the
> Zuul settings for a repo into the correct branch because after the
> migration the job settings will no longer be in project-config at all
> and so zuul won't know which jobs to run on the stable branches if we
> haven't imported the settings.
>
> The migration script tries to figure out which jobs apply to which
> branches of each repo by looking at the branch specifier settings in
> project-config, and then it creates an import patch for each branch with
> the relevant jobs. Subsequent steps in the script change the
> documentation and release notes jobs and then add new python 3.6 testing
> jobs. Those steps only apply to the master branch.
>
> So, if you have a patch importing a python 3 job setting to a stable
> branch of a repo where you aren't expecting it (and it isn't supported),
> that's most likely because project-config has no branch specifiers for
> the job (meaning it should run on all branches). We did find several
> cases where that was true because projects added jobs without branch
> specifiers after the branches were created, and then back-ported no
> patches to the stable branch. See
> http://lists.openstack.org/pipermail/openstack-dev/2018-August/133594.html
> for details.
>
> Doug
>
>> I hope this helps.
>>
>>
>> Best,
>>
>> Dariusz Krol
>>
>>
>> On 09/27/2018 06:04 PM, Ben Nemec wrote:
>>>
>>> On 9/27/18 10:36 AM, Doug Hellmann wrote:
 Dariusz Krol  writes:

> Hello Champions :)
>
>
> I work on the Trove project and we are wondering if python3 should be
> supported in previous releases as well?
>
> Actually this question was asked by Alan Pevec from the stable branch
> maintainers list.
>
> I saw you added releases up to ocata to support python3 and there are
> already changes on gerrit waiting to be merged but after reading [1] I
> have my doubts about this.
 I'm not sure what you're referring to when you say "added releases up to
 ocata" here. Can you link to the patches that you have questions about?
>>> Possibly the zuul migration patches for all the stable branches? If
>>> so, those don't change the status of python 3 support on the stable
>>> branches, they just split the zuul configuration to make it easier to
>>> add new python 3 jobs on master without affecting the stable branches.
>>>
> Could you elaborate why it is necessary to support previous releases ?
>
>
> Best,
>
> Dariusz Krol
>
>
> [1] https://docs.openstack.org/project-team-guide/stable-branches.html




Re: [openstack-dev] [Release-job-failures] Release of openstack/os-log-merger failed

2018-10-01 Thread Miguel Angel Ajo Pelayo
Oh, OK: the 1.1.0 tag didn't have 'venv' in tox.ini, but master has had it since:

https://review.openstack.org/#/c/548618/7/tox.ini@37



On Mon, Oct 1, 2018 at 10:01 AM Miguel Angel Ajo Pelayo 
wrote:

> Thank you for the guidance and ping Doug.
>
> Was this triggered by [1] ? or By the 1.1.0 tag pushed to gerrit?
>
>
> I'm working to make os-log-merger part of the OpenStack governance
> projects, and to make sure we release it as a tarball.
>
> It's a small tool I've been using for years to make my life easier every
> time I've needed to debug complex scenarios. It's not a big project, but I
> hope the extra exposure will make developers' and admins' lives easier.
>
>
> Some projects use it as a way of aggregating logs [2] so that those
> can then be easily consumed by logstash/kibana.
>
>
> Best regards,
> Miguel Ángel Ajo
>
> [1] https://review.openstack.org/#/c/605641/
> [2]
> http://git.openstack.org/cgit/openstack/neutron/tree/neutron/tests/contrib/post_test_hook.sh#n41
> [3]
> http://logs.openstack.org/58/605358/4/check/neutron-functional/18de376/logs/dsvm-functional-index.txt.gz
>
>
> On Fri, Sep 28, 2018 at 5:45 PM Doug Hellmann 
> wrote:
>
>> z...@openstack.org writes:
>>
>> > Build failed.
>> >
>> > - release-openstack-python
>> http://logs.openstack.org/d4/d445ff62676bc5b2753fba132a3894731a289fb9/release/release-openstack-python/629c35f/
>> : FAILURE in 3m 57s
>> > - announce-release announce-release : SKIPPED
>> > - propose-update-constraints propose-update-constraints : SKIPPED
>>
>> The error here is
>>
>>   ERROR: unknown environment 'venv'
>>
>> It looks like os-log-merger is not set up for the
>> release-openstack-python job, which expects a specific tox setup.
>>
>>
>> http://logs.openstack.org/d4/d445ff62676bc5b2753fba132a3894731a289fb9/release/release-openstack-python/629c35f/ara-report/result/7c6fd37c-82d8-48f7-b653-5bdba90cbc31/
>>
>
>
> --
> Miguel Ángel Ajo
> OSP / Networking DFG, OVN Squad Engineering
>


-- 
Miguel Ángel Ajo
OSP / Networking DFG, OVN Squad Engineering


Re: [openstack-dev] [nova][cinder][qa] Should we enable multiattach in tempest-full?

2018-10-01 Thread Balázs Gibizer



On Sat, Sep 29, 2018 at 10:35 PM, Matt Riedemann  
wrote:
Nova, cinder and tempest run the nova-multiattach job in their check 
and gate queues. The job was added in Queens and was a specific job 
because we had to change the ubuntu cloud archive we used in Queens 
to get multiattach working. Since Rocky, devstack defaults to a 
version of the UCA that works for multiattach, so there isn't really 
anything preventing us from running the tempest multiattach tests in 
the integrated gate. The job tries to be as minimal as possible by 
only running tempest.api.compute.* tests, but it still means spinning 
up a new node and devstack for testing.


Given the state of the gate recently, I'm thinking it would be good 
if we dropped the nova-multiattach job in Stein and just enable the 
multiattach tests in one of the other integrated gate jobs.


+1

I initially was just going to enable it in the nova-next job, but we 
don't run that on cinder or tempest changes. I'm not sure if 
tempest-full is a good place for this though since that job already 
runs a lot of tests and has been timing out a lot lately [1][2].


The tempest-slow job is another option, but cinder doesn't currently 
run that job (it probably should since it runs volume-related tests, 
including the only tempest tests that use encrypted volumes).


If the multiattach test qualifies as a slow test then I'm in favor of 
adding it to the tempest-slow and not lengthening the tempest-full 
further.
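For illustration, enabling the multiattach tests in an existing job could be a small zuul job variant along these lines (a sketch only: the job name is made up, and the devstack flag is assumed to be the one the dedicated job used):

```yaml
# Hypothetical zuul variant: reuse tempest-slow and switch on the
# multiattach feature via devstack's local configuration.
- job:
    name: tempest-slow-multiattach
    parent: tempest-slow
    description: tempest-slow plus the volume multiattach tests.
    vars:
      devstack_localrc:
        ENABLE_VOLUME_MULTIATTACH: true
```

The projects that need coverage (nova, cinder, tempest) would then add the one variant to their check/gate queues instead of carrying a separate job.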


gibi



> Are there other ideas/options for enabling multiattach in another job
> that nova/cinder/tempest already use so we can drop the now mostly
> redundant nova-multiattach job?
>
> [1] http://status.openstack.org/elastic-recheck/#1686542
> [2] http://status.openstack.org/elastic-recheck/#1783405
>
> --
>
> Thanks,
>
> Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Release-job-failures] Release of openstack/os-log-merger failed

2018-10-01 Thread Miguel Angel Ajo Pelayo
Thank you for the guidance and ping Doug.

Was this triggered by [1], or by the 1.1.0 tag pushed to Gerrit?


I'm working to make os-log-merger part of the OpenStack governance
projects, and to make sure we release it as a tarball.

It's a small tool I've been using for years; it has made my life easier every
time I've needed to debug complex scenarios. It's not a big project, but I
hope the extra exposure will make developers' and admins' lives easier.


Some projects use it as a way of aggregating logs [2] so that those can then
be easily consumed by logstash/kibana.


Best regards,
Miguel Ángel Ajo

[1] https://review.openstack.org/#/c/605641/
[2]
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/tests/contrib/post_test_hook.sh#n41
[3]
http://logs.openstack.org/58/605358/4/check/neutron-functional/18de376/logs/dsvm-functional-index.txt.gz


On Fri, Sep 28, 2018 at 5:45 PM Doug Hellmann  wrote:

> z...@openstack.org writes:
>
> > Build failed.
> >
> > - release-openstack-python
> http://logs.openstack.org/d4/d445ff62676bc5b2753fba132a3894731a289fb9/release/release-openstack-python/629c35f/
> : FAILURE in 3m 57s
> > - announce-release announce-release : SKIPPED
> > - propose-update-constraints propose-update-constraints : SKIPPED
>
> The error here is
>
>   ERROR: unknown environment 'venv'
>
> It looks like os-log-merger is not set up for the
> release-openstack-python job, which expects a specific tox setup.
>
>
> http://logs.openstack.org/d4/d445ff62676bc5b2753fba132a3894731a289fb9/release/release-openstack-python/629c35f/ara-report/result/7c6fd37c-82d8-48f7-b653-5bdba90cbc31/
>
>
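For reference, the release-openstack-python job builds the package through `tox -e venv`, so the repository's tox.ini needs a `venv` environment. A minimal sketch of the conventional setup (details assumed; check the job's documentation for the exact expectations):

```ini
# tox.ini -- minimal 'venv' environment of the kind the release job invokes
[testenv:venv]
basepython = python3
commands = {posargs}
```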


-- 
Miguel Ángel Ajo
OSP / Networking DFG, OVN Squad Engineering
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python3-first] support in stable branches

2018-10-01 Thread Dariusz Krol
Hello Doug,

thanks for your explanation. I was a little confused by the changes to
stable branches under the python3-first topic, as I thought they had
something to do with adding new test configuration for python3.

But as you explained, this is about moving zuul-related configuration,
which is part of the python3-first goal (though it is not related to
projects actually supporting python3, IMHO :) )

Anyway, it is all clear to me now; sorry for the confusion.


Best,

Dariusz Krol

On 9/28/18 6:05 PM, Doug Hellmann wrote:
> Dariusz Krol  writes:
>
>> Hello,
>>
>>
>> I'm specifically referring to branches mentioned in:
>> https://github.com/openstack/goal-tools/blob/4125c31e74776a7dc6a15d2276ab51ff3e73cd16/goal_tools/python3_first/jobs.py#L54
> I'm still not entirely sure what you're saying is happening that you do
> not expect to have happening, but I'll take a guess.
>
> The zuul migration portion of the goal work needs to move *all* of the
> Zuul settings for a repo into the correct branch because after the
> migration the job settings will no longer be in project-config at all
> and so zuul won't know which jobs to run on the stable branches if we
> haven't imported the settings.
>
> The migration script tries to figure out which jobs apply to which
> branches of each repo by looking at the branch specifier settings in
> project-config, and then it creates an import patch for each branch with
> the relevant jobs. Subsequent steps in the script change the
> documentation and release notes jobs and then add new python 3.6 testing
> jobs. Those steps only apply to the master branch.
>
> So, if you have a patch importing a python 3 job setting to a stable
> branch of a repo where you aren't expecting it (and it isn't supported),
> that's most likely because project-config has no branch specifiers for
> the job (meaning it should run on all branches). We did find several
> cases where that was true because projects added jobs without branch
> specifiers after the branches were created, and then never back-ported
> those settings to the stable branch. See
> http://lists.openstack.org/pipermail/openstack-dev/2018-August/133594.html
> for details.
>
> Doug
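As an aside, a branch specifier of the kind Doug describes looks roughly like this in project-config (a sketch; the job and repo names are made up):

```yaml
# Without the 'branches' matcher, zuul runs the job on every branch,
# which is why the migration imported it into the stable branches too.
- project:
    name: openstack/example
    check:
      jobs:
        - openstack-tox-py36:
            branches: master
```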
>
>> I hope this helps.
>>
>>
>> Best,
>>
>> Dariusz Krol
>>
>>
>> On 09/27/2018 06:04 PM, Ben Nemec wrote:
>>>
>>> On 9/27/18 10:36 AM, Doug Hellmann wrote:
 Dariusz Krol  writes:

> Hello Champions :)
>
>
> I work on the Trove project and we are wondering if python3 should be
> supported in previous releases as well?
>
> Actually this question was asked by Alan Pevec from the stable branch
> maintainers list.
>
> I saw you added releases up to ocata to support python3 and there are
> already changes on gerrit waiting to be merged but after reading [1] I
> have my doubts about this.
 I'm not sure what you're referring to when you say "added releases up to
 ocata" here. Can you link to the patches that you have questions about?
>>> Possibly the zuul migration patches for all the stable branches? If
>>> so, those don't change the status of python 3 support on the stable
>>> branches, they just split the zuul configuration to make it easier to
>>> add new python 3 jobs on master without affecting the stable branches.
>>>
> Could you elaborate why it is necessary to support previous releases ?
>
>
> Best,
>
> Dariusz Krol
>
>
> [1] https://docs.openstack.org/project-team-guide/stable-branches.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Horizon tutorial didn't work

2018-10-01 Thread Jea-Min Lim
Hello everyone,

I'm following the "Building a Dashboard using Horizon" tutorial.
(link:
https://docs.openstack.org/horizon/latest/contributor/tutorials/dashboard.html#tutorials-dashboard
)

However, the provided custom management command doesn't create the
boilerplate code.

I typed tox -e manage -- startdash mydashboard --target
openstack_dashboard/dashboards/mydashboard

and the attached screenshot shows the execution result.

Are there any recommendations to solve this problem?

Regards.

[image: result_jmlim.PNG]
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev