Re: [openstack-dev] [cinder] Target classes in Cinder

2017-06-02 Thread John Griffith
On Fri, Jun 2, 2017 at 3:51 PM, Jay Bryant 
wrote:

> I had forgotten that we added this and am guessing that other cores did as
> well. As a result, it likely, was not enforced in driver reviews.
>
> I need to better understand the benefit. In don't think there is a hurry
> to remove this right now. Can we put it on the agenda for Denver?

Yeah, I think it's an out of sight out of mind... and maybe just having the
volume/targets module alone
is good enough regardless of whether drivers want to do child inheritance
or member inheritance against
it.

Meh... ok, never mind.​


>
>
> Jay
>
> On Fri, Jun 2, 2017 at 4:14 PM Eric Harney  wrote:
>
>> On 06/02/2017 03:47 PM, John Griffith wrote:
>> > Hey Everyone,
>> >
>> > So quite a while back we introduced a new model for dealing with target
>> > management in the drivers (ie initialize_connection, ensure_export etc).
>> >
>> > Just to summarize a bit:  The original model was that all of the target
>> > related stuff lived in a base class of the base drivers.  Folks would
>> > inherit from said base class and off they'd go.  This wasn't very
>> flexible,
>> > and it's why we ended up with things like two drivers per backend in the
>> > case of FibreChannel support.  So instead of just say having
>> "driver-foo",
>> > we ended up with "driver-foo-iscsi" and "driver-foo-fc", each with their
>> > own CI, configs etc.  Kind of annoying.
>>
>> We'd need separate CI jobs for the different target classes too.
>>
>>
>> > So we introduced this new model for targets, independent connectors or
>> > fabrics so to speak that live in `cinder/volume/targets`.  The idea
>> being
>> > that drivers were no longer locked in to inheriting from a base class to
>> > get the transport layer they wanted, but instead, the targets class was
>> > decoupled, and your driver could just instantiate whichever type they
>> > needed and use it.  This was great in theory for folks like me that if I
>> > ever did FC, rather than create a second driver (the pattern of 3
>> classes:
>> > common, iscsi and FC), it would just be a config option for my driver,
>> and
>> > I'd use the one you selected in config (or both).
>> >
>> > Anyway, I won't go too far into the details around the concept (unless
>> > somebody wants to hear more), but the reality is it's been a couple
>> years
>> > now and currently it looks like there are a total of 4 out of the 80+
>> > drivers in Cinder using this design, blockdevice, solidfire, lvm and
>> drbd
>> > (and I implemented 3 of them I think... so that's not good).
>> >
>> > What I'm wondering is, even though I certainly think this is a FAR
>> SUPERIOR
>> > design to what we had, I don't like having both code-paths and designs
>> in
>> > the code base.  Should we consider reverting the drivers that are using
>> the
>> > new model back and remove cinder/volume/targets?  Or should we start
>> > flagging those new drivers that don't use the new model during review?
>> > Also, what about the legacy/burden of all the other drivers that are
>> > already in place?
>> >
>> > Like I said, I'm biased and I think the new approach is much better in a
>> > number of ways, but that's a different debate.  I'd be curious to see
>> what
>> > others think and what might be the best way to move forward.
>> >
>> > Thanks,
>> > John
>> >
>>
>> Some perspective from my side here:  before reading this mail, I had a
>> bit different idea of what the target_drivers were actually for.
>>
>> The LVM, block_device, and DRBD drivers use this target_driver system
>> because they manage "local" storage and then layer an iSCSI target on
>> top of it.  (scsi-target-utils, or LIO, etc.)  This makes sense from the
>> original POV of the LVM driver, which was doing this to work on multiple
>> different distributions that had to pick scsi-target-utils or LIO to
>> function at all.  The important detail here is that the
>> scsi-target-utils/LIO code could also then be applied to different
>> volume drivers.
>>
>> The Solidfire driver is doing something different here, and using the
>> target_driver classes as an interface upon which it defines its own
>> target driver.  In this case, this splits up the code within the driver
>> itself, but doesn't enable plugging in other target drivers to the
>> Solidfire driver.  So the fact that it's tied to this defined
>> target_driver class interface doesn't change much.
>>
>> The question, I think, mostly comes down to whether you get better code,
>> or better deployment configurability, by a) defining a few target
>> classes for your driver or b) defining a few volume driver classes for
>> your driver.   (See coprhd or Pure for some examples.)
>>
>> I'm not convinced there is any difference in the outcome, so I can't see
>> why we would enforce any policy around this.  The main difference is in
>> which cinder.conf fields you set during deployment, the rest pretty much
>> ends up the same in either scheme.
>>

Re: [openstack-dev] [cinder] Target classes in Cinder

2017-06-02 Thread John Griffith
On Fri, Jun 2, 2017 at 3:11 PM, Eric Harney  wrote:

> On 06/02/2017 03:47 PM, John Griffith wrote:
> > Hey Everyone,
> >
> > So quite a while back we introduced a new model for dealing with target
> > management in the drivers (ie initialize_connection, ensure_export etc).
> >
> > Just to summarize a bit:  The original model was that all of the target
> > related stuff lived in a base class of the base drivers.  Folks would
> > inherit from said base class and off they'd go.  This wasn't very
> flexible,
> > and it's why we ended up with things like two drivers per backend in the
> > case of FibreChannel support.  So instead of just say having
> "driver-foo",
> > we ended up with "driver-foo-iscsi" and "driver-foo-fc", each with their
> > own CI, configs etc.  Kind of annoying.
>
> We'd need separate CI jobs for the different target classes too.
>
>
> > So we introduced this new model for targets, independent connectors or
> > fabrics so to speak that live in `cinder/volume/targets`.  The idea being
> > that drivers were no longer locked in to inheriting from a base class to
> > get the transport layer they wanted, but instead, the targets class was
> > decoupled, and your driver could just instantiate whichever type they
> > needed and use it.  This was great in theory for folks like me that if I
> > ever did FC, rather than create a second driver (the pattern of 3
> classes:
> > common, iscsi and FC), it would just be a config option for my driver,
> and
> > I'd use the one you selected in config (or both).
> >
> > Anyway, I won't go too far into the details around the concept (unless
> > somebody wants to hear more), but the reality is it's been a couple years
> > now and currently it looks like there are a total of 4 out of the 80+
> > drivers in Cinder using this design, blockdevice, solidfire, lvm and drbd
> > (and I implemented 3 of them I think... so that's not good).
> >
> > What I'm wondering is, even though I certainly think this is a FAR
> SUPERIOR
> > design to what we had, I don't like having both code-paths and designs in
> > the code base.  Should we consider reverting the drivers that are using
> the
> > new model back and remove cinder/volume/targets?  Or should we start
> > flagging those new drivers that don't use the new model during review?
> > Also, what about the legacy/burden of all the other drivers that are
> > already in place?
> >
> > Like I said, I'm biased and I think the new approach is much better in a
> > number of ways, but that's a different debate.  I'd be curious to see
> what
> > others think and what might be the best way to move forward.
> >
> > Thanks,
> > John
> >
>
> Some perspective from my side here:  before reading this mail, I had a
> bit different idea of what the target_drivers were actually for.
>
> The LVM, block_device, and DRBD drivers use this target_driver system
> because they manage "local" storage and then layer an iSCSI target on
> top of it.  (scsi-target-utils, or LIO, etc.)  This makes sense from the
> original POV of the LVM driver, which was doing this to work on multiple
> different distributions that had to pick scsi-target-utils or LIO to
> function at all.  The important detail here is that the
> scsi-target-utils/LIO code could also then be applied to different
> volume drivers.
>

​Yeah, that's fair; it is different that they're
creating a target etc.  At least the new code is
sucked up by default and we don't have that mixin
iscsi class any more.  Meaning that drivers that
don't need LIO/Tgt etc don't get it in the import.

Regardless of which way you use things here you end
up sharing this interface anyway, so I guess maybe
none of this topic is even relevant any more.

>
> The Solidfire driver is doing something different here, and using the
> target_driver classes as an interface upon which it defines its own
> target driver.  In this case, this splits up the code within the driver
> itself, but doesn't enable plugging in other target drivers to the
> Solidfire driver.  So the fact that it's tied to this defined
> target_driver class interface doesn't change much.
>
> The question, I think, mostly comes down to whether you get better code,
> or better deployment configurability, by a) defining a few target
> classes for your driver or b) defining a few volume driver classes for
> your driver.   (See coprhd or Pure for some examples.)
>
> I'm not convinced there is any difference in the outcome, so I can't see
> why we would enforce any policy around this.  The main difference is in
> which cinder.conf fields you set during deployment, the rest pretty much
> ends up the same in either scheme.
>

​That's fair, I was just wondering if there was any
opportunity to slim down some of the few remaining things
​in the base driver and the chain of inheritance that we
have driver--->iscsi-->san--->foo, but to your point maybe
it's not really any benefit.

Just thought it might be worth looking to see if there were
some 

Re: [openstack-dev] [cinder] Target classes in Cinder

2017-06-02 Thread Jay Bryant
I had forgotten that we added this and am guessing that other cores did as
well. As a result, it likely, was not enforced in driver reviews.

I need to better understand the benefit. In don't think there is a hurry to
remove this right now. Can we put it on the agenda for Denver?

Jay
On Fri, Jun 2, 2017 at 4:14 PM Eric Harney  wrote:

> On 06/02/2017 03:47 PM, John Griffith wrote:
> > Hey Everyone,
> >
> > So quite a while back we introduced a new model for dealing with target
> > management in the drivers (ie initialize_connection, ensure_export etc).
> >
> > Just to summarize a bit:  The original model was that all of the target
> > related stuff lived in a base class of the base drivers.  Folks would
> > inherit from said base class and off they'd go.  This wasn't very
> flexible,
> > and it's why we ended up with things like two drivers per backend in the
> > case of FibreChannel support.  So instead of just say having
> "driver-foo",
> > we ended up with "driver-foo-iscsi" and "driver-foo-fc", each with their
> > own CI, configs etc.  Kind of annoying.
>
> We'd need separate CI jobs for the different target classes too.
>
>
> > So we introduced this new model for targets, independent connectors or
> > fabrics so to speak that live in `cinder/volume/targets`.  The idea being
> > that drivers were no longer locked in to inheriting from a base class to
> > get the transport layer they wanted, but instead, the targets class was
> > decoupled, and your driver could just instantiate whichever type they
> > needed and use it.  This was great in theory for folks like me that if I
> > ever did FC, rather than create a second driver (the pattern of 3
> classes:
> > common, iscsi and FC), it would just be a config option for my driver,
> and
> > I'd use the one you selected in config (or both).
> >
> > Anyway, I won't go too far into the details around the concept (unless
> > somebody wants to hear more), but the reality is it's been a couple years
> > now and currently it looks like there are a total of 4 out of the 80+
> > drivers in Cinder using this design, blockdevice, solidfire, lvm and drbd
> > (and I implemented 3 of them I think... so that's not good).
> >
> > What I'm wondering is, even though I certainly think this is a FAR
> SUPERIOR
> > design to what we had, I don't like having both code-paths and designs in
> > the code base.  Should we consider reverting the drivers that are using
> the
> > new model back and remove cinder/volume/targets?  Or should we start
> > flagging those new drivers that don't use the new model during review?
> > Also, what about the legacy/burden of all the other drivers that are
> > already in place?
> >
> > Like I said, I'm biased and I think the new approach is much better in a
> > number of ways, but that's a different debate.  I'd be curious to see
> what
> > others think and what might be the best way to move forward.
> >
> > Thanks,
> > John
> >
>
> Some perspective from my side here:  before reading this mail, I had a
> bit different idea of what the target_drivers were actually for.
>
> The LVM, block_device, and DRBD drivers use this target_driver system
> because they manage "local" storage and then layer an iSCSI target on
> top of it.  (scsi-target-utils, or LIO, etc.)  This makes sense from the
> original POV of the LVM driver, which was doing this to work on multiple
> different distributions that had to pick scsi-target-utils or LIO to
> function at all.  The important detail here is that the
> scsi-target-utils/LIO code could also then be applied to different
> volume drivers.
>
> The Solidfire driver is doing something different here, and using the
> target_driver classes as an interface upon which it defines its own
> target driver.  In this case, this splits up the code within the driver
> itself, but doesn't enable plugging in other target drivers to the
> Solidfire driver.  So the fact that it's tied to this defined
> target_driver class interface doesn't change much.
>
> The question, I think, mostly comes down to whether you get better code,
> or better deployment configurability, by a) defining a few target
> classes for your driver or b) defining a few volume driver classes for
> your driver.   (See coprhd or Pure for some examples.)
>
> I'm not convinced there is any difference in the outcome, so I can't see
> why we would enforce any policy around this.  The main difference is in
> which cinder.conf fields you set during deployment, the rest pretty much
> ends up the same in either scheme.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [cinder] Target classes in Cinder

2017-06-02 Thread Eric Harney
On 06/02/2017 03:47 PM, John Griffith wrote:
> Hey Everyone,
> 
> So quite a while back we introduced a new model for dealing with target
> management in the drivers (ie initialize_connection, ensure_export etc).
> 
> Just to summarize a bit:  The original model was that all of the target
> related stuff lived in a base class of the base drivers.  Folks would
> inherit from said base class and off they'd go.  This wasn't very flexible,
> and it's why we ended up with things like two drivers per backend in the
> case of FibreChannel support.  So instead of just say having "driver-foo",
> we ended up with "driver-foo-iscsi" and "driver-foo-fc", each with their
> own CI, configs etc.  Kind of annoying.

We'd need separate CI jobs for the different target classes too.


> So we introduced this new model for targets, independent connectors or
> fabrics so to speak that live in `cinder/volume/targets`.  The idea being
> that drivers were no longer locked in to inheriting from a base class to
> get the transport layer they wanted, but instead, the targets class was
> decoupled, and your driver could just instantiate whichever type they
> needed and use it.  This was great in theory for folks like me that if I
> ever did FC, rather than create a second driver (the pattern of 3 classes:
> common, iscsi and FC), it would just be a config option for my driver, and
> I'd use the one you selected in config (or both).
> 
> Anyway, I won't go too far into the details around the concept (unless
> somebody wants to hear more), but the reality is it's been a couple years
> now and currently it looks like there are a total of 4 out of the 80+
> drivers in Cinder using this design, blockdevice, solidfire, lvm and drbd
> (and I implemented 3 of them I think... so that's not good).
> 
> What I'm wondering is, even though I certainly think this is a FAR SUPERIOR
> design to what we had, I don't like having both code-paths and designs in
> the code base.  Should we consider reverting the drivers that are using the
> new model back and remove cinder/volume/targets?  Or should we start
> flagging those new drivers that don't use the new model during review?
> Also, what about the legacy/burden of all the other drivers that are
> already in place?
> 
> Like I said, I'm biased and I think the new approach is much better in a
> number of ways, but that's a different debate.  I'd be curious to see what
> others think and what might be the best way to move forward.
> 
> Thanks,
> John
> 

Some perspective from my side here:  before reading this mail, I had a
bit different idea of what the target_drivers were actually for.

The LVM, block_device, and DRBD drivers use this target_driver system
because they manage "local" storage and then layer an iSCSI target on
top of it.  (scsi-target-utils, or LIO, etc.)  This makes sense from the
original POV of the LVM driver, which was doing this to work on multiple
different distributions that had to pick scsi-target-utils or LIO to
function at all.  The important detail here is that the
scsi-target-utils/LIO code could also then be applied to different
volume drivers.

The Solidfire driver is doing something different here, and using the
target_driver classes as an interface upon which it defines its own
target driver.  In this case, this splits up the code within the driver
itself, but doesn't enable plugging in other target drivers to the
Solidfire driver.  So the fact that it's tied to this defined
target_driver class interface doesn't change much.

The question, I think, mostly comes down to whether you get better code,
or better deployment configurability, by a) defining a few target
classes for your driver or b) defining a few volume driver classes for
your driver.   (See coprhd or Pure for some examples.)

I'm not convinced there is any difference in the outcome, so I can't see
why we would enforce any policy around this.  The main difference is in
which cinder.conf fields you set during deployment, the rest pretty much
ends up the same in either scheme.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Target classes in Cinder

2017-06-02 Thread Patrick East
On Fri, Jun 2, 2017 at 12:47 PM, John Griffith 
wrote:

> Hey Everyone,
>
> So quite a while back we introduced a new model for dealing with target
> management in the drivers (ie initialize_connection, ensure_export etc).
>
> Just to summarize a bit:  The original model was that all of the target
> related stuff lived in a base class of the base drivers.  Folks would
> inherit from said base class and off they'd go.  This wasn't very flexible,
> and it's why we ended up with things like two drivers per backend in the
> case of FibreChannel support.  So instead of just say having "driver-foo",
> we ended up with "driver-foo-iscsi" and "driver-foo-fc", each with their
> own CI, configs etc.  Kind of annoying.
>
> So we introduced this new model for targets, independent connectors or
> fabrics so to speak that live in `cinder/volume/targets`.  The idea being
> that drivers were no longer locked in to inheriting from a base class to
> get the transport layer they wanted, but instead, the targets class was
> decoupled, and your driver could just instantiate whichever type they
> needed and use it.  This was great in theory for folks like me that if I
> ever did FC, rather than create a second driver (the pattern of 3 classes:
> common, iscsi and FC), it would just be a config option for my driver, and
> I'd use the one you selected in config (or both).
>
> Anyway, I won't go too far into the details around the concept (unless
> somebody wants to hear more), but the reality is it's been a couple years
> now and currently it looks like there are a total of 4 out of the 80+
> drivers in Cinder using this design, blockdevice, solidfire, lvm and drbd
> (and I implemented 3 of them I think... so that's not good).
>
> What I'm wondering is, even though I certainly think this is a FAR
> SUPERIOR design to what we had, I don't like having both code-paths and
> designs in the code base.  Should we consider reverting the drivers that
> are using the new model back and remove cinder/volume/targets?  Or should
> we start flagging those new drivers that don't use the new model during
> review?  Also, what about the legacy/burden of all the other drivers that
> are already in place?
>

My guess is that trying to push all the drivers into the model would almost
definitely ensure that both code paths are alive and require maintenance
for years to come. Trying to get everyone moved over would be a pretty
large effort and (unless we get real harsh about it) would take a looong
time to get everyone on board. After the transition we would probably end
up with shims all over support the older driver class naming too. Either
that or we would end up with the same top level driver classes we have now,
and maybe internally they use a target instance but not in the configurable
pick and choose way that the model was intended for, and the whole exercise
wouldn't really do much other than have more drivers implement targets and
cause some code churn.

IMO the target stuff is a nice architecture for drivers to follow, but I
don't think its really something we need to do. I could see this being much
more important to push on if we had plans to split up the driver apis into
a provisioner and target kinda thing that the volume manager knows about,
but as long as they all are sent through a single driver class api then
it's all just implementation details behind that.


>
> Like I said, I'm biased and I think the new approach is much better in a
> number of ways, but that's a different debate.  I'd be curious to see what
> others think and what might be the best way to move forward.
>
> Thanks,
> John
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] rh1 issues post-mortem

2017-06-02 Thread Wesley Hayutin
On Fri, Jun 2, 2017 at 4:42 PM, Ben Nemec  wrote:

>
>
> On 03/28/2017 05:01 PM, Ben Nemec wrote:
>
>> Final (hopefully) update:
>>
>> All active compute nodes have been rebooted and things seem to be stable
>> again.  Jobs are even running a little faster, so I'm thinking this had
>> a detrimental effect on performance too.  I've set a reminder for about
>> two months from now to reboot again if we're still using this environment.
>>
>
> The reminder popped up this week, and I've rebooted all the compute nodes
> again.  It went pretty smoothly so I doubt anyone noticed that it happened
> (except that I forgot to restart the zuul-status webapp), but if you run
> across any problems let me know.


Thanks Ben! http://zuul-status.tripleo.org/ is awesome, I missed it.


>
>
>
>> On 03/24/2017 12:48 PM, Ben Nemec wrote:
>>
>>> To follow-up on this, we've continued to hit this issue on other compute
>>> nodes.  Not surprising, of course.  They've all been up for about the
>>> same period of time and have had largely even workloads.
>>>
>>> It has caused problems though because it is cropping up faster than I
>>> can respond (it takes a few hours to cycle all the instances off a
>>> compute node, and I need to sleep sometime :-), so I've started
>>> pre-emptively rebooting compute nodes to get ahead of it.  Hopefully
>>> I'll be able to get all of the potentially broken nodes at least
>>> disabled by the end of the day so we'll have another 3 months before we
>>> have to worry about this again.
>>>
>>> On 03/24/2017 11:47 AM, Derek Higgins wrote:
>>>
 On 22 March 2017 at 22:36, Ben Nemec  wrote:

> Hi all (owl?),
>
> You may have missed it in all the ci excitement the past couple of
> days, but
> we had a partial outage of rh1 last night.  It turns out the OVS port
> issue
> Derek discussed in
> http://lists.openstack.org/pipermail/openstack-dev/2016-Dece
> mber/109182.html
>
>
> reared its ugly head on a few of our compute nodes, which caused them
> to be
> unable to spawn new instances.  They kept getting scheduled since it
> looked
> like they were underutilized, which caused most of our testenvs to
> fail.
>
> I've rebooted the affected nodes, as well as a few more that looked
> like
> they might run into the same problem in the near future.  Everything
> looks
> to be working well again since sometime this morning (when I disabled
> the
> broken compute nodes), but there aren't many jobs passing due to the
> plethora of other issues we're hitting in ci.  There have been some
> stable
> job passes though so I believe things are working again.
>
> As far as preventing this in the future, the right thing to do would
> probably be to move to a later release of OpenStack (either point or
> major)
> where hopefully this problem would be fixed.  However, I'm hesitant
> to do
> that for a few reasons.  First is "the devil you know". Outside of this
> issue, we've gotten rh1 pretty rock solid lately.  It's been
> overworked, but
> has been cranking away for months with no major cloud-related outages.
> Second is that an upgrade would be a major process, probably
> involving some
> amount of downtime.  Since the long-term plan is to move everything
> to RDO
> cloud I'm not sure that's the best use of our time at this point.
>

 +1 on keeping the status quo until moving to rdo-cloud.


> Instead, my plan for the near term is to keep a closer eye on the error
> notifications from the services.  We previously haven't had anything
> consuming those, but I've dropped a little tool on the controller
> that will
> dump out error notifications so we can watch for signs of this
> happening
> again.  I suspect the signs were there long before the actual breakage
> happened, but nobody was looking for them.  Now I will be.
>
> So that's where things stand with rh1.  Any comments or concerns
> welcome.
>
> Thanks.
>
> -Ben
>
> 
> __
>
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

 
 __


 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>> 
>>> __
>>>
>>> OpenStack Development Mailing List (not for usage 

Re: [openstack-dev] [tripleo] rh1 issues post-mortem

2017-06-02 Thread Ben Nemec



On 03/28/2017 05:01 PM, Ben Nemec wrote:

Final (hopefully) update:

All active compute nodes have been rebooted and things seem to be stable
again.  Jobs are even running a little faster, so I'm thinking this had
a detrimental effect on performance too.  I've set a reminder for about
two months from now to reboot again if we're still using this environment.


The reminder popped up this week, and I've rebooted all the compute 
nodes again.  It went pretty smoothly so I doubt anyone noticed that it 
happened (except that I forgot to restart the zuul-status webapp), but 
if you run across any problems let me know.




On 03/24/2017 12:48 PM, Ben Nemec wrote:

To follow-up on this, we've continued to hit this issue on other compute
nodes.  Not surprising, of course.  They've all been up for about the
same period of time and have had largely even workloads.

It has caused problems though because it is cropping up faster than I
can respond (it takes a few hours to cycle all the instances off a
compute node, and I need to sleep sometime :-), so I've started
pre-emptively rebooting compute nodes to get ahead of it.  Hopefully
I'll be able to get all of the potentially broken nodes at least
disabled by the end of the day so we'll have another 3 months before we
have to worry about this again.

On 03/24/2017 11:47 AM, Derek Higgins wrote:

On 22 March 2017 at 22:36, Ben Nemec  wrote:

Hi all (owl?),

You may have missed it in all the ci excitement the past couple of
days, but
we had a partial outage of rh1 last night.  It turns out the OVS port
issue
Derek discussed in
http://lists.openstack.org/pipermail/openstack-dev/2016-December/109182.html


reared its ugly head on a few of our compute nodes, which caused them
to be
unable to spawn new instances.  They kept getting scheduled since it
looked
like they were underutilized, which caused most of our testenvs to
fail.

I've rebooted the affected nodes, as well as a few more that looked
like
they might run into the same problem in the near future.  Everything
looks
to be working well again since sometime this morning (when I disabled
the
broken compute nodes), but there aren't many jobs passing due to the
plethora of other issues we're hitting in ci.  There have been some
stable
job passes though so I believe things are working again.

As far as preventing this in the future, the right thing to do would
probably be to move to a later release of OpenStack (either point or
major)
where hopefully this problem would be fixed.  However, I'm hesitant
to do
that for a few reasons.  First is "the devil you know". Outside of this
issue, we've gotten rh1 pretty rock solid lately.  It's been
overworked, but
has been cranking away for months with no major cloud-related outages.
Second is that an upgrade would be a major process, probably
involving some
amount of downtime.  Since the long-term plan is to move everything
to RDO
cloud I'm not sure that's the best use of our time at this point.


+1 on keeping the status quo until moving to rdo-cloud.



Instead, my plan for the near term is to keep a closer eye on the error
notifications from the services.  We previously haven't had anything
consuming those, but I've dropped a little tool on the controller
that will
dump out error notifications so we can watch for signs of this
happening
again.  I suspect the signs were there long before the actual breakage
happened, but nobody was looking for them.  Now I will be.

So that's where things stand with rh1.  Any comments or concerns
welcome.

Thanks.

-Ben

__


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ptls][all] Potential Queens Goal: Migrate Off Paste

2017-06-02 Thread Clay Gerrard
On Fri, Jun 2, 2017 at 1:21 PM, Matt Riedemann  wrote:

>
> I don't think the maintenance issue is the prime motivator, it's the fact
> paste is in /etc which makes it a config file and therefore an impediment
> to smooth upgrades. The more we can move into code, like default policy and
> privsep, the better.


Ah, that makes sense, Swift has had to do all kinds of non-sense to
manipulate pipelines to facilitate smooth upgrade.  But I always assumed
our heavy use of middleware and support for custom extension via third
party middleware just meant it was complexity inherent to our problem we
had to eat until we wrote something better.

https://github.com/openstack/swift/blob/d51ecb4ecc559bf4628159edc2119e96c05fe6c5/swift/proxy/server.py#L50

-Clay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ptls][all] Potential Queens Goal: Move policy and policy docs into code

2017-06-02 Thread Matt Riedemann

On 6/1/2017 12:54 PM, Lance Bragstad wrote:

Hi all,

I've proposed a community-wide goal for Queens to move policy into code 
and supply documentation for each policy [0]. I've included references 
to existing documentation and specifications completed by various 
projects and attempted to lay out the benefits for both developers and 
operators.


I'd greatly appreciate any feedback or discussion.

Thanks!

Lance


[0] https://review.openstack.org/#/c/469954/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



+1, especially because now I don't have to write the governance patch 
for this which was a TODO of mine from the summit.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] rdo and tripleo container builds and CI

2017-06-02 Thread Wesley Hayutin
On Fri, Jun 2, 2017 at 11:42 AM, Attila Darazs  wrote:

> If the topics below interest you and you want to contribute to the
> discussion, feel free to join the next meeting:
>
> Time: Thursdays, 14:30-15:30 UTC
> Place: https://bluejeans.com/4113567798/
>
> Full minutes: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting
>
> = CI Promotion problems =
>
> The last promoted DLRN hash is from 21st of May, so now it's 12 day old.
> This is mostly due to not being able to thoroughly gate everything that
> consists of TripleO and we're right in the middle of the cycle where most
> work happens and a lot of code gets merged into every project.
>
> However we should still try our best to improve the situation. If you're
> in any position to help solve our blocker problems (the bugs are announced
> on #tripleo regularly), please lend a hand!
>
> = Smaller topics =
>
> * We also had a couple of issues due to trying to bump Ansible from 2.2 to
> version 2.3 in Quickstart. This uncovered a couple of gaps in our gating,
> and we decided to revert until we fix them.
>
> * We're on track with transitioning some OVB jobs to RDO Cloud, now we
> need to create our infrastructure there and add the cloud definition to
> openstack-infra/project-config.
>
> * We have RDO containers built on the CentOS CI system[1]. We should
> eventually integrate them into the promotion pipeline. Maybe use them as
> the basis for upstream CI runs eventually?
>

Thanks for sending this out Attila..

So after some discussion with David and others I wanted to spell out a bit
of a nuance that may cause this to take a little bit more time and effort.

The original plan was to build and test containers as part of the rdo
master pipeline [1].  We were on track to complete this work in the next
couple days.
However what we realized was that rdo has to feed tripleo container builds
for tripleo promotions, and tripleo promotions are always done on a random
new delorean hash.
There is no way to determine which hash tripleo will pick up, and therefore
no way to ensure the containers and rpms are at the exact same versions.
It's critical that rpms and containers are built using the exact same repos
afaik.

It is also good form and upstream policy for the tools, jobs and build
artifacts to be created upstream.

So the new plan is to build and test containers in the tripleo periodic
jobs that are used for the tripleo promotions.
When the containers pass a build they will be uploaded to the container
registry in rdo with a tag, e.g. current-tripleo.

The main point of this email is to levelset expectations that it will take
a little more time to get this done upstream.
I am very open to hearing suggestions, comments and critques of the new
high level plan.

Thank you!

[1]
https://ci.centos.org/view/rdo/view/promotion-pipeline/job/rdo_trunk-promote-master-current-tripleo/



>
> * Our periodic tempest jobs are getting good results on both Ocata and
> Master, Arx keeps ironing out the remaining failures. See the current
> status here: [2].
>
> * The featureset discussion is coming to an end, we have a good idea how
> what should go in which config files, now the cores should document that to
> help contributors make the right calls when creating new config files or
> modifying existing ones.
>
> Thank you for reading the summary. Have a great weekend!
>
> Best regards,
> Attila
>
> [1] https://ci.centos.org/job/rdo-tripleo-containers-build/
> [2] http://status.openstack.org/openstack-health/#/g/project/ope
> nstack-infra~2Ftripleo-ci?searchJob=
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ptls][all] Potential Queens Goal: Migrate Off Paste

2017-06-02 Thread Matt Riedemann

On 6/2/2017 1:14 PM, Clay Gerrard wrote:

Can we make this (at least) two (community?) goals?

#1 Make a thing that is not paste that is better than paste (i.e. > 
works, ie >= works & is maintained)

#2 Have some people/projects "migrate" to it

If the goal is just "take over paste maintenance" that's maybe ok - but 
is that an "OpenStack community" goal or just something that someone who 
has the bandwidth to do could do?  It also sounds cheaper and probably 
about as good.


Alternatively we can just keep using paste until we're tired of working 
around it's bugs/limitations - and then replace it with something in 
tree that implements only 100% of what the project using it needs to get 
done - then if a few projects do this and they see they're maintaining 
similar code they could extract it to a common library - but iff sharing 
their complexity isolated behind an abstraction sounds better than 
having multiple simpler and more agile ways to do similar-ish stuff - 
and only *then* make a thing that is not paste but serves a similar 
use-case as paste and is also maintained and easy to migrate too from 
paste.  At which point it might be reasonable to say "ok, community, new 
goal, if you're not already using the thing that's not paste but does 
about the same as paste - then we want to organize some people in the 
community experienced with the effort of such a migration to come assist 
*all openstack projects* (who use paste) in completing the goal of 
getting off paste - because srly, it's *that* important"


-Clay



I don't think the maintenance issue is the prime motivator, it's the 
fact paste is in /etc which makes it a config file and therefore an 
impediment to smooth upgrades. The more we can move into code, like 
default policy and privsep, the better.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Target classes in Cinder

2017-06-02 Thread Clay Gerrard
On Fri, Jun 2, 2017 at 12:47 PM, John Griffith 
wrote:

>
>
> What I'm wondering is, even though I certainly think this is a FAR
> SUPERIOR design to what we had, I don't like having both code-paths and
> designs in the code base.
>

Might be useful to enumerate those?  Perhaps drawing attention to the
benefits would spur some driver maintainers that haven't made the switch to
think they could leverage the work into something impactful?


> Should we consider reverting the drivers that are using the new model back
> and remove cinder/volume/targets?
>

Probably not anytime soon if it means dropping 76 of 80 drivers?  Or at
least that's a different discussion ;)


> Or should we start flagging those new drivers that don't use the new model
> during review?
>

Seems like a reasonable social construct to promote going forward - at
least it puts a tourniquet on it.  Perhaps there some intree development
documentation that could be updated to point people in the right direction
or some warnings that can be placed around the legacy patterns to keep
people for stumbling on bad examples?


> Also, what about the legacy/burden of all the other drivers that are
> already in place?
>
>
What indeed... but that's down the road right - for the moment it's just
figuring how to give things a bit of a kick in the pants?  Or maybe
admitting w/o a kick in the pants - living with the cruft is the plan of
record?

I'm curious to see how this goes, Swift has some plugin interfaces that
have been exposed through the ages and the one thing constant with
interface patterns is that the cruft builds up...

Good Luck!

-Clay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Target classes in Cinder

2017-06-02 Thread Kendall Nelson
I personally agree that the target classes route is a much cleaner and more
efficient way of doing it.  Also, that it doesn't make sense to have all
the code duplication to support doing it both ways.

If other people agree with that- maybe we can start with not taking new
drivers that do it the common/iscsi/fc way? And then pick a release to
refactor drivers and make that the focus kind of like we did with ocata
being a stabilization release? Assuming that asking the larger number of
drivers to switch formats isn't asking the impossible. I dunno, just a
thought :)

-Kendall (diablo_rojo)

On Fri, Jun 2, 2017 at 2:48 PM John Griffith 
wrote:

> Hey Everyone,
>
> So quite a while back we introduced a new model for dealing with target
> management in the drivers (ie initialize_connection, ensure_export etc).
>
> Just to summarize a bit:  The original model was that all of the target
> related stuff lived in a base class of the base drivers.  Folks would
> inherit from said base class and off they'd go.  This wasn't very flexible,
> and it's why we ended up with things like two drivers per backend in the
> case of FibreChannel support.  So instead of just say having "driver-foo",
> we ended up with "driver-foo-iscsi" and "driver-foo-fc", each with their
> own CI, configs etc.  Kind of annoying.
>
> So we introduced this new model for targets, independent connectors or
> fabrics so to speak that live in `cinder/volume/targets`.  The idea being
> that drivers were no longer locked in to inheriting from a base class to
> get the transport layer they wanted, but instead, the targets class was
> decoupled, and your driver could just instantiate whichever type they
> needed and use it.  This was great in theory for folks like me that if I
> ever did FC, rather than create a second driver (the pattern of 3 classes:
> common, iscsi and FC), it would just be a config option for my driver, and
> I'd use the one you selected in config (or both).
>
> Anyway, I won't go too far into the details around the concept (unless
> somebody wants to hear more), but the reality is it's been a couple years
> now and currently it looks like there are a total of 4 out of the 80+
> drivers in Cinder using this design, blockdevice, solidfire, lvm and drbd
> (and I implemented 3 of them I think... so that's not good).
>
> What I'm wondering is, even though I certainly think this is a FAR
> SUPERIOR design to what we had, I don't like having both code-paths and
> designs in the code base.  Should we consider reverting the drivers that
> are using the new model back and remove cinder/volume/targets?  Or should
> we start flagging those new drivers that don't use the new model during
> review?  Also, what about the legacy/burden of all the other drivers that
> are already in place?
>
> Like I said, I'm biased and I think the new approach is much better in a
> number of ways, but that's a different debate.  I'd be curious to see what
> others think and what might be the best way to move forward.
>
> Thanks,
> John
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Target classes in Cinder

2017-06-02 Thread John Griffith
Hey Everyone,

So quite a while back we introduced a new model for dealing with target
management in the drivers (ie initialize_connection, ensure_export etc).

Just to summarize a bit:  The original model was that all of the target
related stuff lived in a base class of the base drivers.  Folks would
inherit from said base class and off they'd go.  This wasn't very flexible,
and it's why we ended up with things like two drivers per backend in the
case of FibreChannel support.  So instead of just say having "driver-foo",
we ended up with "driver-foo-iscsi" and "driver-foo-fc", each with their
own CI, configs etc.  Kind of annoying.

So we introduced this new model for targets, independent connectors or
fabrics so to speak that live in `cinder/volume/targets`.  The idea being
that drivers were no longer locked in to inheriting from a base class to
get the transport layer they wanted, but instead, the targets class was
decoupled, and your driver could just instantiate whichever type they
needed and use it.  This was great in theory for folks like me that if I
ever did FC, rather than create a second driver (the pattern of 3 classes:
common, iscsi and FC), it would just be a config option for my driver, and
I'd use the one you selected in config (or both).

Anyway, I won't go too far into the details around the concept (unless
somebody wants to hear more), but the reality is it's been a couple years
now and currently it looks like there are a total of 4 out of the 80+
drivers in Cinder using this design, blockdevice, solidfire, lvm and drbd
(and I implemented 3 of them I think... so that's not good).

What I'm wondering is, even though I certainly think this is a FAR SUPERIOR
design to what we had, I don't like having both code-paths and designs in
the code base.  Should we consider reverting the drivers that are using the
new model back and remove cinder/volume/targets?  Or should we start
flagging those new drivers that don't use the new model during review?
Also, what about the legacy/burden of all the other drivers that are
already in place?

Like I said, I'm biased and I think the new approach is much better in a
number of ways, but that's a different debate.  I'd be curious to see what
others think and what might be the best way to move forward.

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ptls][all] Potential Queens Goal: Migrate Off Paste

2017-06-02 Thread Clay Gerrard
Can we make this (at least) two (community?) goals?

#1 Make a thing that is not paste that is better than paste (i.e. > works,
ie >= works & is maintained)
#2 Have some people/projects "migrate" to it

If the goal is just "take over paste maintenance" that's maybe ok - but is
that an "OpenStack community" goal or just something that someone who has
the bandwidth to do could do?  It also sounds cheaper and probably about as
good.

Alternatively we can just keep using paste until we're tired of working
around it's bugs/limitations - and then replace it with something in tree
that implements only 100% of what the project using it needs to get done -
then if a few projects do this and they see they're maintaining similar
code they could extract it to a common library - but iff sharing their
complexity isolated behind an abstraction sounds better than having
multiple simpler and more agile ways to do similar-ish stuff - and only
*then* make a thing that is not paste but serves a similar use-case as
paste and is also maintained and easy to migrate too from paste.  At which
point it might be reasonable to say "ok, community, new goal, if you're not
already using the thing that's not paste but does about the same as paste -
then we want to organize some people in the community experienced with the
effort of such a migration to come assist *all openstack projects* (who use
paste) in completing the goal of getting off paste - because srly, it's
*that* important"

-Clay

On Wed, May 31, 2017 at 1:38 PM, Mike  wrote:

> Hello everyone,
>
> As part of our community wide goals process [1], we will discuss the
> potential goals that came out of the forum session in Boston [2].
> These discussions will aid the TC in making a final decision of what
> goals the community will work towards in the Queens release.
>
> For this thread we will be discussing migrating off paste. This was
> suggested by Sean Dague. I’m not sure if he’s leading this effort, but
> here’s a excerpt from him to get us started:
>
> A migration path off of paste would be a huge win. Paste deploy is
> unmaintained (as noted in the etherpad) and being in etc means it's
> another piece of gratuitous state that makes upgrading harder than it
> really should be. This is one of those that is going to require
> someone to commit to working out that migration path up front. But it
> would be a pretty good chunk of debt and upgrade ease.
>
>
> [1] - https://governance.openstack.org/tc/goals/index.html
> [2] - https://etherpad.openstack.org/p/BOS-forum-Queens-Goals
>
> —
> Mike Perez
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Onboarding rooms postmortem, what did you do, what worked, lessons learned

2017-06-02 Thread Kendall Nelson
Hello Everyone :)



So I just want to summarize the successes and improvement points people
have brought so that we can make the next round of onboarding an even
bigger success!



What worked:

   -

   Having material prepared ahead of time that is more interactive to get
   people involved
   -

   Having more than one representative of the project there to help out
   -

   Projectors were an asset
   -

   Mascot stickers!



Things to Improve:

   -

   Minimize conflicts between other Summit talks and the onboarding session
   going on
   -

   Recording the sessions
   -

   Bigger rooms?
   -

   Make sure attendees and project reps are aware of what it covered in the
   upstream training
   -

   Could advertise more
   -

   Don’t overlap or go up until happy hour if possible ;)
   -

   Have different options of durations for projects to sign up for



Feel free to correct or add to this list :)


-Kendall (diablo_rojo)

On Thu, Jun 1, 2017 at 3:36 AM Thierry Carrez  wrote:

> Jeremy Stanley wrote:
> > On 2017-05-19 09:22:07 -0400 (-0400), Sean Dague wrote:
> > [...]
> >> the project,
> >
> > I hosted the onboarding session for the Infrastructure team. For
> > various logistical reasons discussed on the planning thread before
> > the PTG, it was a shared session with many other "horizontal" teams
> > (QA, Requirements, Stable, Release). We carved the 90-minute block
> > up into individual subsessions for each team, though due to
> > scheduling conflicts I was only able to attend the second half
> > (Release and Infra). Attendance was also difficult to gauge; we had
> > several other regulars from the Infra team present in the audience,
> > people associated with other teams with which we shared the room,
> > and an assortment of new faces but hard to tell which session(s)
> > they were mainly there to see.
>
> Doug and I ran the "Release management" segment of that shared slot.
>
> >> what you did in the room,
> >
> > I prepared a quick (5-10 minute) "help wanted" intro slide deck to
> > set the stage, then transitioned to a less formal mix of Q and
> > open discussion of some of the exciting things we're working on
> > currently. I felt like we didn't really get as many solid questions
> > as I was hoping, but the back-and-forth with other team members in
> > the room about our priority efforts was definitely a good way to
> > fill in the gaps between.
>
> We had a quick slidedeck to introduce what the release team actually
> does (not that much), what are the necessary skills (not really ninjas)
> and a base intro on our process. The idea was to inspire others to join
> the team by making it more approachable, and stating that new faces were
> definitely needed.
>
> >> what you think worked,
> >
> > The format wasn't bad. Given the constraints we were under for this,
> > sharing seems to have worked out pretty well for us and possibly
> > seeded the audience with people who were interested in what those
> > other teams had to say and stuck around to see me ramble.
>
> I liked the room setup (classroom style) which is conducive to learning.
>
> >> what you would have done differently
> > [...]
> >
> > The goal I had was to drum up some additional solid contributors to
> > our team, though the upshot (not necessarily negative, just not what
> > I expected) was that we seemed to get more interest from "adjacent
> > technologies" representatives interested in what we were doing and
> > how to replicate it in their ecosystems. If that ends up being a
> > significant portion of the audience going forward, it's possible we
> > could make some adjustments to our approach in an attempt to entice
> > them to collaborate further on co-development of our tools and
> > processes.
>
> Attracting the right set of people in the room is definitely a
> challenge. I don't know if regrouping several teams into the same slot
> was a good idea in that respect. Maybe have shorter slots for smaller
> teams, but still give them their own slot in the schedule ?
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Status update, Jun 2

2017-06-02 Thread Thierry Carrez
Sean McGinnis wrote:
>>
>> == Need for a TC meeting next Tuesday ==
>>
>> Based on past discussions, we'll call a meeting on Jun 6 at 20:00 UTC on
>> #openstack-meeting to specifically discuss the postgresql question,
>> hopefully unblocking the situation and defining a path forward. I'm
>> traveling on that day, so I'd like some other TC member to chair and
>> moderate that discussion. Let me know if you're interested. Sean
>> McGinnis volunteered, but since he might be traveling too it would be
>> great to have someone else signed up.
>>
> 
> I had to adjust my travel plans, so I will now be around with no conflicts.
> I can chair the meeting.

Deal!
Thanks for volunteering.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][infra][python3] how to handle release tools for python 3

2017-06-02 Thread Doug Hellmann
As we discussed in the team meeting today, I have filed reviews to add a
Python 3.5 unit test job to the release-tools repository:

https://review.openstack.org/470350  update semver module for python 3.5
https://review.openstack.org/470352  add python 3.5 unit test job for 
release-tools repository

There are 2 remaining tools that we use regularly that haven't been
ported.

openstack-infra/project-config/jenkins/scripts/release-tools/launchpad_add_comment.py
requires launchpadlib, which has at least one dependency that is not
available for Python 3. I propose that we continue to run this script
under Python 2, until all projects are migrated to storyboard and we can
drop it completely.

openstack-infra/release-tools/announce.sh uses some python programs in
the release-tools repository. Those work under python 3, but they are a
bit odd because they are the last remaining tools used by the automation
that live in that git repo. Everything else has either moved to
openstack/releases or openstack-infra/project-config. If we move these
tools, we will have all of our active scripts in a consistent place.

1. If we move the scripts to openstack/releases then we can easily
use the release note generation tool as part of the validation jobs,
and eliminate (or at least reduce) issues with release announcement
failures. The actual announcement job will have to clone the releases
repo to run the tool, but it already has to do that with the
release-tools repo.

2. The other option is to move the scripts to
openstack-infra/project-config.  I think this will end up being
more work, because that repository is not set up to encourage using
tox to configure virtualenvs where we can run console scripts, and
these tools rely on that technique right now. If we were starting
from scratch I think it would make sense to put them in project-config
with the other release tools, but they were designed in a way that
makes that more work right now.

Before I start working on option 1, I wanted to get some feedback from
the rest of the team.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] CI Squad Meeting Summary (week 22) - Promotion Problems

2017-06-02 Thread Attila Darazs
If the topics below interest you and you want to contribute to the 
discussion, feel free to join the next meeting:


Time: Thursdays, 14:30-15:30 UTC
Place: https://bluejeans.com/4113567798/

Full minutes: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting

= CI Promotion problems =

The last promoted DLRN hash is from 21st of May, so now it's 12 day old. 
This is mostly due to not being able to thoroughly gate everything that 
consists of TripleO and we're right in the middle of the cycle where 
most work happens and a lot of code gets merged into every project.


However we should still try our best to improve the situation. If you're 
in any position to help solve our blocker problems (the bugs are 
announced on #tripleo regularly), please lend a hand!


= Smaller topics =

* We also had a couple of issues due to trying to bump Ansible from 2.2 
to version 2.3 in Quickstart. This uncovered a couple of gaps in our 
gating, and we decided to revert until we fix them.


* We're on track with transitioning some OVB jobs to RDO Cloud, now we 
need to create our infrastructure there and add the cloud definition to 
openstack-infra/project-config.


* We have RDO containers built on the CentOS CI system[1]. We should 
eventually integrate them into the promotion pipeline. Maybe use them as 
the basis for upstream CI runs eventually?


* Our periodic tempest jobs are getting good results on both Ocata and 
Master, Arx keeps ironing out the remaining failures. See the current 
status here: [2].


* The featureset discussion is coming to an end, we have a good idea how 
what should go in which config files, now the cores should document that 
to help contributors make the right calls when creating new config files 
or modifying existing ones.


Thank you for reading the summary. Have a great weekend!

Best regards,
Attila

[1] https://ci.centos.org/job/rdo-tripleo-containers-build/
[2] 
http://status.openstack.org/openstack-health/#/g/project/openstack-infra~2Ftripleo-ci?searchJob=


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ops] OpenStack Days UK 2017 - Call for speakers

2017-06-02 Thread Nick Jones
Hello!

Organisation of the next edition of OpenStack Days UK is in full swing, and in 
case you missed the announcement, the event is to be held in Central London 
(Bishopsgate) on the 26th of September.  Full details are on the website:  
https://openstackdays.uk 

At this point we’d like to formally invite anyone who is interested in 
presenting at this event to submit their proposal here:

https://www.papercall.io/openstackdaysuk 


This CFP closes on June 24th, and although there’s no overarching theme for the 
day the link contains a few suggested topics.

We also have a few sponsorship spots available, with a discount of 25% for 
those signing up early and committing to make payment before 1st of July. The 
sponsor’s prospectus is here:

http://openstackdays.uk/2017/wp-content/uploads/2017/04/OpenStackDayUK2017-SponsorshipOpportunities.pdf
 


See you there!

--

-Nick


-- 
DataCentred Limited registered in England and Wales no. 05611763


signature.asc
Description: Message signed with OpenPGP
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Supporting volume_type when booting from volume

2017-06-02 Thread Matt Riedemann

On 6/2/2017 12:40 AM, 한승진 wrote:

Hello, stackers

I am just curious about the results of lots of discussions on the below 
blueprint.


https://blueprints.launchpad.net/nova/+spec/support-volume-type-with-bdm-parameter

Can I ask what the concolusion is?




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



There wasn't one really. There is a mailing list discussion here:

http://lists.openstack.org/pipermail/openstack-dev/2017-May/117242.html

Which turned into a discussion about porcelain APIs.

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] placement/resource providers update 25

2017-06-02 Thread Chris Dent



Placement update 25. Only 75 more to reach 100.

# What Matters Most

Claims against the placement API remain the highest priority. There's
plenty of other work in progress too which needs to advance. Lots of
links within.

# What's Changed

The entire shared resource providers stack has merged. This doesn't
mean we have support for them yet, but rather that it is possible to
express a query of the placement db that will include results
that are associated (via aggregates) with shared resource providers.

A new version of the os-traits library was required because the
routine it was using to walk modules could escape the local package,
leading to either brokenness or at least weirdness.

Work has begun on having project and user id information included in
allocations (see below).

Incremental progress across many other areas.

# Help Wanted

(This section _has_ changed since last time, removing some bug links
because the fixes have been started and are now linked below.)

Areas where volunteers are needed.

* General attention to bugs tagged placement:
   https://bugs.launchpad.net/nova/+bugs?field.tag=placement

* Helping to create api documentation for placement (see the Docs
   section below).

# Main Themes

## Claims in the Scheduler

Work is in progress on having the scheduler make resource claims.

  https://review.openstack.org/#/q/status:open+topic:bp/placement-claims

The current choice for how to do this is to pass instance uuids as a
separate parameter in the RPC call to select_destinations. This
information is required to be able to make the claims/allocations
(which are identified by consumer uuid).

## Traits

The main API is in place. Debate raged on how best to manage updates
of standard os-traits. Eventually a simple sync done once per
process seemed like the way to go, without having a cache:

https://review.openstack.org/#/c/469578/

This needs to address some concurrency issues.

There's also a small cleanup to the os-traits library:

https://review.openstack.org/#/c/469631/

## Shared Resource Providers

The stack that makes the database side of things start to work has
merged:


https://review.openstack.org/#/q/status:merged+topic:bp/shared-resources-pike

This will allow work on the API and resource-tracker/scheduler side
to move along.

## Docs

Lots of placement-related api docs in progress on a few different
topics:

* https://review.openstack.org/#/q/status:open+topic:cd/placement-api-ref
* 
https://review.openstack.org/#/q/status:open+topic:placement-api-ref-add-resource-classes-put
* https://review.openstack.org/#/q/status:open+topic:bp/placement-api-ref

We should a) probably get that stuff on the same topic, b) make sure
work is not being duplicated.

## Nested Resource Providers

Work has resumed on nested resource providers.


https://review.openstack.org/#/q/status:open+topic:bp/nested-resource-providers

Currently having some good review discussion on data structures and
graph traversal and search. It's a bit like being back in school.

## User and Project IDs in Allocations

This will allow placement allocations to be considered when doing
resource accounting for things like quota. User id and project id
information is added to allocation records and a new API resource is
added to be able to get summaries of usage by user or project.

https://review.openstack.org/#/q/topic:bp/placement-project-user

# Other Code/Specs

* https://review.openstack.org/#/c/460147/
   Use DELETE inventories method in report client.

* https://review.openstack.org/#/c/427200/
Add a status check for legacy filters in nova-status.

* https://review.openstack.org/#/c/454426/
Handle new hosts for updating instance info in scheduler
Currently in merge conflict.

* https://review.openstack.org/#/c/453916/
Don't send instance updates from compute if not using filter
scheduler

* https://review.openstack.org/#/q/project:openstack/osc-placement
Work has started on an osc-plugin that can provide a command
line interface to the placement API.
It's quite likely that this code is going to need to be adopted by
someone new.

* https://review.openstack.org/#/c/457636/
   Devstack change to install that plugin. This has two +2, but no
   +W.

* https://review.openstack.org/#/c/469037/
   Cleanups for _schedule_instances()

* https://review.openstack.org/#/c/469047/
  Update placement.rst to link to more specs

* https://review.openstack.org/#/c/469048/
  Provide more information about installing placement

* https://review.openstack.org/#/c/468928/
  Disambiguate resource provider conflict message

* https://review.openstack.org/#/c/468923/
  Adjust resource provider links by microversion

# End

I was unable to go digging for things as much as usual this week due
to other business. If I've missed something, my apologies, please
add it to the thread in a followup.

Your prize is some cornish clotted cream.

--
Chris Dent 

Re: [openstack-dev] [masakari] Intrusive Instance Monitoring

2017-06-02 Thread Waines, Greg
Hey Sam,
Just FYI ... I have updated the intrusive-instance-monitoring spec based on 
comments received from Adam and Vikash.

Greg.

From: Sam P 
Reply-To: "openstack-dev@lists.openstack.org" 

Date: Tuesday, May 30, 2017 at 8:12 AM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [masakari] Intrusive Instance Monitoring

Hi Greg,

Great.. thank you. I will ask people to review this..

--- Regards,
Sampath



On Tue, May 30, 2017 at 9:06 PM, Waines, Greg 
> wrote:
Hey Sam,



Was able to submit the blueprint and spec.



Blueprint:
https://blueprints.launchpad.net/masakari/+spec/intrusive-instance-monitoring

Spec: https://review.openstack.org/#/c/469070/



Greg.



From: Sam P >
Reply-To: 
"openstack-dev@lists.openstack.org"
>
Date: Monday, May 29, 2017 at 10:01 PM
To: 
"openstack-dev@lists.openstack.org" 
>
Subject: Re: [openstack-dev] [masakari] Intrusive Instance Monitoring



Hi Greg,



# Thank you Jeremy..!



I couldn't find any problem with repo side.

As Jeremy pointed out, could you please check the `git remote show gerrit`.



BTW, could you please create a BP in [1] and link it to your spec when

you commit it.

In this way, we could track all the changes related to this task.

Please include the related bp Name in commit massage of your spec as,



Implements: bp name-of-your-bp

# Please refer to open to review spec [2] for more details.

# You may find more details on [3]



[1] https://blueprints.launchpad.net/masakari

[2] https://review.openstack.org/#/c/458023/4//COMMIT_MSG

[3]
https://docs.openstack.org/infra/manual/developers.html#working-on-specifications-and-blueprints

--- Regards,

Sampath







On Tue, May 30, 2017 at 4:39 AM, Jeremy Stanley 
> wrote:

On 2017-05-29 14:48:10 + (+), Waines, Greg wrote:

Was just trying to submit my spec for Intrusive Instance

Monitoring for review.



And I get the following warning after committing when I do the

‘git review’



gwaines@gwaines-VirtualBox:~/openstack/masakari-specs$ git review

You are about to submit multiple commits. This is expected if you are

submitting a commit that is dependent on one or more in-review

commits. Otherwise you should consider squashing your changes into one

commit before submitting.



The outstanding commits are:



f09deee (HEAD -> myBranch) Initial draft specification of Intrusive Instance
Monitoring.

21aeb96 (origin/master, origin/HEAD, master) Prepare specs repository for
Pike

83d1a0a Implement reserved_host, auto_priority and rh_priority recovery
methods

4e746cb Add periodic task to clean up workflow failure

2c10be4 Add spec repo structure

a82016f Added .gitreview



Do you really want to submit the above commits?

Type 'yes' to confirm, other to cancel: no

Aborting.

gwaines@gwaines-VirtualBox:~/openstack/masakari-specs$



Seems like my clone picked up someone else’s open commit ?



Any help would be appreciated,

The full log of my git session is below,

[...]



The output doesn't show any open changes, but rather seems to

indicate that the parent is the commit at the tip of origin/master.

This condition shouldn't normally happen unless Gerrit doesn't

actually know about any of those commits for some reason.



One thing, I notice your `git review -s` output in your log was

empty. Make sure the output of `git remote show gerrit` looks

something like this (obviously with your username in place of mine):



  * remote gerrit

Fetch URL:
ssh://fu...@review.openstack.org:29418/openstack/masakari-specs.git

Push  URL:
ssh://fu...@review.openstack.org:29418/openstack/masakari-specs.git

HEAD branch: master

Remote branch:

  master tracked

Local ref configured for 'git push':

  master pushes to master (up to date)



Using git-review 1.25.0 I attempted to replicate the issue like

this, but everything worked normally:



  fungi@dhole:~/work/openstack/openstack$ git clone
https://github.com/openstack/masakari-specs.git

  Cloning into 'masakari-specs'...

  remote: Counting objects: 61, done.

  remote: Total 61 (delta 0), reused 0 (delta 0), pack-reused 61

  Unpacking objects: 100% (61/61), done.

  fungi@dhole:~/work/openstack/openstack$ cd masakari-specs/

  fungi@dhole:~/work/openstack/openstack/masakari-specs$ git log

  commit 21aeb965acea0b3ebe8448715bb88df4409dd402

  Author: Abhishek Kekane 
>

  Date:   Wed 

Re: [openstack-dev] [neutron] tempest failures when deploying neutron-server in wsgi with apache

2017-06-02 Thread Emilien Macchi
On Thu, Jun 1, 2017 at 10:28 PM, Morales, Victor
 wrote:
> Hi Emilien,
>
> I noticed that the configuration file was created using puppet.  I submitted 
> a patch[1] that was targeting to include the changes in Devstack. My major 
> concern is with the value of WSGIScriptAlias which should be pointing to WSGI 
> script.

Thanks for looking, the script that is used is from
/usr/bin/neutron-api which is I think correct. If you look at logs,
you can see that API actually works but some tempest tests fail
though...

> Regards/Saludos
> Victor Morales
>
> [1] https://review.openstack.org/#/c/439191
>
> On 5/31/17, 4:40 AM, "Emilien Macchi"  wrote:
>
> Hey folks,
>
> I've been playing with deploying Neutron in WSGI with Apache and
> Tempest tests fail on spawning Nova server when creating Neutron
> ports:
> 
> http://logs.openstack.org/89/459489/4/check/gate-puppet-openstack-integration-4-scenario001-tempest-centos-7/f2ee8bf/console.html#_2017-05-30_13_09_22_715400
>
> I haven't found anything useful in neutron-server logs:
> 
> http://logs.openstack.org/89/459489/4/check/gate-puppet-openstack-integration-4-scenario001-tempest-centos-7/f2ee8bf/logs/apache/neutron_wsgi_access_ssl.txt.gz
>
> Before I file a bug in neutron, can anyone look at the logs with me
> and see if I missed something in the config:
> 
> http://logs.openstack.org/89/459489/4/check/gate-puppet-openstack-integration-4-scenario001-tempest-centos-7/f2ee8bf/logs/apache_config/10-neutron_wsgi.conf.txt.gz
>
> Thanks for the help,
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][tc][all] Tempest to reject trademark tests

2017-06-02 Thread Amrith Kumar

> -Original Message-
> From: Sean McGinnis [mailto:sean.mcgin...@gmx.com]
> Sent: Thursday, June 1, 2017 4:48 PM
> To: OpenStack Development Mailing List (not for usage questions)  d...@lists.openstack.org>
> Subject: Re: [openstack-dev] [qa][tc][all] Tempest to reject trademark tests
> 
> >
> > And yes, I agree with the argument that we should be fair and treat
> > all projects the same way. If we're going to move tests out of the
> > tempest repository, we should move all of them. The QA team can still
> > help maintain the test suites for whatever projects they want, even if
> > those tests are in plugins.
> >
> > Doug
> >
> 
> +1

+1


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][tc][all] Tempest to reject trademark tests

2017-06-02 Thread Doug Hellmann
Excerpts from Matthew Treinish's message of 2017-06-01 20:51:24 -0400:
> On Thu, Jun 01, 2017 at 11:57:00AM -0400, Doug Hellmann wrote:
> > Excerpts from Thierry Carrez's message of 2017-06-01 11:51:50 +0200:
> > > Graham Hayes wrote:
> > > > On 01/06/17 01:30, Matthew Treinish wrote:
> > > >> TBH, it's a bit premature to have the discussion. These additional 
> > > >> programs do
> > > >> not exist yet, and there is a governance road block around this. Right 
> > > >> now the
> > > >> set of projects that can be used defcore/interopWG is limited to the 
> > > >> set of 
> > > >> projects in:
> > > >>
> > > >> https://governance.openstack.org/tc/reference/tags/tc_approved-release.html
> > > > 
> > > > Sure - but that is a solved problem, when the interop committee is
> > > > ready to propose them, they can add projects into that tag. Or am I
> > > > misunderstanding [1] (again)?
> > > 
> > > I think you understand it well. The Board/InteropWG should propose
> > > additions/removals of this tag, which will then be approved by the TC:
> > > 
> > > https://governance.openstack.org/tc/reference/tags/tc_approved-release.html#tag-application-process
> > > 
> > > > [...]
> > > >> We had a forum session on it (I can't find the etherpad for the 
> > > >> session) which
> > > >> was pretty speculative because it was about planning the new programs. 
> > > >> Part of
> > > >> that discussion was around the feasibility of using tests in plugins 
> > > >> and whether
> > > >> that would be desirable. Personally, I was in favor of doing that for 
> > > >> some of
> > > >> the proposed programs because of the way they were organized it was a 
> > > >> good fit.
> > > >> This is because the proposed new programs were extra additions on top 
> > > >> of the
> > > >> base existing interop program. But it was hardly a definitive 
> > > >> discussion.
> > > > 
> > > > Which will create 2 classes of testing for interop programs.
> > > 
> > > FWIW I would rather have a single way of doing "tests used in trademark
> > > programs" without differentiating between old and new trademark programs.
> > > 
> > > I fear that we are discussing solutions before defining the problem. We
> > > want:
> > > 
> > > 1- Decentralize test maintenance, through more tempest plugins, to
> > > account for limited QA resources
> > > 2- Additional codereview constraints and approval rules for tests that
> > > happen to be used in trademark programs
> > > 3- Discoverability/ease-of-install of the set of tests that happen to be
> > > used in trademark programs
> > > 4- A git repo layout that can be simply explained, for new teams to
> > > understand
> > > 
> > > It feels like the current git repo layout (result of that 2016-05-04
> > > resolution) optimizes for 2 and 3, which kind of works until you add
> > > more trademark programs, at which point it breaks 1 and 4.
> > > 
> > > I feel like you could get 2 and 3 without necessarily using git repo
> > > boundaries (using Gerrit approval rules and some tooling to install/run
> > > subset of tests across multiple git repos), which would allow you to
> > > optimize git repo layout to get 1 and 4...
> > > 
> > > Or am I missing something ?
> > > 
> > 
> > Right. The point of having the trademark tests "in tempest" was not
> > to have them "in the tempest repo", that was just an implementation
> > detail of the policy of "put them in a repository managed by people
> > who understand the expanded review rules".
> 
> There was more to it than this, a big part was duplication of effort as well.
> Tempest itself is almost a perfect fit for the scope of the testing defcore is
> doing. While tempest does additional testing that defcore doesn't use, a large
> subset is exactly what they want.

That does explain why Tempest was appealing to the DefCore folks.
I was trying to explain my motivation for writing the resolution
saying that we did not want DefCore using tests scattered throughout
a bunch of plugin repositories managed by different reviewer teams.

> > There were a lot of unexpected issues when we started treating the
> > test suite as a production tool for validating a cloud.  We have
> > to be careful about how we change the behavior of tests, for example,
> > even if the API responses are expected to be the same.  It's not
> > fair to vendors or operators who get trademark approval with one
> > release to have significant changes in behavior in the exact same
> > tests for the next release.
> 
> I actually find this to be kinda misleading. Tempest has always had
> running on any cloud as part of it's mission. I think you're referring
> to the monster defcore thread from last summer about proprietary nova 
> extensions
> adding on to API responses. This is honestly a completely separate problem
> which is not something I want to dive into again, because that was a much more
> nuanced problem that involved much more than just code review.

That may have been the situation I'm thinking of, and I agree,

Re: [openstack-dev] [Openstack-operators] [dev] [doc] Operations Guide future

2017-06-02 Thread Alexandra Settle
O I like your thinking – I’m a pandoc fan, so, I’d be interested in moving 
this along using any tools to make it easier.

I think my only proviso (now I’m thinking about it more) is that we still have 
a link on docs.o.o, but it goes to the wiki page for the Ops Guide.

From: Anne Gentle 
Date: Friday, June 2, 2017 at 1:53 PM
To: Alexandra Settle 
Cc: Blair Bethwaite , OpenStack Operators 
, "OpenStack Development Mailing List 
(not for usage questions)" , 
"openstack-d...@lists.openstack.org" 
Subject: Re: [Openstack-operators] [dev] [doc] Operations Guide future

I'm okay with option 3.

Since we hadn't heard from anyone yet who can do the work, I thought I'd 
describe a super small experiment to try. If you're interested in the export, 
run an experiment with Pandoc to convert from RST to Mediawiki. 
http://pandoc.org/demos.html

You'll likely still have cleanup but it's a start. Only convert troubleshooting 
to start, which gets the most hits: 
docs.openstack.org/ops-guide/ops-network-troubleshooting.html
Then see how much you get from Pandoc.

Let us know how it goes, I'm curious!
Anne



On Fri, Jun 2, 2017 at 4:03 AM, Alexandra Settle 
> wrote:
Blair – correct, it was the majority in the room. I just wanted to reach out 
and ensure that operators had a chance to voice opinions and see where we were 
going (

Sounds like option 3 is still the favorable direction. This is going to be a 
really big exercise, lifting the content out of the repos. Are people able to 
help?

Thanks everyone for getting on board (

On 6/2/17, 2:44 AM, "Blair Bethwaite" 
> wrote:

Hi Alex,

Likewise for option 3. If I recall correctly from the summit session
that was also the main preference in the room?

On 2 June 2017 at 11:15, George Mihaiescu 
> wrote:
> +1 for option 3
>
>
>
> On Jun 1, 2017, at 11:06, Alexandra Settle 
> wrote:
>
> Hi everyone,
>
>
>
> I haven’t had any feedback regarding moving the Operations Guide to the
> OpenStack wiki. I’m not taking silence as compliance. I would really like 
to
> hear people’s opinions on this matter.
>
>
>
> To recap:
>
>
>
> Option one: Kill the Operations Guide completely and move the 
Administration
> Guide to project repos.
> Option two: Combine the Operations and Administration Guides (and then 
this
> will be moved into the project-specific repos)
> Option three: Move Operations Guide to OpenStack wiki (for ease of
> operator-specific maintainability) and move the Administration Guide to
> project repos.
>
>
>
> Personally, I think that option 3 is more realistic. The idea for the last
> option is that operators are maintaining operator-specific documentation 
and
> updating it as they go along and we’re not losing anything by combining or
> deleting. I don’t want to lose what we have by going with option 1, and I
> think option 2 is just a workaround without fixing the problem – we are 
not
> getting contributions to the project.
>
>
>
> Thoughts?
>
>
>
> Alex
>
>
>
> From: Alexandra Settle >
> Date: Friday, May 19, 2017 at 1:38 PM
> To: Melvin Hillsman >, 
OpenStack Operators
> 
>
> Subject: Re: [Openstack-operators] Fwd: [openstack-dev] [openstack-doc]
> [dev] What's up doc? Summit recap edition
>
>
>
> Hi everyone,
>
>
>
> Adding to this, I would like to draw your attention to the last dot point 
of
> my email:
>
>
>
> “One of the key takeaways from the summit was the session that I joint
> moderated with Melvin Hillsman regarding the Operations and Administration
> Guides. You can find the etherpad with notes here:
> https://etherpad.openstack.org/p/admin-ops-guides  The session was really
> helpful – we were able to discuss with the operators present the current
> situation of the documentation team, and how they could help us maintain 
the
> two guides, aimed at the same audience. The operator’s present at the
> session agreed that the Administration Guide was important, and could be
> maintained upstream. However, they voted and agreed that the best course 
of
> action for the Operations Guide was for it to be pulled down and put into 
a

Re: [openstack-dev] [tc] Status update, Jun 2

2017-06-02 Thread Sean McGinnis
> 
> == Need for a TC meeting next Tuesday ==
> 
> Based on past discussions, we'll call a meeting on Jun 6 at 20:00 UTC on
> #openstack-meeting to specifically discuss the postgresql question,
> hopefully unblocking the situation and defining a path forward. I'm
> traveling on that day, so I'd like some other TC member to chair and
> moderate that discussion. Let me know if you're interested. Sean
> McGinnis volunteered, but since he might be traveling too it would be
> great to have someone else signed up.
> 

I had to adjust my travel plans, so I will now be around with no conflicts.
I can chair the meeting.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [dev] [doc] Operations Guide future

2017-06-02 Thread Anne Gentle
I'm okay with option 3.

Since we hadn't heard from anyone yet who can do the work, I thought I'd
describe a super small experiment to try. If you're interested in the
export, run an experiment with Pandoc to convert from RST to Mediawiki.
http://pandoc.org/demos.html

You'll likely still have cleanup but it's a start. Only convert
troubleshooting to start, which gets the most hits: docs.openstack.org/
ops-guide/ops-network-troubleshooting.html
Then see how much you get from Pandoc.

Let us know how it goes, I'm curious!
Anne



On Fri, Jun 2, 2017 at 4:03 AM, Alexandra Settle 
wrote:

> Blair – correct, it was the majority in the room. I just wanted to reach
> out and ensure that operators had a chance to voice opinions and see where
> we were going (
>
> Sounds like option 3 is still the favorable direction. This is going to be
> a really big exercise, lifting the content out of the repos. Are people
> able to help?
>
> Thanks everyone for getting on board (
>
> On 6/2/17, 2:44 AM, "Blair Bethwaite"  wrote:
>
> Hi Alex,
>
> Likewise for option 3. If I recall correctly from the summit session
> that was also the main preference in the room?
>
> On 2 June 2017 at 11:15, George Mihaiescu 
> wrote:
> > +1 for option 3
> >
> >
> >
> > On Jun 1, 2017, at 11:06, Alexandra Settle 
> wrote:
> >
> > Hi everyone,
> >
> >
> >
> > I haven’t had any feedback regarding moving the Operations Guide to
> the
> > OpenStack wiki. I’m not taking silence as compliance. I would really
> like to
> > hear people’s opinions on this matter.
> >
> >
> >
> > To recap:
> >
> >
> >
> > Option one: Kill the Operations Guide completely and move the
> Administration
> > Guide to project repos.
> > Option two: Combine the Operations and Administration Guides (and
> then this
> > will be moved into the project-specific repos)
> > Option three: Move Operations Guide to OpenStack wiki (for ease of
> > operator-specific maintainability) and move the Administration Guide
> to
> > project repos.
> >
> >
> >
> > Personally, I think that option 3 is more realistic. The idea for
> the last
> > option is that operators are maintaining operator-specific
> documentation and
> > updating it as they go along and we’re not losing anything by
> combining or
> > deleting. I don’t want to lose what we have by going with option 1,
> and I
> > think option 2 is just a workaround without fixing the problem – we
> are not
> > getting contributions to the project.
> >
> >
> >
> > Thoughts?
> >
> >
> >
> > Alex
> >
> >
> >
> > From: Alexandra Settle 
> > Date: Friday, May 19, 2017 at 1:38 PM
> > To: Melvin Hillsman , OpenStack Operators
> > 
> > Subject: Re: [Openstack-operators] Fwd: [openstack-dev]
> [openstack-doc]
> > [dev] What's up doc? Summit recap edition
> >
> >
> >
> > Hi everyone,
> >
> >
> >
> > Adding to this, I would like to draw your attention to the last dot
> point of
> > my email:
> >
> >
> >
> > “One of the key takeaways from the summit was the session that I
> joint
> > moderated with Melvin Hillsman regarding the Operations and
> Administration
> > Guides. You can find the etherpad with notes here:
> > https://etherpad.openstack.org/p/admin-ops-guides  The session was
> really
> > helpful – we were able to discuss with the operators present the
> current
> > situation of the documentation team, and how they could help us
> maintain the
> > two guides, aimed at the same audience. The operator’s present at the
> > session agreed that the Administration Guide was important, and
> could be
> > maintained upstream. However, they voted and agreed that the best
> course of
> > action for the Operations Guide was for it to be pulled down and put
> into a
> > wiki that the operators could manage themselves. We will be looking
> at
> > actioning this item as soon as possible.”
> >
> >
> >
> > I would like to go ahead with this, but I would appreciate feedback
> from
> > operators who were not able to attend the summit. In the etherpad
> you will
> > see the three options that the operators in the room recommended as
> being
> > viable, and the voted option being moving the Operations Guide out of
> > docs.openstack.org into a wiki. The aim of this was to empower the
> > operations community to take more control of the updates in an
> environment
> > they are more familiar with (and available to others).
> >
> >
> >
> > What does everyone think of the proposed options? Questions? Other
> thoughts?
> 

[openstack-dev] [IGNORE] Test Mail

2017-06-02 Thread Narendra Pal Singh
-- 
Best Regards,
NPS
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

2017-06-02 Thread Ricardo Rocha
Hi Hongbin.

Regarding your comments below, some quick clarifications for people
less familiar with Magnum.

1. Rexray / Cinder integration

- Magnum uses an alpine based rexray image, compressed size is 33MB
(the download size), so pretty good
- Deploying a full Magnum cluster of 128 nodes takes less than 5
minutes in our production environment. The issue you mention only
exists in the upstream builds and is valid for all container images,
and is due to nodes in infra having a combination of non-nested
virtualization and/or slow connectivity (there were several attempts
to fix this)
- Not sure about mystery bugs, but the ones we found were fixed by Mathieu:
https://github.com/codedellemc/libstorage/pull/243

2. Enterprise ready

Certainly this means different things for different people, at CERN we
run ~80 clusters in our production service covering many use cases.
Magnum currently lacks the ability to properly upgrade the COE version
for running clusters, which is a problem for long lived services
(which are not the majority of our use cases today). This is the main
focus on the currently cycle.

Hope this adds some relevant information.

Cheers,
  Ricardo

On Wed, May 31, 2017 at 5:55 PM, Hongbin Lu  wrote:
> Please find my replies inline.
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> From: Spyros Trigazis [mailto:strig...@gmail.com]
> Sent: May-30-17 9:56 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for
> Fuxi-golang
>
>
>
>
>
>
>
> On 30 May 2017 at 15:26, Hongbin Lu  wrote:
>
> Please consider leveraging Fuxi instead.
>
>
>
> Is there a missing functionality from rexray?
>
>
>
> [Hongbin Lu] From my understanding, Rexray targets on the overcloud use
> cases and assumes that containers are running on top of nova instances. You
> mentioned Magnum is leveraging Rexray for Cinder integration. Actually, I am
> the core reviewer who reviewed and approved those Rexray patches. From what
> I observed, the functionalities provided by Rexray are minimal. What it was
> doing is simply calling Cinder API to search an existing volume, attach the
> volume to the Nova instance, and let docker to bind-mount the volume to the
> container. At the time I was testing it, it seems to have some mystery bugs
> that prevented me to get the cluster to work. It was packaged by a large
> container image, which might take more than 5 minutes to pull down. With
> that said, Rexray might be a choice for someone who are looking for cross
> cloud-providers solution. Fuxi will focus on OpenStack and targets on both
> overcloud and undercloud use cases. That means Fuxi can work with
> Nova+Cinder or a standalone Cinder. As John pointed out in another reply,
> another benefit of Fuxi is to resolve the fragmentation problem of existing
> solutions. Those are the differentiators of Fuxi.
>
>
>
> Kuryr/Fuxi team is working very hard to deliver the docker network/storage
> plugins. I wish you will work with us to get them integrated with
> Magnum-provisioned cluster.
>
>
>
> Patches are welcome to support fuxi as an *option* instead of rexray, so
> users can choose.
>
>
>
> Currently, COE clusters provisioned by Magnum is far away from
> enterprise-ready. I think the Magnum project will be better off if it can
> adopt Kuryr/Fuxi which will give you a better OpenStack integration.
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> fuxi feature request: Add authentication using a trustee and a trustID.
>
>
>
> [Hongbin Lu] I believe this is already supported.
>
>
>
> Cheers,
> Spyros
>
>
>
>
>
> From: Spyros Trigazis [mailto:strig...@gmail.com]
> Sent: May-30-17 7:47 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for
> Fuxi-golang
>
>
>
> FYI, there is already a cinder volume driver for docker available, written
>
> in golang, from rexray [1].
>
>
> Our team recently contributed to libstorage [3], it could support manila
> too. Rexray
> also supports the popular cloud providers.
>
> Magnum's docker swarm cluster driver, already leverages rexray for cinder
> integration. [2]
>
> Cheers,
> Spyros
>
>
>
> [1] https://github.com/codedellemc/rexray/releases/tag/v0.9.0
>
> [2] https://github.com/codedellemc/libstorage/releases/tag/v0.6.0
>
> [3]
> http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/templates/swarm/fragments/volume-service.sh?h=stable/ocata
>
>
>
> On 27 May 2017 at 12:15, zengchen  wrote:
>
> Hi John & Ben:
>
>  I have committed a patch[1] to add a new repository to Openstack. Please
> take a look at it. Thanks very much!
>
>
>
>  [1]: https://review.openstack.org/#/c/468635
>
>
>
> Best Wishes!
>
> zengchen
>
>
>
>
> 在 2017-05-26 21:30:48,"John Griffith"  写道:
>
>
>
>
>
> On Thu, May 25, 2017 at 10:01 PM, zengchen  wrote:
>
>
>
> 

[openstack-dev] [tc] Status update, Jun 2

2017-06-02 Thread Thierry Carrez
Hi!

This new regular email will give you an update on the status of a number
of TC-proposed governance changes, in an attempt to rely less on a
weekly meeting to convey that information.

You can find the full status list of open topics at:
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee


== Recently-approved changes ==

* Add Etcd as a base service [1]
* Add missing api document urls for some projects [2]
* Adjust governance to depend less on TC meetings for decisions [3] [4]
* New git repositories: zuul-base-jobs, zuul-jobs, ansible-hardening,
ldappool, charm-glusterfs, charm-manila-glusterfs
* Pike goal responses for Freezer, Karbor, Infra, Searchlight, Cinder

[1] https://review.openstack.org/#/c/467436/
[2] https://review.openstack.org/#/c/468080/
[3] https://review.openstack.org/#/c/467255/
[4] https://review.openstack.org/#/c/463141/

Adding etcd to base services means that OpenStack services will be able
to depend on etcd being present in OpenStack installations (as much as
they can rely on a database, Keystone or a message queue being present).
Read more at
https://governance.openstack.org/tc/reference/base-services.html


== Open discussions ==

The policy discussion on binary images publication is in full swing. Two
versions were posted, although the second one is limited to Kolla
containers and therefore likely to be abandoned in favor of the first one:

* Guidelines for managing releases of binary artifacts [5]
* Resolution regarding container images [6]

[5] https://review.openstack.org/#/c/469265/
[6] https://review.openstack.org/#/c/469248/

A number of potential goals for Queens have been posted (as reviews or
ML threads) for initial discussion as well:

* Discovery alignment, with two options: [7] [8]
* Policy and docs in code [9]
* Migrate off paste [10]
* Continuing Python 3.5+ Support​ [11]

[7] https://review.openstack.org/#/c/468436/
[8] https://review.openstack.org/#/c/468437/
[9] https://review.openstack.org/#/c/469954/
[10] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117747.html
[11] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117746.html


== Voting in progress ==

The Top 5 help wanted list seems ready to move from discussion to
voting. It now includes an example followup patch:

* Introduce Top 5 help wanted list [12]
* Add "Doc owners" to top-5 wanted list [13]

[12] https://review.openstack.org/#/c/466684/
[13] https://review.openstack.org/#/c/469115/

We also have two items that reached majority over the past days and will
be approved on Monday unless the votes swing the other way:

* Introduce office hours [14]
* Establish a #openstack-tc water cooler channel [15]
* Add etherpad link with detailed steps to split-tempest-plugins goal [16]

[14] https://review.openstack.org/#/c/467256/
[15] https://review.openstack.org/#/c/467386/
[16] https://review.openstack.org/#/c/468972/


== Blocked items ==

The discussion around postgresql support in OpenStack is still blocked,
with two proposals up:

* Declare plainly the current state of Posgresql in OpenStack [17]
* Document lack of postgresql support [18]

[17] https://review.openstack.org/#/c/427880/
[18] https://review.openstack.org/465589

A TC meeting on Jun 6 will be called to discuss this item specifically.

The Gluon team could still use a TC member mentor/sponsor to help them
navigate the OpenStack seas as they engage to become an official
project. Any volunteer ?


== TC member actions for the coming week(s) ==

johnthetubaguy, cdent, dtroyer to continue distill TC vision feedback
into actionable points (and split between cosmetic and significant
changes) [https://review.openstack.org/453262]

johnthetubaguy to update "Describe what upstream support means" with a
new revision [https://review.openstack.org/440601]

johnthetubaguy to update "Decisions should be globally inclusive" with a
new revision [https://review.openstack.org/460946]

flaper87 to update "Drop Technical Committee meetings" with a new
revision [https://review.openstack.org/459848]

ttx to communicate results of the 2017 contributor attrition stats
analysis he did


== Need for a TC meeting next Tuesday ==

Based on past discussions, we'll call a meeting on Jun 6 at 20:00 UTC on
#openstack-meeting to specifically discuss the postgresql question,
hopefully unblocking the situation and defining a path forward. I'm
traveling on that day, so I'd like some other TC member to chair and
moderate that discussion. Let me know if you're interested. Sean
McGinnis volunteered, but since he might be traveling too it would be
great to have someone else signed up.


Thanks everyone!

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ptls][all] Potential Queens Goal: Move policy and policy docs into code

2017-06-02 Thread Emilien Macchi
On Thu, Jun 1, 2017 at 7:54 PM, Lance Bragstad  wrote:
> Hi all,
>
> I've proposed a community-wide goal for Queens to move policy into code and
> supply documentation for each policy [0]. I've included references to
> existing documentation and specifications completed by various projects and
> attempted to lay out the benefits for both developers and operators.
>
> I'd greatly appreciate any feedback or discussion.

I would rewrite what I said in the patch, but thanks for proposing it.
I like this one in a sense it will help our operators to control
policies and make their configuration easier and more consistent.

> Thanks!
>
> Lance
>
>
> [0] https://review.openstack.org/#/c/469954/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev][DevStack][Neutron] devstack install - need help on local.conf settings

2017-06-02 Thread Ganpat Agarwal
Hi Nidhi,

Try this :

*Set up the network environment on the host so that devstack VMs can access
the external world.*

*sudo bash*
*echo 1 > /proc/sys/net/ipv4/ip_forward*
*echo 1 > /proc/sys/net/ipv4/conf/INTERFACE/proxy_arp*
*iptables -t nat -A POSTROUTING -o INTERFACE -j MASQUERADE*

*These command will make sure that network traffic will be correctly routed
in and out of the devstack VMs.*
*The ip_forward and proxy_arp changes will be reset when the machice
reboots. You can make these changes permanent by editing /etc/sysctl.conf
and adding the following lines:*

*net.ipv4.conf.INTERFACE.proxy_arp = 1*
*net.ipv4.ip_forward = 1*

*Replace INTERFACE with ethernet interface of your virtual box.*

Hope it will work as it works always for me.

Regards,
Ganpat

On Fri, Jun 2, 2017 at 3:01 PM, nidhi.h...@wipro.com 
wrote:

> Hello all,
>
>
>
> I am using http://paste.openstack.org/show/595339/ as my local.conf.
>
>
>
> *I wanted to understand :- Which interface should we put as value in *
>
> *PUBLIC_INTERFACE in local.conf.*
>
>
>
>
>
> *Reason why I am asking this is,*
>
>
>
> Once, I installed OpenStack using DevStack, on my linux VM on
>
> VirtualBox - I used PUBLIC_INTERFACE value as eth0
>
> and
>
> I configured only one network adapter on VM in NAT mode.
>
>
>
> Later on I faced lot of networking problems in that OpenStack VM.
>
> Internet wasn’t accessible suddenly and many other probs.
>
>
>
> I debugged and somehow found eth0 was added in
>
> One ovs-bridge and if I remove eth0 from that bridge -
>
> only then internet in my VM used to work well.
>
>
>
> Then I doubted PUBLIC_INTERFACE value in local.conf
>
> is something I should setup correctly..
>
>
>
> Could not find much help on this from google.
>
>
>
> Can someone please enlighten?
>
>
>
> Thanks
>
> Nidhi
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> *From:* Nidhi Mittal Hada (Product Engineering Service)
> *Sent:* Wednesday, January 18, 2017 3:49 PM
> *To:* openstack-dev@lists.openstack.org
> *Subject:* Re: [openstack-dev] [OpenStack-Dev][DevStack][Neutron] facing
> problem in devstack install - No Network found for private
>
>
>
> Hi Andreas,
>
>
>
> As in between you suggested to try with default devstack
>
> neutron config params. I tried that i set no config option
>
> for neutron part all default.
>
>
>
> This local.conf is working well..
>
>
>
> for others who are facing problem pasting working local.conf here
>
> http://paste.openstack.org/show/595339/
>
>
>
> Attaching too.
>
>
>
> Thanks
>
> Nidhi
>
>
>
>
>
>
>
>
> --
>
> *From:* Nidhi Mittal Hada (Product Engineering Service)
> *Sent:* Wednesday, January 18, 2017 2:44 PM
> *To:* openstack-dev@lists.openstack.org
> *Subject:* Re: [openstack-dev] [OpenStack-Dev][DevStack][Neutron] facing
> problem in devstack install - No Network found for private
>
>
>
> Andreas,
>
>
>
> I require nothing specific from neutron side.
>
> Just a basic working setup from neutron side
>
> because my work is mostly on storage side of
>
> OpenStack.
>
>
>
> Can you please suggest a working configuration
>
> if  tried recently.
>
>
>
> Thanks
>
> nidhi
>
>
> --
>
> *From:* Nidhi Mittal Hada (Product Engineering Service)
> *Sent:* Wednesday, January 18, 2017 2:35:13 PM
> *To:* openstack-dev@lists.openstack.org
> *Subject:* Re: [openstack-dev] [OpenStack-Dev][DevStack][Neutron] facing
> problem in devstack install - No Network found for private
>
>
>
> HI Andreas,
>
>
>
> Thanks for your reply.
>
>
>
> I have no specific reason for using this network configuration in
> local.conf.
>
> I have only basic knowledge of these config options even.
>
> This local.conf network configurations used to work well with earlier
>
> devstack openstack versions. So i did not change it..
>
> Just this time its creating trouble.
>
>
>
> I have not created any ovs bridge manually  before running devstack.
>
> just created this local.conf and ran ./stack.sh in devstack folder.
>
>
>
> Can you please suggest changes if i have not created ovs-bridge manually.
>
>
>
> *At present my settings are - from local.conf - for reference - *
>
> FIXED_RANGE=10.11.12.0/24
>
> NETWORK_GATEWAY=10.11.12.1
>
> FIXED_NETWORK_SIZE=256
>
>
>
> FLOATING_RANGE=10.0.2.0/24
>
> Q_FLOATING_ALLOCATION_POOL=start=10.0.2.104,end=10.0.2.111
>
> PUBLIC_NETWORK_GATEWAY=10.0.2.1
>
> HOST_IP=10.0.2.15
>
>
>
> PUBLIC_INTERFACE=eth0
>
>
>
> Q_USE_SECGROUP=True
>
> ENABLE_TENANT_VLANS=True
>
> TENANT_VLAN_RANGE=1000:1999
>
> PHYSICAL_NETWORK=default
>
> OVS_PHYSICAL_BRIDGE=br-ex
>
>
>
>
>
> Q_USE_PROVIDER_NETWORKING=True
>
> Q_L3_ENABLED=False
>
>
>
> PROVIDER_SUBNET_NAME="provider_net"
>
> PROVIDER_NETWORK_TYPE="vlan"
>
> SEGMENTATION_ID=2010
>
>
>
>
>
>
>
>
>
>
>
> Thanks
>
> Nidhi
>
>
>
>
> --
>
> *From:* Andreas Scheuring 
> *Sent:* Wednesday, January 18, 2017 1:09:17 PM
> *To:* openstack-dev@lists.openstack.org
> *Subject:* Re: 

Re: [openstack-dev] [OpenStack-Dev][DevStack][Neutron] devstack install - need help on local.conf settings

2017-06-02 Thread nidhi.h...@wipro.com
Hello all,

I am using http://paste.openstack.org/show/595339/ as my local.conf.

I wanted to understand :- Which interface should we put as value in
PUBLIC_INTERFACE in local.conf.


Reason why I am asking this is,

Once, I installed OpenStack using DevStack, on my linux VM on
VirtualBox - I used PUBLIC_INTERFACE value as eth0
and
I configured only one network adapter on VM in NAT mode.

Later on I faced lot of networking problems in that OpenStack VM.
Internet wasn't accessible suddenly and many other probs.

I debugged and somehow found eth0 was added in
One ovs-bridge and if I remove eth0 from that bridge -
only then internet in my VM used to work well.

Then I doubted PUBLIC_INTERFACE value in local.conf
is something I should setup correctly..

Could not find much help on this from google.

Can someone please enlighten?

Thanks
Nidhi









From: Nidhi Mittal Hada (Product Engineering Service)
Sent: Wednesday, January 18, 2017 3:49 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack-Dev][DevStack][Neutron] facing problem 
in devstack install - No Network found for private


Hi Andreas,



As in between you suggested to try with default devstack

neutron config params. I tried that i set no config option

for neutron part all default.



This local.conf is working well..



for others who are facing problem pasting working local.conf here

http://paste.openstack.org/show/595339/



Attaching too.



Thanks

Nidhi








From: Nidhi Mittal Hada (Product Engineering Service)
Sent: Wednesday, January 18, 2017 2:44 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack-Dev][DevStack][Neutron] facing problem 
in devstack install - No Network found for private


Andreas,



I require nothing specific from neutron side.

Just a basic working setup from neutron side

because my work is mostly on storage side of

OpenStack.



Can you please suggest a working configuration

if  tried recently.



Thanks

nidhi




From: Nidhi Mittal Hada (Product Engineering Service)
Sent: Wednesday, January 18, 2017 2:35:13 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack-Dev][DevStack][Neutron] facing problem 
in devstack install - No Network found for private


HI Andreas,



Thanks for your reply.



I have no specific reason for using this network configuration in local.conf.

I have only basic knowledge of these config options even.

This local.conf network configurations used to work well with earlier

devstack openstack versions. So i did not change it..

Just this time its creating trouble.



I have not created any ovs bridge manually  before running devstack.

just created this local.conf and ran ./stack.sh in devstack folder.



Can you please suggest changes if i have not created ovs-bridge manually.



At present my settings are - from local.conf - for reference -
FIXED_RANGE=10.11.12.0/24
NETWORK_GATEWAY=10.11.12.1
FIXED_NETWORK_SIZE=256

FLOATING_RANGE=10.0.2.0/24
Q_FLOATING_ALLOCATION_POOL=start=10.0.2.104,end=10.0.2.111
PUBLIC_NETWORK_GATEWAY=10.0.2.1
HOST_IP=10.0.2.15

PUBLIC_INTERFACE=eth0

Q_USE_SECGROUP=True
ENABLE_TENANT_VLANS=True
TENANT_VLAN_RANGE=1000:1999
PHYSICAL_NETWORK=default
OVS_PHYSICAL_BRIDGE=br-ex


Q_USE_PROVIDER_NETWORKING=True
Q_L3_ENABLED=False

PROVIDER_SUBNET_NAME="provider_net"
PROVIDER_NETWORK_TYPE="vlan"
SEGMENTATION_ID=2010









Thanks

Nidhi






From: Andreas Scheuring 
>
Sent: Wednesday, January 18, 2017 1:09:17 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack-Dev][DevStack][Neutron] facing problem 
in devstack install - No Network found for private

** This mail has been sent from an external source **

Without looking into the details

you're specifying
Q_USE_PROVIDER_NETWORKING=True
in your local.conf - usually this results in the creation of a single
provider network called "public". But the manila devstack plugin seems
not to be able to deal with provider networks as it's expecting a
network named "private" to be present.


Why are you using provider networks? Just for sake of VLANs? You can
also configure devstack to use vlans with the default setup. This has
worked for me in the past - results in a private network using vlans
(assuming you have created ovs b bridge br-data manually):


OVS_PHYSICAL_BRIDGE=br-data
PHYSICAL_NETWORK=phys-data

ENABLE_TENANT_TUNNELS=False
Q_ML2_TENANT_NETWORK_TYPE=vlan
ENABLE_TENANT_VLANS=True
TENANT_VLAN_RANGE=1000:1000




--
-
Andreas
IRC: andreas_s



On Mi, 2017-01-18 at 06:59 +, 
nidhi.h...@wipro.com wrote:
> Hi All,
>
>
> I was trying to install latest Newton version of OpenStack using
> 

Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-06-02 Thread Emilien Macchi
On Thu, Jun 1, 2017 at 4:38 PM, Thierry Carrez  wrote:
> Thierry Carrez wrote:
>> In a previous thread[1] I introduced the idea of moving the PTG from a
>> purely horizontal/vertical week split to a more
>> inter-project/intra-project activities split, and the initial comments
>> were positive.
>>
>> We need to solidify how the week will look like before we open up
>> registration (first week of June), so that people can plan their
>> attendance accordingly. Based on the currently-signed-up teams and
>> projected room availability, I built a strawman proposal of how that
>> could look:
>>
>> https://docs.google.com/spreadsheets/d/1xmOdT6uZ5XqViActr5sBOaz_mEgjKSCY7NEWcAEcT-A/pubhtml?gid=397241312=true
>
> OK, it looks like the feedback on this strawman proposal was generally
> positive, so we'll move on with this.
>
> For teams that are placed on the Wednesday-Friday segment, please let us
> know whether you'd like to make use of the room on Friday (pick between
> 2 days or 3 days). Note that it's not a problem if you do (we have space
> booked all through Friday) and this can avoid people leaving too early
> on Thursday afternoon. We just need to know how many rooms we might be
> able to free up early.

For TripleO, Friday would be good (at least the morning) but I also
think 2 days would be enough in case we don't have enough space.

- So let's book Wednesday / Thursday / Friday.
- We probably won't have anything on Friday afternoon, since I expect
people traveling usually at this time.
- If not enough room, no worries, we can have Wednesday / Thursday
only, we'll survive for sure.

Thanks,

> In the same vein, if your team (or workgroup, or inter-project goal) is
> not yet listed and you'd like to have a room in Denver, let us know ASAP.
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [tc] [all] more tempest plugins (was Re: [tc] [all] TC Report 22)

2017-06-02 Thread Chris Dent

On Thu, 1 Jun 2017, Matthew Treinish wrote:


On Thu, Jun 01, 2017 at 11:09:56AM +0100, Chris Dent wrote:

A lot of this results, in part, from there being no single guiding
pattern and principle for how (and where) the tests are to be
managed.


It sounds like you want to write a general testing guide for openstack.
Have you started this effort anywhere? I don't think anyone would be opposed
to starting a document for that, it seems like a reasonable thing to have.
But, I think you'll find there is not a one size fits all solution though,
because every project has their own requirements and needs for testing.


No, I haven't made any decisions about what ought to happen. I'm
still trying to figure out if there is a problem, a suite of
problems, or everything is great. Knowing what the problems are
tends to be a reasonable thing to do before proposing or
implementing solutions, especially if we want those solutions to be
most correct.


So have you read the documentation:

https://docs.openstack.org/developer/tempest/ (or any of the other relevant
documentation

and filed bugs about where you think there are gaps? This is something that
really bugs me sometimes (yes the pun is intended) just like anything else this
is all about iterative improvements. These broad trends are things tempest
and (every project hopefully) have been working on. But improvements don't
just magically occur overnight it takes time to implement them.


This is a huge part of the colllaboration issues I was identifying
in my previous message. Somebody says "there seems to be some
confusion here" and somebody else comes along and asks "have you
filed bugs?" or "have you proposed a solution?".

Well, "no" because like I said above I don't know what (or even _if_)
there's something to fix or the relevant foundations of the confusion.

I have some suspicions or concerns that the implicit hierarchy of
some tempests tests being in plugins and some not creates issues
with discovery, management and identification of responsible parties
and _may_ imply a lack of "level playing field".

But:

* if other people don't have those concerns it's not worth
  pursuing
* until we reach some kind of shared understanding and agreement
  about the concerns, speculating about solutions is premature


Just compare the state of the documentation and tooling from 2 years ago (when
tempest started adding the plugin interface) to today. Things have steadily
improved over time and the situation now is much better. This will continue and
in the future things will get even better.


Yes, it's great. If you feel like I was suggesting otherwise, then
my apologies for not being clear. As a general rule tempest and
other QA tools have consistently done great work in terms of
documentation and tooling. That there are plugins at all is
fantastic; that we are having discussions about how to make the most
effective and fair use of them is a sign that they work.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [dev] [doc] Operations Guide future

2017-06-02 Thread Alexandra Settle
Blair – correct, it was the majority in the room. I just wanted to reach out 
and ensure that operators had a chance to voice opinions and see where we were 
going (

Sounds like option 3 is still the favorable direction. This is going to be a 
really big exercise, lifting the content out of the repos. Are people able to 
help?

Thanks everyone for getting on board (

On 6/2/17, 2:44 AM, "Blair Bethwaite"  wrote:

Hi Alex,

Likewise for option 3. If I recall correctly from the summit session
that was also the main preference in the room?

On 2 June 2017 at 11:15, George Mihaiescu  wrote:
> +1 for option 3
>
>
>
> On Jun 1, 2017, at 11:06, Alexandra Settle  wrote:
>
> Hi everyone,
>
>
>
> I haven’t had any feedback regarding moving the Operations Guide to the
> OpenStack wiki. I’m not taking silence as compliance. I would really like 
to
> hear people’s opinions on this matter.
>
>
>
> To recap:
>
>
>
> Option one: Kill the Operations Guide completely and move the 
Administration
> Guide to project repos.
> Option two: Combine the Operations and Administration Guides (and then 
this
> will be moved into the project-specific repos)
> Option three: Move Operations Guide to OpenStack wiki (for ease of
> operator-specific maintainability) and move the Administration Guide to
> project repos.
>
>
>
> Personally, I think that option 3 is more realistic. The idea for the last
> option is that operators are maintaining operator-specific documentation 
and
> updating it as they go along and we’re not losing anything by combining or
> deleting. I don’t want to lose what we have by going with option 1, and I
> think option 2 is just a workaround without fixing the problem – we are 
not
> getting contributions to the project.
>
>
>
> Thoughts?
>
>
>
> Alex
>
>
>
> From: Alexandra Settle 
> Date: Friday, May 19, 2017 at 1:38 PM
> To: Melvin Hillsman , OpenStack Operators
> 
> Subject: Re: [Openstack-operators] Fwd: [openstack-dev] [openstack-doc]
> [dev] What's up doc? Summit recap edition
>
>
>
> Hi everyone,
>
>
>
> Adding to this, I would like to draw your attention to the last dot point 
of
> my email:
>
>
>
> “One of the key takeaways from the summit was the session that I joint
> moderated with Melvin Hillsman regarding the Operations and Administration
> Guides. You can find the etherpad with notes here:
> https://etherpad.openstack.org/p/admin-ops-guides  The session was really
> helpful – we were able to discuss with the operators present the current
> situation of the documentation team, and how they could help us maintain 
the
> two guides, aimed at the same audience. The operator’s present at the
> session agreed that the Administration Guide was important, and could be
> maintained upstream. However, they voted and agreed that the best course 
of
> action for the Operations Guide was for it to be pulled down and put into 
a
> wiki that the operators could manage themselves. We will be looking at
> actioning this item as soon as possible.”
>
>
>
> I would like to go ahead with this, but I would appreciate feedback from
> operators who were not able to attend the summit. In the etherpad you will
> see the three options that the operators in the room recommended as being
> viable, and the voted option being moving the Operations Guide out of
> docs.openstack.org into a wiki. The aim of this was to empower the
> operations community to take more control of the updates in an environment
> they are more familiar with (and available to others).
>
>
>
> What does everyone think of the proposed options? Questions? Other 
thoughts?
>
>
>
> Alex
>
>
>
> From: Melvin Hillsman 
> Date: Friday, May 19, 2017 at 1:30 PM
> To: OpenStack Operators 
> Subject: [Openstack-operators] Fwd: [openstack-dev] [openstack-doc] [dev]
> What's up doc? Summit recap edition
>
>
>
>
>
> -- Forwarded message --
> From: Alexandra Settle 
> Date: Fri, May 19, 2017 at 6:12 AM
> Subject: [openstack-dev] [openstack-doc] [dev] What's up doc? Summit recap
> edition
> To: "openstack-d...@lists.openstack.org"
> 
> Cc: "OpenStack Development Mailing List (not for usage questions)"
> 
>
>
> Hi everyone,
>
>
> The OpenStack 

[openstack-dev] [release] Release countdown for week R-12, June 05-09

2017-06-02 Thread Thierry Carrez
Welcome to our regular release countdown email!

Development Focus
-

Teams should be wrapping up Pike-2 work.

Actions
---

Next week is the Pike-2 deadline for cycle-with-milestones projects.
That means that before EOD on Thursday, all milestone-driven projects
should propose a SHA for Pike-2 as a change to the openstack/releases
repository. As a reminder, Pike-2 versions should look like "P.0.0.0b2"
where P = Ocata version + 1. So if your Ocata release was "5.0.0",
Pike-2 should be "6.0.0.0b2".

Pike-2 also marks the deadline for introducing new release deliverables
to be a part of the Pike release cycle. If you have new deliverables
that haven't been released yet in the Pike cycle, you should request a
release before Thursday EOD.

Finally, libraries (following the cycle-with-intermediary model) should
have at least one Pike release before the Pike-2 deadline. This is
necessary so that the release team has something to fall back to (and
branch from) around Pike-3 if the project fails to deliver a new library
release by then.

At this time, that means the following libraries should release in the
coming week:

glance-store, instack, networking-hyperv, pycadf, python-barbicanclient,
python-ceilometerclient, python-cloudkittyclient, python-congressclient,
python-designateclient, python-karborclient, python-keystoneclient,
python-magnumclient, python-muranoclient, python-searchlightclient,
python-swiftclient, python-tackerclient, python-vitrageclient, and
requestsexceptions.

Upcoming Deadlines & Dates
--

Pike-2 milestone: June 8
Queens PTG in Denver: Sept 11-15

-- 
Thierry Carrez (ttx)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][Blazar] steps to big-tent project

2017-06-02 Thread Thierry Carrez
Masahito MUROI wrote:
> Blazar team is thinking to push a request about adding Blazar project
> into the OpenStack BigTent.
> 
> Based on documents in the governance repository[1], what the team is
> required to do for the request is just adding project's definition to
> references/projects.yaml. Is there another thing to do as Blazar team?

Not really. Make sure you read:

The requirements for new project teams:
https://governance.openstack.org/tc/reference/new-projects-requirements.html

The guiding principles:
https://governance.openstack.org/tc/reference/principles.html

The Project Team Guide:
https://docs.openstack.org/project-team-guide/

I also wrote a blog post to outline the process, that you might be
interested in reading:
https://ttx.re/create-official-openstack-project.html

Regards,

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][tc][all] Tempest to reject trademark tests

2017-06-02 Thread Masayuki Igawa
On Fri, Jun 2, 2017, at 09:51 AM, Matthew Treinish wrote:
> On Thu, Jun 01, 2017 at 11:57:00AM -0400, Doug Hellmann wrote:
> > Excerpts from Thierry Carrez's message of 2017-06-01 11:51:50 +0200:
> > > Graham Hayes wrote:
> > > > On 01/06/17 01:30, Matthew Treinish wrote:
> > > >> TBH, it's a bit premature to have the discussion. These additional 
> > > >> programs do
> > > >> not exist yet, and there is a governance road block around this. Right 
> > > >> now the
> > > >> set of projects that can be used defcore/interopWG is limited to the 
> > > >> set of 
> > > >> projects in:
> > > >>
> > > >> https://governance.openstack.org/tc/reference/tags/tc_approved-release.html
> > > > 
> > > > Sure - but that is a solved problem, when the interop committee is
> > > > ready to propose them, they can add projects into that tag. Or am I
> > > > misunderstanding [1] (again)?
> > > 
> > > I think you understand it well. The Board/InteropWG should propose
> > > additions/removals of this tag, which will then be approved by the TC:
> > > 
> > > https://governance.openstack.org/tc/reference/tags/tc_approved-release.html#tag-application-process
> > > 
> > > > [...]
> > > >> We had a forum session on it (I can't find the etherpad for the 
> > > >> session) which
> > > >> was pretty speculative because it was about planning the new programs. 
> > > >> Part of
> > > >> that discussion was around the feasibility of using tests in plugins 
> > > >> and whether
> > > >> that would be desirable. Personally, I was in favor of doing that for 
> > > >> some of
> > > >> the proposed programs because of the way they were organized it was a 
> > > >> good fit.
> > > >> This is because the proposed new programs were extra additions on top 
> > > >> of the
> > > >> base existing interop program. But it was hardly a definitive 
> > > >> discussion.
> > > > 
> > > > Which will create 2 classes of testing for interop programs.
> > > 
> > > FWIW I would rather have a single way of doing "tests used in trademark
> > > programs" without differentiating between old and new trademark programs.
> > > 
> > > I fear that we are discussing solutions before defining the problem. We
> > > want:
> > > 
> > > 1- Decentralize test maintenance, through more tempest plugins, to
> > > account for limited QA resources
> > > 2- Additional codereview constraints and approval rules for tests that
> > > happen to be used in trademark programs
> > > 3- Discoverability/ease-of-install of the set of tests that happen to be
> > > used in trademark programs
> > > 4- A git repo layout that can be simply explained, for new teams to
> > > understand
> > > 
> > > It feels like the current git repo layout (result of that 2016-05-04
> > > resolution) optimizes for 2 and 3, which kind of works until you add
> > > more trademark programs, at which point it breaks 1 and 4.
> > > 
> > > I feel like you could get 2 and 3 without necessarily using git repo
> > > boundaries (using Gerrit approval rules and some tooling to install/run
> > > subset of tests across multiple git repos), which would allow you to
> > > optimize git repo layout to get 1 and 4...
> > > 
> > > Or am I missing something ?
> > > 
> > 
> > Right. The point of having the trademark tests "in tempest" was not
> > to have them "in the tempest repo", that was just an implementation
> > detail of the policy of "put them in a repository managed by people
> > who understand the expanded review rules".
> 
> There was more to it than this, a big part was duplication of effort as
> well.
> Tempest itself is almost a perfect fit for the scope of the testing
> defcore is
> doing. While tempest does additional testing that defcore doesn't use, a
> large
> subset is exactly what they want.
> 
> > 
> > There were a lot of unexpected issues when we started treating the
> > test suite as a production tool for validating a cloud.  We have
> > to be careful about how we change the behavior of tests, for example,
> > even if the API responses are expected to be the same.  It's not
> > fair to vendors or operators who get trademark approval with one
> > release to have significant changes in behavior in the exact same
> > tests for the next release.
> 
> I actually find this to be kinda misleading. Tempest has always had
> running on any cloud as part of it's mission. I think you're referring
> to the monster defcore thread from last summer about proprietary nova
> extensions
> adding on to API responses. This is honestly a completely separate
> problem
> which is not something I want to dive into again, because that was a much
> more
> nuanced problem that involved much more than just code review.
> 
> > 
> > At the early stage, when the DefCore team was still figuring out
> > these issues, it made sense to put all of the tests in one place
> > with a review team that was actively participating in establishing
> > the process. If we better understand the "rules" for these tests
> > now, we can document them and distribute 

[openstack-dev] [tc][Blazar] steps to big-tent project

2017-06-02 Thread Masahito MUROI

Dear TC team,

Blazar team is thinking to push a request about adding Blazar project 
into the OpenStack BigTent.


Based on documents in the governance repository[1], what the team is 
required to do for the request is just adding project's definition to 
references/projects.yaml. Is there another thing to do as Blazar team?


1. https://github.com/openstack/governance


best regards,
Masahito



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][horizon] FWaaS/VPNaaS dashboard split out from horizon

2017-06-02 Thread Sridar Kandaswamy (skandasw)
Thanks Akihiro. From an FWaaS perspective, in agreement with Takashi on
points below. On repository name, I am not religious as long as we keep
things consistent.

Thanks

Sridar

On 6/1/17, 12:59 AM, "Takashi Yamamoto"  wrote:

>On Wed, May 31, 2017 at 10:12 PM, Akihiro Motoki 
>wrote:
>> Hi all,
>>
>> As discussed last month [1], we agree that each neutron-related
>> dashboard has its own repository.
>> I would like to move this forward on FWaaS and VPNaaS
>> as the horizon team plans to split them out as horizon plugins.
>>
>> A couple of questions hit me.
>>
>> (1) launchpad project
>> Do we create a new launchpad project for each dashboard?
>> At now, FWaaS and VPNaaS projects use 'neutron' for their bug tracking
>> from the historical reason, it sometimes There are two choices: the
>> one is to accept dashboard bugs in 'neutron' launchpad,
>> and the other is to have a separate launchpad project.
>>
>> My vote is to create a separate launchpad project.
>> It allows users to search and file bugs easily.
>
>+1
>
>>
>> (2) repository name
>>
>> Are neutron-fwaas-dashboard / neutron-vpnaas-dashboard good repository
>> names for you?
>> Most horizon related projects use -dashboard or -ui as their
>>repo names.
>> I personally prefers to -dashboard as it is consistent with the
>> OpenStack dashboard
>> (the official name of horizon). On the other hand, I know some folks
>> prefer to -ui
>> as the name is shorter enough.
>> Any preference?
>
>+1 for -dashboard.
>-ui sounds too generic to me.
>
>>
>> (3) governance
>> neutron-fwaas project is under the neutron project.
>> Does it sound okay to have neutron-fwaas-dashboard under the neutron
>>project?
>> This is what the neutron team does for neutron-lbaas-dashboard before
>> and this model is adopted in most horizon plugins (like trove, sahara
>> or others).
>
>+1
>
>>
>> (4) initial core team
>>
>> My thought is to have neutron-fwaas/vpnaas-core and horizon-core as
>> the initial core team.
>> The release team and the stable team follow what we have for
>> neutron-fwaas/vpnaas projects.
>> Sounds reasonable?
>
>+1
>
>>
>>
>> Finally, I already prepare the split out version of FWaaS and VPNaaS
>> dashboards in my personal github repos.
>> Once we agree in the questions above, I will create the repositories
>> under git.openstack.org.
>
>great, thank you.
>
>>
>> [1] 
>>http://lists.openstack.org/pipermail/openstack-dev/2017-April/thread.html
>>#115200
>>
>> 
>>_
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev