Re: [openstack-dev] [rpm-packaging][karbor]

2017-07-13 Thread Chandan kumar
Hello Jiong,

Thank you for packaging karbor.

On Fri, Jul 14, 2017 at 11:49 AM, Jiong Liu  wrote:
> Hello rpm-packaging team and folks,
>
>
>
> I ran into trouble packaging an OpenStack project (karbor), which depends on
> two packages: icalendar and abclient.
>
> icalendar has both a pip package and an RPM package, but the RPM package
> cannot be found by RDO CI.

python-icalendar is available in Fedora:
https://koji.fedoraproject.org/koji/packageinfo?packageID=10783
We can pull it soon in RDO.

>
> abclient, meanwhile, only has a pip package and no RPM package.
>

abclient is not available in Fedora or RDO. I am packaging it, and it
will be available in RDO soon.
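
For reference, once both packages are available in RDO, the karbor spec
just needs to declare them as runtime dependencies. A minimal sketch,
assuming the packages follow the usual python- naming convention:

  Requires: python-icalendar
  Requires: python-abclient
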

>
>
> So in this case, what should I do to make sure these two packages can be
> installed via RPM when packaging karbor?
>
>
>
> My patch is uploaded to the rpm-packaging review list, as you can find here
> https://review.openstack.org/#/c/480806/
>

Thanks,

Chandan Kumar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [rpm-packaging][karbor]

2017-07-13 Thread Jiong Liu
Hello rpm-packaging team and folks,

 

I ran into trouble packaging an OpenStack project (karbor), which depends on
two packages: icalendar and abclient.

icalendar has both a pip package and an RPM package, but the RPM package
cannot be found by RDO CI.

abclient, meanwhile, only has a pip package and no RPM package.

 

So in this case, what should I do to make sure these two packages can be
installed via RPM when packaging karbor?

 

My patch is uploaded to the rpm-packaging review list, as you can find here
https://review.openstack.org/#/c/480806/

Your comments and help are much appreciated!

 

Thanks!

Jeremy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-07-13 Thread Amrith Kumar
Kevin,

In the interests of 'keeping it simple', I'm going to try to prioritize the
use-cases and pick implementation strategies which target the higher
priority ones without needlessly excluding other (lower priority) ones.

Thanks,

-amrith

--
Amrith Kumar

P.S. Verizon is hiring OpenStack engineers nationwide. If you are
interested, please contact me or visit https://t.co/gGoUzYvqbE


On Wed, Jul 12, 2017 at 5:46 PM, Fox, Kevin M  wrote:

> There is a use case where some sites have folks buy whole bricks of
> compute nodes that get added to the overarching cloud, but using AZ's or
> HostAggregates/Flavors to dedicate the hardware to the users.
>
> You might want to land the DB VM on the hardware for that project, and one
> would expect the normal quota would be dinged for it rather than a special
> trove quota. Otherwise they may have more quota than the hosts can actually
> handle.
>
> Thanks,
> Kevin
> 
> From: Doug Hellmann [d...@doughellmann.com]
> Sent: Wednesday, July 12, 2017 6:57 AM
> To: openstack-dev
> Subject: Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect
> Trove
>
> Excerpts from Amrith Kumar's message of 2017-07-12 06:14:28 -0500:
> > All:
> >
> > First, let me thank all of you who responded and provided feedback
> > on what I wrote. I've summarized what I heard below and am posting
> > it as one consolidated response rather than responding to each
> > of your messages and making this thread even deeper.
> >
> > As I say at the end of this email, I will be setting up a session at
> > the Denver PTG to specifically continue this conversation and hope
> > you will all be able to attend. As soon as time slots for PTG are
> > announced, I will try and pick this slot and request that you please
> > attend.
> >
> > 
> >
> > Thierry: naming issue; call it Hoard if it does not have a migration
> > path.
> >
> > 
> >
> > Kevin: use a container approach with k8s as the orchestration
> > mechanism, addresses multiple issues including performance. Trove to
> > provide containers for multiple components which cooperate to provide
> > a single instance of a database or cluster. Don't put all components
> > (agent, monitoring, database) in a single VM; decoupling makes
> > migration and upgrades easier and allows trove to reuse database
> > vendor supplied containers. Performance of databases in VMs is poor
> > compared to databases on bare-metal.
> >
> > 
> >
> > Doug Hellmann:
> >
> > > Does "service VM" need to be a first-class thing?  Akanda creates
> > > them, using a service user. The VMs are tied to a "router" which is
> > > the billable resource that the user understands and interacts with
> > > through the API.
> >
> > Amrith: Doug, yes, because we're looking not just at service VMs but all
> > resources provisioned by a service. So, to Matt's comment about a
> > blackbox DBaaS, the VMs, storage, snapshots, ... they should all be
> > owned by the service, charged to a user's quota but not visible to the
> > user directly.
>
> I still don't understand. If you have entities that represent the
> DBaaS "host" or "database" or "database backup" or whatever, then
> you put a quota on those entities and you bill for them. If the
> database actually runs in a VM or the backup is a snapshot, those
> are implementation details. You don't want to have to rewrite your
> quota management or billing integration if those details change.
>
> Doug
>
> >
> > 
> >
> > Jay:
> >
> > > Frankly, I believe all of these types of services should be built
> > > as applications that run on OpenStack (or other)
> > > infrastructure. In other words, they should not be part of the
> > > infrastructure itself.
> > >
> > > There's really no need for a user of a DBaaS to have access to the
> > > host or hosts the DB is running on. If the user really wanted
> > > that, they would just spin up a VM/baremetal server and install
> > > the thing themselves.
> >
> > and subsequently in follow-up with Zane:
> >
> > > Think only in terms of what a user of a DBaaS really wants. At the
> > > end of the day, all they want is an address in the cloud where they
> > > can point their application to write and read data from.
> > > ...
> > > At the end of the day, I think Trove is best implemented as a hosted
> > > application that exposes an API to its users that is entirely
> > > separate from the underlying infrastructure APIs like
> > > Cinder/Nova/Neutron.
> >
> > Amrith: Yes, I agree, +1000
> >
> > 
> >
> > Clint (in response to Jay's proposal regarding the service making all
> > resources multi-tenant) raised a concern about having multi-tenant
> > shared resources. The issue is with ensuring separation between
> > tenants (don't want to use the word isolation because this is database
> > related).
> >
> > Amrith: yes, definitely a concern and one that we don't have today
> > because each DB is a VM of its own. Personally, I'd rather stick with
> > that construct, 

Re: [openstack-dev] [all][tc] How to deal with confusion around "hosted projects"

2017-07-13 Thread Fei Long Wang
I agree with Zane on most of this. But one thing I don't really
understand is why the OpenStack community is still confused about IaaS,
PaaS and SaaS. Does the classification really matter nowadays? Do we
really need a label/tag for OpenStack that limits it to being an IaaS,
PaaS or SaaS? I never see AWS say it's an IaaS, PaaS or SaaS. Did Azure
or Google Cloud say that? I think they're just providing the service
their customers want.


On 14/07/17 05:03, Zane Bitter wrote:
> On 29/06/17 10:55, Monty Taylor wrote:
>> (Incidentally, I think it's unworkable to have an IaaS without DNS.
>> Other people have told me that having an IaaS without LBaaS or a
>> message queuing service is unworkable, while I neither need nor want
>> either of those things from my IaaS - they seem like PaaS components
>> to me)
>
> I resemble that remark, so maybe it's worth clarifying how I see things.
>
> In many ways the NIST definitions of SaaS/PaaS/IaaS from 2011, while
> helpful to cut through the vagueness of the 'cloud' buzzword and frame
> the broad outlines of cloud service models (at least at the time),
> have proven inadequate to describe the subtlety of the various
> possible offerings. The only thing that is crystal clear is that LBaaS
> and message queuing are not PaaS components ;)
>
> I'd like to suggest that the 'Platform' in PaaS means the same thing
> that it has since at least the '90s: the Operating System and possibly
> the language runtime, if any. The difference between PaaS and IaaS in
> terms of compute is that in the latter case you're given a machine and
> you install whatever platform you like on it, while in the former the
> platform is provided as a service. Hence the name.
>
> To the extent that hardware load balancers are used, LBaaS is pretty
> clearly IaaS. Hardware is infrastructure, if you provide access to
> that as a service it's Infrastructure as a Service. QED. It's also
> possible to provide software load balancers as a service. Technically
> I guess this is SaaS. Theoretically you could make an argument that an
> API that can abstract over either hardware or software load balancers
> is not "real" IaaS. And I would label that argument as BS sophistry :)
>
> The fact that PaaS implementations use load balancers internally is
> really neither here nor there.
>
> You can certainly build a useful cloud without LBaaS. That just means
> that anybody who needs load balancing will have to spin up their own
> software load balancer to do it. But that has a couple of
> consequences. One is that every application developer has to build
> their own orchestration to update the load balancer configuration when
> it needs to change. The other is that they're stuck with the least
> common denominator - if you use one cloud that doesn't have an LBaaS
> API, even one backed by software load balancers, then you won't be
> able to take advantage of hardware load balancers on another cloud
> without rewriting a chunk of your application. That's a big concern
> for OpenStack, which has application portability as one of its
> foremost goals. Thus an IaaS cloud that includes LBaaS is considerably
> more valuable than one that does not, for a large range of very common
> use cases.
>
> This is pretty much the same argument as I would make for DNSaaS.
> Without it you're developing your own orchestration and/or manually
> updating stuff every time you make a change in your infrastructure,
> which pretty much negates the benefits of IaaS for a very large subset
> of applications and leaves you stuck back in the pre-aaS days where
> making any changes to where your application ran was slow and painful.
> That's despite the fact that DNSaaS is arguably pure SaaS. This is
> where the NIST model breaks down IMHO. We tend to assume that only
> stuff that faces end users is SaaS and therefore everything
> developer-facing has to fall into either IaaS/PaaS. This results in
> IaaS developers treating "PaaS" as a catch-all bucket for "everything
> application developer-facing that I don't think is infrastructure",
> rather than a term that has meaning in itself.
>
> This confusion leads to the implicit argument that these kinds of
> developer-facing SaaS offerings are only useful to applications
> running in a PaaS, and therefore should be Someone Else's Problem,
> which is WRONG. It's wrong because different parts of an application
> have different needs. Just because I need to tweak the kernel
> parameters on my app server, it doesn't follow that I need to tweak
> the kernel parameters on my load balancer, or my database, too. It
> just doesn't.
>
> At one level, a message queue falls into the same bucket as other SaaS
> components (like DBaaS, sometimes LBaaS, &c.). Something that's useful
> to a subset of applications running in either an IaaS or a PaaS. The
> subset of applications that would use it is probably quite a bit
> smaller than the subset that would use e.g. LBaaS.
>
> However, there's also another dimension to

Re: [openstack-dev] [keystone] stable/ocata and stable/newton are broken

2017-07-13 Thread Lance Bragstad
Oh - the original issues with the stable branches were reported here:

https://bugs.launchpad.net/keystone/+bug/1704148


On 07/13/2017 06:00 PM, Lance Bragstad wrote:
> Colleen found out today while doing a backport that both of our stable
> branches are broken. After doing some digging, it looks like bug 1687593
> is the culprit [0]. The fix to that bug merged in master and the author
> added some nicely written functional tests using the
> keystone-tempest-plugin. The functional tests are being run against both
> stable branches, but the fix wasn't actually backported. As a result,
> both stable branches are bricked at the moment because of the functional
> tests.
>
> I've proposed the necessary backports for stable/ocata [1] and
> stable/newton [2], in addition to a cleaned up release note for master
> [3]. Any reviews would be greatly appreciated since we'll be doing a
> release of both stable branches relatively soon.
>
> Thanks!
>
>
> [0] https://bugs.launchpad.net/keystone/+bug/1687593
> [1]
> https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:stable/ocata+topic:bug/1687593
> [2]
> https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:stable/newton+topic:bug/1687593
> [3] https://review.openstack.org/#/c/483598/
>
>




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] stable/ocata and stable/newton are broken

2017-07-13 Thread Lance Bragstad
Colleen found out today while doing a backport that both of our stable
branches are broken. After doing some digging, it looks like bug 1687593
is the culprit [0]. The fix to that bug merged in master and the author
added some nicely written functional tests using the
keystone-tempest-plugin. The functional tests are being run against both
stable branches, but the fix wasn't actually backported. As a result,
both stable branches are bricked at the moment because of the functional
tests.

I've proposed the necessary backports for stable/ocata [1] and
stable/newton [2], in addition to a cleaned up release note for master
[3]. Any reviews would be greatly appreciated since we'll be doing a
release of both stable branches relatively soon.
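
For anyone picking up similar backports, these are plain cherry-picks of
the fix that merged on master, roughly along these lines (the sha being
whatever merged there):

  git checkout -b bug/1687593 origin/stable/ocata
  git cherry-pick -x <sha-of-master-fix>
  git review stable/ocata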

Thanks!


[0] https://bugs.launchpad.net/keystone/+bug/1687593
[1]
https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:stable/ocata+topic:bug/1687593
[2]
https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:stable/newton+topic:bug/1687593
[3] https://review.openstack.org/#/c/483598/




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] office hours report 2017-7-7

2017-07-13 Thread Mathieu Gagné
Resending... I found out that Gmail messed up my message with its HTML format...
Hopefully this time it will be more readable on the online archive interface.
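
For anyone unfamiliar with the templated backend discussed below: it
serves the catalog from a flat key/value template file instead of SQL,
along these lines (region, service, and endpoints purely illustrative):

  catalog.RegionOne.identity.publicURL = http://keystone.example.com:5000/v3
  catalog.RegionOne.identity.internalURL = http://keystone.example.com:5000/v3
  catalog.RegionOne.identity.adminURL = http://keystone.example.com:35357/v3
  catalog.RegionOne.identity.name = Identity Service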

On Tue, Jul 11, 2017 at 10:28 PM, Mathieu Gagné  wrote:
> Hi,
>
> So this email is relevant to my interests as an operator. =)
>
> On Tue, Jul 11, 2017 at 9:35 PM, Lance Bragstad  wrote:
>>
>> The future of the templated catalog backend
>>
>> Some issues were uncovered, or just resurfaced, with the templated catalog
>> backend. The net of the discussion boiled down to - do we fix it or remove
>> it? The answer actually ended up being both. It was determined that instead
>> of trying to maintain and fix the existing templated backend, we should
>> deprecate it for removal [0]. Since it does provide some value, it was
>> suggested that we can start implementing a new backend based on YAML to fill
>> the purpose instead. The advantage here is that the approach is directed
>> towards a specific format (YAML). This should hopefully make things easier
>> for both developers and users.
>>
>> [0] https://review.openstack.org/#/c/482714/
>
>
> We have been exclusively using the templated catalog backend for at least 5
> years without any major issues. And it looks like we are now among the < 3%
> using templated according to the April 2017 user survey.
> ¯\_(ツ)_/¯
>
> We chose the templated catalog backend for its simplicity (especially with
> our CMS) and because it makes no sense (to me) to use and rely on an
> SQL server to serve what is essentially static content.
>
> Regarding the v3 catalog support, we do have an in-house fix we intended to
> upstream very soon (and just did right now). [1]
>
> So if the templated catalog backend gets deprecated,
> my wish would be to have access to an alternate file-based,
> production-grade implementation ready to be used
> before I get spammed with deprecation warnings in the keystone logs.
>
> Thanks
>
> [1] https://review.openstack.org/#/c/482766/
>
> --
> Mathieu
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack Diversity BoF

2017-07-13 Thread T. Nichole Williams

Hi everyone,

Since we still have a few days to submit presentation proposals for Summit, I
was wondering if anyone would like to join me on a diversity panel or BoF for
women (specifically WOC) in OpenStack?


<3 Trilliams

Sent from my iPhone


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] How to deal with confusion around "hosted projects"

2017-07-13 Thread Jeremy Stanley
On 2017-07-13 13:03:26 -0400 (-0400), Zane Bitter wrote:
[...]
> This is pretty much the same argument as I would make for DNSaaS.
> Without it you're developing your own orchestration and/or
> manually updating stuff every time you make a change in your
> infrastructure, which pretty much negates the benefits of IaaS for
> a very large subset of applications and leaves you stuck back in
> the pre-aaS days where making any changes to where your
> application ran was slow and painful.
[...]

For the most part I would agree with you, and in fact I myself run
geographically-distributed authoritative nameservers for my own
domains on general purpose virtual machines so I can have better
control over things like DNSSEC but there is one exception.

At least I (and many sysadmins I know personally or professionally)
expect working reverse DNS for the systems we maintain. Reverse
DNS either has to be set through or delegated by the holder of the
IP address assignments (the LIR in IANA's terminology). If you can't
get _some_ direct mechanism from your provider to set reverse DNS
entries for the systems you're running there, then that's bad. If
they don't provide an API or some tight integration with their
server management automation to set your chosen reverse DNS entries
on each new system you boot, and you're instead relegated to
submitting a trouble ticket or change request to get that added
afterward, then it's not an especially functional service.

It doesn't matter that I can boot a server in your environment in
seconds if it then sits there for hours or days before I have proper
reverse DNS added so I feel comfortable putting it into production.
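
For concreteness, reverse DNS here means PTR records in the
in-addr.arpa tree, which only the holder of the address block can
publish or delegate. Addresses and names below are purely illustrative:

  ; 192.0.2.42 -> web01.example.com
  42.2.0.192.in-addr.arpa.  IN  PTR  web01.example.com.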
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][doc] migration update

2017-07-13 Thread Doug Hellmann
We have recovered old tagged content from
https://docs.openstack.org/developer/$project and moved it to
https://docs.openstack.org/$project/$version. As part of this process,
we also kept any builds from the tip of the mitaka, newton, and ocata
branches using those names as $version. And finally, for the projects
that have had no doc builds since we updated the output location of the
documentation job, we moved their /developer/$project content over to
/$project/latest.
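
For consumers of the old URLs, the mapping for that last group is
mechanical; a hypothetical redirect rule (illustrative only, not
necessarily what the docs site actually deploys) would look like:

  RedirectMatch 301 ^/developer/([^/]+)/(.*)$ /$1/latest/$2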

Big thanks to fungi for helping with the restoration!

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][glance] Release countdown for week R-6, July 14 - July 21

2017-07-13 Thread Sean McGinnis
On Thu, Jul 13, 2017 at 01:35:00PM -0400, Doug Hellmann wrote:
> Excerpts from Thierry Carrez's message of 2017-07-13 17:44:13 +0200:
> 
> > glance-store and instack haven't made a Pike release yet: if nothing is
> > done by July 20, one release will be forced (on master HEAD) so that we
> > have something to cut a stable branch from.
> 
> I have prepared a patch for a glance-store 0.21.0 release [1]. It would
> be best if the glance team signed off on that before we approve it, but
> either way we will ensure there is a release before the deadline next
> week.
> 
> Doug
> 
> [1] https://review.openstack.org/483467
> 

This was brought up in the Glance meeting today, and there are some final
reviews being done on some outstanding patches. The Glance core team does
have their eye on the non-client library freeze, so I think we can expect
a release requested prior to the deadline.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glare][TC] Application for inclusion of Glare in the list of official projects

2017-07-13 Thread Erno Kuvaja
On Thu, Jul 13, 2017 at 4:43 PM, Monty Taylor  wrote:
> On 07/13/2017 08:42 AM, Erno Kuvaja wrote:
>>
>> On Wed, Jul 12, 2017 at 1:21 AM, Monty Taylor 
>> wrote:
>>>
>>> On 07/11/2017 06:47 AM, Flavio Percoco wrote:


 On 11/07/17 14:20 +0300, Mikhail Fedosin wrote:
>
>
> On Tue, Jul 11, 2017 at 1:43 AM, Monty Taylor 
> wrote:
>
>> On 07/10/2017 04:31 PM, Mikhail Fedosin wrote:
>>>
>>>
>>> Third, all these changes can be hidden in Glare client. So if we try
>>> a
>>> little, we can achieve 100% compatibility there, and other projects
>>> can
>>> use
>>> Glare client instead of Glance's without even noticing the
>>> differences.
>>>
>>
>> I think we should definitely not do this... I think instead, if we
>> decide
>> to go down this road, we want to look at adding an endpoint to glare
>> that
>> speaks glance v2 API so that users can have a transition period while
>> libraries and tools get updated to understand the artifacts API.
>
>
>
>
> This is optional and depends on the project developers. For my part, I
> can
> only offer the most compatible client, so that the Glance module can be
> simply copied into the new Glare module.



 Unfortunately, adding this sort of logic to the client is almost never
 the
 right
 choice. To be completely honest, I'm not even convinced having a
 Glance-like API
 in Glare is the right thing to do. As soon as that API hits the
 codebase,
 you'll
 have to maintain it.

 Anything that delays the transition to the new thing is providing a fake
 bridge
 to the users. It's a bridge that will be blown-up eventually.

 To make a hypothetical transition from Glance to Glare works smoothly,
 we
 should
 first figure out how to migrate the database (assuming this has not been
 done
 yet), how to migrate the images, etc. Only when these things have been
 figured
 out, I'd start worrying about what compatibility layer we want to
 provide.
 The
 answer could also be: "Hey, we're sorry but, the best thing you can do
 is
 to
 migrate your code base as soon as possible".
>>>
>>>
>>>
>>> I think this is a deal breaker. The problem is - if glare doesn't provide
>>> a
>>> v2 compat layer, then a deployer is going to have to run glance AND glare
>>> at
>>> the same time and we'll have to make sure both glance and glare can write
>>> to
>>> the same backend.
>>>
>>> The reason is that with our major version bumps both versions co-exist
>>> for a
>>> period of time which allows consumers to gracefully start consuming the
>>> nicer and newer api while not being immediately broken when the old api
>>> isn't there.
>>>
>>> What we'd be looking at is:
>>>
>>> * a glare service that runs two endpoints - an /image endpoint and an
>>> /artifact endpoint - and that registers the /image endpoint with the
>>> catalog
>>> as the 'image' service_type and the /artifact endpoint with the catalog
>>> as
>>> the 'artifact' service_type followed by a deprecation period of the image
>>> endpoint from the bazillion things that use it and a migration to the
>>> artifact service.
>>>
>>> OR
>>>
>>> First - immediately bump the glare api version to 3.0. This will affect
>>> some glare users, but given the relative numbers of glance v. glare
>>> users, it may be the right choice.
>>>
>>> Run a single set of versioned endpoints - no /v1, /v2 has /image at the
>>> root
>>> and /v3 has /artifact at the root. Register that endpoint with the
>>> catalog
>>> as both artifact and image.
>>>
>>> That means service and version discovery will find the /v2 endpoint of
>>> the
>>> glare service if someone says "I want 'image' api 'v2'". It's already
>>> fair
>>> game for a cloud to run without v1 - so that's not a problem. (This, btw,
>>> is
>>> the reason glare has to bump its api to v3 - if it still had a v1 in its
>>> version discovery document, glance users would potentially find that but
>>> it
>>> would not be a v1 of the image API)
>>>
>>> In both cases, /v2/images needs to be the same as glance /v2/images. If
>>> both
>>> are running side-by-side, which is how we normally do major version
>>> bumps,
>>> then client tools and libraries can use the normal version discovery
>>> process
>>> to discover that the cloud has the new /v3 version of the api with
>>> service-type of 'image', and they can decide if they want to use it or
>>> not.
>>>
>>>
>>> Yes - this is going to provide a pile of suck for the glare team, because
>>> they're going to have to maintain an API mapping layer, and they're going
>>> to
>>> have to maintain it for a full glance v2 api deprecation period. Because
>>> glance v2 is in DefCore, that is longer than a normal deprecation
>>> period - but that's life.
>>
>>
>> Just to clarify something here. These plans are still not aligned with
>> cu

Re: [openstack-dev] [Glare][TC] Application for inclusion of Glare in the list of official projects

2017-07-13 Thread Monty Taylor

On 07/12/2017 02:02 AM, Flavio Percoco wrote:

On 11/07/17 19:21 -0500, Monty Taylor wrote:

On 07/11/2017 06:47 AM, Flavio Percoco wrote:

On 11/07/17 14:20 +0300, Mikhail Fedosin wrote:

On Tue, Jul 11, 2017 at 1:43 AM, Monty Taylor
 wrote:


On 07/10/2017 04:31 PM, Mikhail Fedosin wrote:
Third, all these changes can be hidden in Glare client. So if we 
try a

little, we can achieve 100% compatibility there, and other
projects can use
Glare client instead of Glance's without even noticing the 
differences.




I think we should definitely not do this... I think instead, if
we decide
to go down this road, we want to look at adding an endpoint to
glare that
speaks glance v2 API so that users can have a transition period while
libraries and tools get updated to understand the artifacts API.



This is optional and depends on the project developers. For my
part, I can
only offer the most compatible client, so that the Glance module can be
simply copied into the new Glare module.


Unfortunately, adding this sort of logic to the client is almost
never the right
choice. To be completely honest, I'm not even convinced having a
Glance-like API
in Glare is the right thing to do. As soon as that API hits the
codebase, you'll
have to maintain it.

Anything that delays the transition to the new thing is providing a
fake bridge
to the users. It's a bridge that will be blown-up eventually.

To make a hypothetical transition from Glance to Glare works
smoothly, we should
first figure out how to migrate the database (assuming this has not
been done
yet), how to migrate the images, etc. Only when these things have
been figured
out, I'd start worrying about what compatibility layer we want to
provide. The
answer could also be: "Hey, we're sorry but, the best thing you can
do is to
migrate your code base as soon as possible".


I think this is a deal breaker. The problem is - if glare doesn't
provide a v2 compat layer, then a deployer is going to have to run
glance AND glare at the same time and we'll have to make sure both
glance and glare can write to the same backend.

The reason is that with our major version bumps both versions co-exist
for a period of time which allows consumers to gracefully start
consuming the nicer and newer api while not being immediately broken
when the old api isn't there.

What we'd be looking at is:

* a glare service that runs two endpoints - an /image endpoint and an
/artifact endpoint - and that registers the /image endpoint with the
catalog as the 'image' service_type and the /artifact endpoint with
the catalog as the 'artifact' service_type followed by a deprecation
period of the image endpoint from the bazillion things that use it and
a migration to the artifact service.

OR

First - immediately bump the glare api version to 3.0. This will affect
some glare users, but given the relative numbers of glance v. glare
users, it may be the right choice.

Run a single set of versioned endpoints - no /v1, /v2 has /image at
the root and /v3 has /artifact at the root. Register that endpoint
with the catalog as both artifact and image.

That means service and version discovery will find the /v2 endpoint of
the glare service if someone says "I want 'image' api 'v2'". It's
already fair game for a cloud to run without v1 - so that's not a
problem. (This, btw, is the reason glare has to bump its api to v3 -
if it still had a v1 in its version discovery document, glance users
would potentially find that but it would not be a v1 of the image API)

In both cases, /v2/images needs to be the same as glance /v2/images.
If both are running side-by-side, which is how we normally do major
version bumps, then client tools and libraries can use the normal
version discovery process to discover that the cloud has the new /v3
version of the api with service-type of 'image', and they can decide
if they want to use it or not.
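
As a sketch, the version discovery document for such a combined endpoint
might look roughly like this (URLs and status values illustrative):

  {
    "versions": [
      {"id": "v2.0", "status": "SUPPORTED",
       "links": [{"rel": "self", "href": "https://glare.example.com/v2/"}]},
      {"id": "v3.0", "status": "CURRENT",
       "links": [{"rel": "self", "href": "https://glare.example.com/v3/"}]}
    ]
  }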


Yes - this is going to provide a pile of suck for the glare team,
because they're going to have to maintain an API mapping layer, and
they're going to have to maintain it for a full glance v2 api
deprecation period. Because glance v2 is in DefCore, that is longer
than a normal deprecation period - but that's life.


Right! This is the extended version of what I tried to say. :D


\o/

I'm not a huge fan of the Glare team having a Glance v2 API but I think
it's our best option forward. FWIW, this WAS tried before but a bit
different. Remember the Glance v3 discussion?

That Glance v3 was Glare living in Glance's codebase. The main
difference now is that it would be Glare providing Glance's v2 and
Glare's v3 rather than Glance doing yet another major version change.

I still think we should figure out how to migrate a Glance deployment
to Glare (database, stores, etc) before the work on this API even
starts. I would like to see a good plan forward for this.

Ultimately, the thing I definitely don't want to see happening is any logic
being hard-coded inside client libraries.


Yes. I wh

Re: [openstack-dev] [release][glance] Release countdown for week R-6, July 14 - July 21

2017-07-13 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2017-07-13 17:44:13 +0200:

> glance-store and instack haven't made a Pike release yet: if nothing is
> done by July 20, one release will be forced (on master HEAD) so that we
> have something to cut a stable branch from.

I have prepared a patch for a glance-store 0.21.0 release [1]. It would
be best if the glance team signed off on that before we approve it, but
either way we will ensure there is a release before the deadline next
week.

Doug

[1] https://review.openstack.org/483467
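
The release request itself is just a patch to the openstack/releases
repo adding the new version to the deliverable file, roughly:

  releases:
    - version: 0.21.0
      projects:
        - repo: openstack/glance_store
          hash: <master HEAD sha>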

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] How to deal with confusion around "hosted projects"

2017-07-13 Thread Zane Bitter

On 29/06/17 10:55, Monty Taylor wrote:
(Incidentally, I think it's unworkable to have an IaaS without DNS. 
Other people have told me that having an IaaS without LBaaS or a message 
queuing service is unworkable, while I neither need nor want either of 
those things from my IaaS - they seem like PaaS components to me)


I resemble that remark, so maybe it's worth clarifying how I see things.

In many ways the NIST definitions of SaaS/PaaS/IaaS from 2011, while 
helpful to cut through the vagueness of the 'cloud' buzzword and frame 
the broad outlines of cloud service models (at least at the time), have 
proven inadequate to describe the subtlety of the various possible 
offerings. The only thing that is crystal clear is that LBaaS and 
message queuing are not PaaS components ;)


I'd like to suggest that the 'Platform' in PaaS means the same thing
that it has since at least the '90s: the Operating System and possibly
the language runtime, if any. The difference between PaaS and IaaS in
terms of compute is that in the latter case you're given a machine and 
you install whatever platform you like on it, while in the former the 
platform is provided as a service. Hence the name.


To the extent that hardware load balancers are used, LBaaS is pretty 
clearly IaaS. Hardware is infrastructure, if you provide access to that 
as a service it's Infrastructure as a Service. QED. It's also possible 
to provide software load balancers as a service. Technically I guess 
this is SaaS. Theoretically you could make an argument that an API that 
can abstract over either hardware or software load balancers is not 
"real" IaaS. And I would label that argument as BS sophistry :)


The fact that PaaS implementations use load balancers internally is 
really neither here nor there.


You can certainly build a useful cloud without LBaaS. That just means 
that anybody who needs load balancing will have to spin up their own 
software load balancer to do it. But that has a couple of consequences. 
One is that every application developer has to build their own 
orchestration to update the load balancer configuration when it needs to 
change. The other is that they're stuck with the least common 
denominator - if you use one cloud that doesn't have an LBaaS API, even 
one backed by software load balancers, then you won't be able to take 
advantage of hardware load balancers on another cloud without rewriting 
a chunk of your application. That's a big concern for OpenStack, which 
has application portability as one of its foremost goals. Thus an IaaS 
cloud that includes LBaaS is considerably more valuable than one that 
does not, for a large range of very common use cases.


This is pretty much the same argument as I would make for DNSaaS. 
Without it you're developing your own orchestration and/or manually 
updating stuff every time you make a change in your infrastructure, 
which pretty much negates the benefits of IaaS for a very large subset 
of applications and leaves you stuck back in the pre-aaS days where 
making any changes to where your application ran was slow and painful. 
That's despite the fact that DNSaaS is arguably pure SaaS. This is where 
the NIST model breaks down IMHO. We tend to assume that only stuff that 
faces end users is SaaS and therefore everything developer-facing has to 
fall into either IaaS/PaaS. This results in IaaS developers treating 
"PaaS" as a catch-all bucket for "everything application 
developer-facing that I don't think is infrastructure", rather than a 
term that has meaning in itself.


This confusion leads to the implicit argument that these kinds of 
developer-facing SaaS offerings are only useful to applications running 
in a PaaS, and therefore should be Someone Else's Problem, which is 
WRONG. It's wrong because different parts of an application have 
different needs. Just because I need to tweak the kernel parameters on 
my app server, it doesn't follow that I need to tweak the kernel 
parameters on my load balancer, or my database, too. It just doesn't.


At one level, a message queue falls into the same bucket as other SaaS 
components (like DBaaS, sometimes LBaaS, &c.). Something that's useful 
to a subset of applications running in either an IaaS or a PaaS. The 
subset of applications that would use it is probably quite a bit smaller 
than the subset that would use e.g. LBaaS.


However, there's also another dimension to asynchronous messaging in 
particular. If a cloud needs to notify an application about events in or 
changes to its infrastructure, then if it has a built-in *reliable* 
message queue API it can reuse it to deliver these notifications. 
(Actually, I would put it the other way around: if a cloud has an API to 
deliver messages to applications from the infrastructure, it can also 
allow applications to reuse this capability for their own purposes.) 
Again, you can certainly have an IaaS cloud that doesn't provide an 
asynchronous messaging capability. But it wi

Re: [openstack-dev] [all] Announcing openstack-sigs ML

2017-07-13 Thread Doug Hellmann
Excerpts from Melvin Hillsman's message of 2017-07-13 08:51:16 -0500:

>1. Start the openstack-sigs mailing list

You can subscribe by visiting
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] priorities for the coming week (07/13 - 07/20)

2017-07-13 Thread Brian Rosmaita
As discussed at today's Glance meeting, here are the priorities for
the coming week.

(1) The uwsgi problem that abhishekk is working on:
https://bugs.launchpad.net/glance/+bug/1703856
Keep an eye on #openstack-glance in case he needs help, and keep an
eye on the bug to see what patches need to be looked at.

(2) The final release for non-client libraries is next week, so this
is our last chance to whip glance_store into shape for Pike.  List of
patches to review is here:
https://etherpad.openstack.org/p/glance-store-priority-reviews-pike

(3) Image import refactor: please review
https://review.openstack.org/#/c/482182/


Have a productive week!
brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-wg/news

2017-07-13 Thread Ed Leafe
Greetings OpenStack community,

If you missed our meeting today, well, you didn't miss much. Not a lot of new 
things going on. The bulk of the meeting was taken up with ideas for what we'd 
like to accomplish at the upcoming Denver PTG. We thought that it might be 
useful to have an informal "what do you think about our API" session, where 
people from various projects could bring up issues they are concerned about in 
their APIs, and we could then discuss what might be a better approach. The 
initial idea isn't to force anything, but rather to have these discussions 
*before* an API is released, so that there might be fewer problems later on. 
These are still initial working ideas, so we agreed to think this through a bit 
more before next week's meeting. If you have ideas, please let the group know.

# Newly Published Guidelines

None this week.

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None this week

# Guidelines Currently Under Review [3]

* A (shrinking) suite of several documents about doing version and service 
discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API WG, please address your 
concerns in an email to the OpenStack developer mailing list[1] with the tag 
"[api]" in the subject. In your email, you should include any relevant reviews, 
links, and comments to help guide the discussion of the specific challenge you 
are facing.

To learn more about the API WG mission and the work we do, see OpenStack API 
Working Group [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z


Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_wg/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

-- Ed Leafe







signature.asc
Description: Message signed with OpenPGP
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Blueprint review focus toward feature freeze (July 27)

2017-07-13 Thread Matt Riedemann
As discussed in the nova meeting today [1] I have started an etherpad of 
blueprint changes up for review [2] broken down into categories to help 
focus reviews.


I did something similar in the Newton release and use this to help 
myself organize my TODO list, sort of like a Kanban board. As things 
make progress they move to the top.


I've already filled out some of the changes which are very close to 
completing the blueprint and we could get done this week.


Then I've started noting changes against priority efforts with notes.

I will then start filling in categories for other blueprints that need 
attention and try to prioritize those based on what I think can actually 
get completed in Pike. For example, if there are two changes which 
haven't gotten much review attention but I feel like one has a better 
chance of getting completed before the feature freeze, I will prioritize 
that one higher. Some people might think this is unfair, but the way I 
see it is, if we're going to focus on something, I'd rather it be the 
thing that can be done, rather than divide our attention and fail to get 
either done.


Please let me know if there are questions.

[1] 
http://eavesdrop.openstack.org/meetings/nova/2017/nova.2017-07-13-14.00.html

[2] https://etherpad.openstack.org/p/nova-pike-feature-freeze-status

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] Status update, July 13th

2017-07-13 Thread Thierry Carrez
Hi!

With a few hours in advance (blame Bastille Day!) this is the weekly
update on Technical Committee initiatives. You can find the full list of
all open topics at:

https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee


== Recently-approved changes ==

* Add Queens goal: policy and docs in code [1][2][3]
* Add champions to selected Pike goals [4]
* Remove remaining 'big tent' references [5]
* New repositories: castellan-ui, tripleo-upgrade, os-service-types,
tempest-tripleo-ui

[1] https://review.openstack.org/#/c/469954/
[2] https://review.openstack.org/#/c/475882/
[3] https://review.openstack.org/#/c/481654/
[4] https://review.openstack.org/#/c/482869/
[5] https://review.openstack.org/#/c/480500/

So the cross-project goals for the Queens cycle have been defined, you
can find them at:

https://governance.openstack.org/tc/goals/queens/index.html

One difference with the Pike goals is that each goal will have an active
champion to project-manage it to completion, providing assistance,
reminders and sometimes direct help to the project teams.


== Open discussions ==

We have two new project teams being actively discussed, for potential
approval once the Queens cycle is opened: Glare[6][7] and Blazar[8].
Please ask questions or voice your objections at:

[6] https://review.openstack.org/#/c/479285/
[7] http://lists.openstack.org/pipermail/openstack-dev/2017-July/119442.html
[8] https://review.openstack.org/#/c/482860/

Discussion is still in progress on two clarifications from John Garbutt,
where new patchsets have been recently posted:

Decisions should be globally inclusive:
https://review.openstack.org/#/c/460946/

Describe what upstream support means:
https://review.openstack.org/440601


== Voting in progress ==

The TC vision for 2019 was discussed at a recent TC meeting[9],
brilliantly summarized by cdent at [10].

[9] http://eavesdrop.openstack.org/meetings/tc/2017/tc.2017-07-11-20.01.html
[10]
http://lists.openstack.org/pipermail/openstack-dev/2017-July/119569.html

It now has majority votes, and is about to be approved early next week,
in the form of the following patch chain:

https://review.openstack.org/#/c/453262/
https://review.openstack.org/#/c/473620/
https://review.openstack.org/#/c/482152/
https://review.openstack.org/#/c/482686/

This vision is of course produced on a specific date by a specific
membership of the TC. Future TC members should definitely revisit it and
potentially produce a new vision for another point in the future if they
feel like the current vision is no longer productive.

The long-standing "Declare plainly the current state of PostgreSQL in
OpenStack" resolution is just one formal vote away from being approved:

https://review.openstack.org/#/c/427880/

Finally, a number of tag changes are about to be approved: unless there
are objections congress should get the stable:follows-policy tag[11],
and Nova and Ironic should get the assert:supports-api-interoperability
tag[12]

[11] https://review.openstack.org/479030
[12] https://review.openstack.org/482759

== TC member actions for the coming week(s) ==

flaper87 to update "Drop Technical Committee meetings" with a new
revision

dims to sync with Stackube on progress and report back


== Need for a TC meeting next Tuesday ==

No initiative is currently stuck, so there is no need for a TC meeting
next week.


Cheers!

-- 
Thierry Carrez (ttx)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glare][TC] Application for inclusion of Glare in the list of official projects

2017-07-13 Thread Monty Taylor

On 07/13/2017 08:42 AM, Erno Kuvaja wrote:

On Wed, Jul 12, 2017 at 1:21 AM, Monty Taylor  wrote:

On 07/11/2017 06:47 AM, Flavio Percoco wrote:


On 11/07/17 14:20 +0300, Mikhail Fedosin wrote:


On Tue, Jul 11, 2017 at 1:43 AM, Monty Taylor 
wrote:


On 07/10/2017 04:31 PM, Mikhail Fedosin wrote:


Third, all these changes can be hidden in Glare client. So if we try a
little, we can achieve 100% compatibility there, and other projects can
use
Glare client instead of Glance's without even noticing the differences.



I think we should definitely not do this... I think instead, if we
decide
to go down this road, we want to look at adding an endpoint to glare
that
speaks glance v2 API so that users can have a transition period while
libraries and tools get updated to understand the artifacts API.




This is optional and depends on the project developers. For my part, I
can
only offer the most compatible client, so that the Glance module can be
simply copied into the new Glare module.



Unfortunately, adding this sort of logic to the client is almost never the
right
choice. To be completely honest, I'm not even convinced having a
Glance-like API
in Glare is the right thing to do. As soon as that API hits the codebase,
you'll
have to maintain it.

Anything that delays the transition to the new thing is providing a fake
bridge
to the users. It's a bridge that will be blown-up eventually.

To make a hypothetical transition from Glance to Glare works smoothly, we
should
first figure out how to migrate the database (assuming this has not been
done
yet), how to migrate the images, etc. Only when these things have been
figured
out, I'd start worrying about what compatibility layer we want to provide.
The
answer could also be: "Hey, we're sorry but, the best thing you can do is
to
migrate your code base as soon as possible".



I think this is a deal breaker. The problem is - if glare doesn't provide a
v2 compat layer, then a deployer is going to have to run glance AND glare at
the same time and we'll have to make sure both glance and glare can write to
the same backend.

The reason is that with our major version bumps both versions co-exist for a
period of time which allows consumers to gracefully start consuming the
nicer and newer api while not being immediately broken when the old api
isn't there.

What we'd be looking at is:

* a glare service that runs two endpoints - an /image endpoint and an
/artifact endpoint - and that registers the /image endpoint with the catalog
as the 'image' service_type and the /artifact endpoint with the catalog as
the 'artifact' service_type followed by a deprecation period of the image
endpoint from the bazillion things that use it and a migration to the
artifact service.

OR

First - immediately bump the glare api version to 3.0. This will affect some
glare users, but given the relative numbers of glance v. glare users, it may
be the right choice.

Run a single set of versioned endpoints - no /v1, /v2 has /image at the root
and /v3 has /artifact at the root. Register that endpoint with the catalog
as both artifact and image.

That means service and version discovery will find the /v2 endpoint of the
glare service if someone says "I want 'image' api 'v2'". It's already fair
game for a cloud to run without v1 - so that's not a problem. (This, btw, is
the reason glare has to bump its api to v3 - if it still had a v1 in its
version discovery document, glance users would potentially find that but it
would not be a v1 of the image API)

In both cases, /v2/images needs to be the same as glance /v2/images. If both
are running side-by-side, which is how we normally do major version bumps,
then client tools and libraries can use the normal version discovery process
to discover that the cloud has the new /v3 version of the api with
service-type of 'image', and they can decide if they want to use it or not.


Yes - this is going to provide a pile of suck for the glare team, because
they're going to have to maintain an API mapping layer, and they're going to
have to maintain it for a full glance v2 api deprecation period. Because
glance v2 is in DefCore, that is longer than a normal deprecation period -
but that's life.


Just to clarify something here. These plans are still not aligned with
current plans in the Glance community. So we have no intention to
deprecate the Images API v2 nor to stop supporting it. If Glare wants to
maintain functional parity with the Images API v2 and we get every single
deployment/project consuming Glance to move to Glare, then we can talk
again, but for the foreseeable future Glance is moving forward with the
Images API v2 maintained and supported (and developed forward within
our resources).


AH! Sorry for the confusion generated on my part; let me clarify.

What I'm talking about is the technical work that would be an absolute 
requirement for us to CONSIDER the proposal to merge the efforts or 
replace one with the other. By no means has such a decision been made.


What I'm tal

[openstack-dev] [release] Release countdown for week R-6, July 14 - July 21

2017-07-13 Thread Thierry Carrez
Welcome to our regular release countdown email!

Development Focus
-

Teams should be wrapping up Pike feature work, prioritizing non-client
libraries first, then everything else (services, client libraries...)


General Information
---

Next week is the deadline for releasing non-client libraries (meaning:
all libraries that are not python-${PROJECT}client API client libraries).

stable/pike branches will be cut from the most recent Pike releases. So
if your master branch contains changes that you want to see in the Pike
release branch, you should definitely consider asking for a new release.

glance-store and instack haven't made a Pike release yet: if nothing is
done by July 20, one release will be forced (on master HEAD) so that we
have something to cut a stable branch from.

For the rest of the cycle-with-milestones deliverables, feature freeze
will hit on July 27. After that date, only bugfixes should be accepted
(feature freeze exceptions may be granted up to RC1 for services).
During all that period, StringFreeze is in effect, in order to let the
I18N team do the translation work in good conditions. The StringFreeze
is soft (allowing exceptions as long as they are discussed on the
mailing-list and deemed worth the effort). It becomes a hard
StringFreeze on August 10.

See all details at: https://releases.openstack.org/pike/schedule.html


Actions
---

All project teams should prioritize updating the organization of their
documentation according to the migration spec [1] to facilitate
linking to it accurately from docs.openstack.org. Please complete
this work before creating your stable/pike branch. Contact
the docs team if you need assistance or have questions.

[1]
http://specs.openstack.org/openstack/docs-specs/specs/pike/os-manuals-migration.html


Upcoming Deadlines & Dates
--

Non-client libraries final releases: July 20
Client libraries final releases: July 27
Pike-3 milestone (and Feature freeze): July 27
Final Pike release: August 30
Queens PTG in Denver: Sept 11-15

-- 
Thierry Carrez (ttx)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TR: [tricircle]

2017-07-13 Thread Steve Gordon
- Original Message -
> From: "joehuang" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Tuesday, July 11, 2017 8:12:16 PM
> Subject: Re: [openstack-dev] TR: [tricircle]
> 
> Hi Meher,
> 
> Yes, as Victor pointed out, it should be done by the devstack script. But in
> our daily development, I (and maybe most of us) use Ubuntu to install
> Tricircle through devstack, so I am not sure whether there is some bug under
> RHEL, and I have no RHEL distribution.

My recollection is that Apache configuration is under /etc/httpd/ on 
RHEL/CentOS/Fedora.
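
On those distributions httpd automatically picks up any *.conf file
dropped under /etc/httpd/conf.d/, so rather than recreating the Debian
sites-available/sites-enabled layout you would just check for the
generated vhost there and restart the service, e.g.:

  ls /etc/httpd/conf.d/keystone*.conf
  systemctl restart httpd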

-Steve

> Best Regards
> Chaoyi Huang (joehuang)
> 
> From: Morales, Victor [victor.mora...@intel.com]
> Sent: 12 July 2017 0:13
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] TR: [tricircle]
> 
> Hi Meher,
> 
> I don’t think that you need to create those folders or at least that it’s
> shown in the devstack functions[1].
> 
> Regards/Saludos
> Victor Morales
> 
> [1]
> https://github.com/openstack-dev/devstack/blob/master/lib/apache#L178-L192
> 
> From: "meher.h...@orange.com" 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: Tuesday, July 11, 2017 at 7:51 AM
> To: "openstack-dev@lists.openstack.org" 
> Subject: [openstack-dev] TR: [tricircle]
> 
> 
> 
> 
> 
> De : HIHI Meher IMT/OLN
> Envoyé : mardi 11 juillet 2017 14:50
> À : HIHI Meher IMT/OLN
> Objet : RE: [openstack-dev][tricircle]
> 
> Hi Zhiyuan,
> 
> Thank you for the response! So, in this case, I just need to create two
> "sites-available" and "sites-enabled" folders under /etc/ httpd and put in
> the config files found in /etc/httpd/conf.d/?
> 
> Regards,
> 
> Meher
> 
> [Logo Orange]
> 
> Meher Hihi
> Intern
> ORANGE/IMT/OLN/WTC/CMA/MAX
> Fixe : +33 2 96 07 03 71
> Mobile : +33 7 58 38 68 87
> meher.h...@orange.com
> 
> 
> De : HIHI Meher IMT/OLN
> Envoyé : lundi 10 juillet 2017 16:10
> À : 'openstack-dev@lists.openstack.org'
> Objet : RE: [openstack-dev][tricircle]
> 
> Hello everybody,
> 
> I posted earlier about a problem related to installing the tricircle on a
> single node; the script stopped at keystone startup. You advised me to check
> the /etc/apache2/sites-enabled folder to see if the keystone config files
> are included. But I have not found this folder, even though the httpd service
> is properly installed. Does the name of this folder change according to the
> distribution? I use RHEL 7, thank you in advance!
> 
> Meher
> 
> [Logo Orange]
> 
> Meher Hihi
> Intern
> ORANGE/IMT/OLN/WTC/CMA/MAX
> Fixe : +33 2 96 07 03 71
> Mobile : +33 7 58 38 68 87
> meher.h...@orange.com
> 
> 
> De : HIHI Meher IMT/OLN
> Envoyé : mercredi 28 juin 2017 15:12
> À : 'openstack-dev@lists.openstack.org'
> Objet : [openstack-dev][tricircle]
> 
> Hello everyone,
> 
> I introduce myself; Meher Hihi; I am doing my internship at Orange Labs
> Networks Lannion-France for the diploma of computer network and
> telecommunications engineer.
> 
> I am working on innovative distribution solutions for the virtualization
> infrastructure of the network functions and more specifically on the
> Openstack Tricircle solution, which is why I have joined your community to
> participate in your discussions and learn from your advice.
> 
> Indeed, I am trying to install Tricircle on a single node by following this
> documentation
> “https://docs.openstack.org/developer/tricircle/installation-guide.html#single-pod-installation-with-devstack”.
> I managed to install Devstack without any

[openstack-dev] [tripleo] Implementing container healthchecks

2017-07-13 Thread Lars Kellogg-Stedman
We [1] have started work on implementing support for
https://blueprints.launchpad.net/tripleo/+spec/container-healthchecks in
tripleo-common.  I would like to describe the approach we're taking in the
short term, as well as explore some ideas for longer-term implementations.

## Our current approach

We will be implementing service health checks in the 'healthcheck'
directory of tripleo-common.  Once the checks are merged and available in
distribution packages, we will then
modify container-images/tripleo_kolla_template_overrides.j2 to activate
specific checks for containerized services.  A typical modification would
look something like:

  {% block nova_api_footer %}
  RUN install -D -m 755 /usr/share/tripleo-common/healthcheck/nova-api /openstack/healthcheck
  HEALTHCHECK CMD /openstack/healthcheck
  {% endblock %}

That copies the specific healthcheck command to /openstack/healthcheck, and
then configures docker to run that check using the HEALTHCHECK directive.

This approach has the advantage of keeping all the development work within
tripleo-common for now.

If you are unfamiliar with Docker's HEALTHCHECK feature:

Docker will run this command periodically inside the container, and will
expose the status reported by the script (0 - healthy, 1 - unhealthy) via
the Docker API.  This is visible in the output of 'docker ps', for example:

  $ docker ps
  ... STATUS ...
  Up 8 minutes (healthy)

Details at:
https://docs.docker.com/engine/reference/builder/#healthcheck

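To make the contract concrete: a check only needs to exit 0 when the service
is healthy and non-zero when it is not. Below is a minimal sketch in Python,
purely to illustrate the exit-code contract (the real checks in the reviews
below are authoritative, and the assumption that nova-api answers HTTP on
localhost:8774 is mine):

  #!/usr/bin/env python
  # Illustrative only: exit 0 if the service answers over HTTP, 1 otherwise.
  # The port (8774 for nova-api) is an assumption for this sketch.
  import sys
  import urllib2

  try:
      urllib2.urlopen('http://localhost:8774/', timeout=10)
  except urllib2.HTTPError:
      sys.exit(0)  # the API answered (e.g. 401/404), so the process is serving
  except Exception:
      sys.exit(1)
  sys.exit(0)

The full health state, including output from the last few probes, is also
visible via 'docker inspect'.
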
## Looking to the future

Our initial thought was that moving forward, these checks could be
implemented through the Kolla project.  However, Martin André suggested
(correctly) that these checks would also be of interest outside of Kolla.
The thought right now is that at some point in the future, we would split
the checks out into a separate project to make them more generally
consumable.

## Reviews

You can see the proposed changes here:

- https://review.openstack.org/#/q/topic:bp/container-healthchecks+is:open

Specifically, the initial framework is provided in:

- https://review.openstack.org/#/c/483081/

And an initial set of checks is in:

- https://review.openstack.org/#/c/483104/

Please feel free to review and comment. While we are reasonably happy with the
solution proposed in this email, we are open to improvements.  Thanks for
your input!

[1] Initially, Dan Prince, Ian Main, Martin Mágr, Lars Kellogg-Stedman

-- 
Lars Kellogg-Stedman 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the Vitrage Graph

2017-07-13 Thread Afek, Ifat (Nokia - IL/Kfar Sava)
Hi Volodymyr,

I believe that this change[1] will fix your problem.

[1] https://review.openstack.org/#/c/482212/

Best Regards,
Ifat.

From: "Mytnyk, VolodymyrX" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, 11 July 2017 at 12:48
To: "OpenStack Development Mailing List (not for usage questions)" 

Cc: "Tahhan, Maryam" 
Subject: Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the 
Vitrage Graph

Hi Ifat,

Thank you for investigating the issue.

The port name is unique on the graph.  The ovs port name in collectd ovs_events 
plugin is identified by the ‘plugin_instance’ notification field.

Thanks and Regards,
Volodymyr

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.a...@nokia.com]
Sent: Tuesday, July 11, 2017 12:00 PM
To: OpenStack Development Mailing List (not for usage questions) 

Cc: Tahhan, Maryam 
Subject: Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the 
Vitrage Graph

Hi Volodymyr,

I’m working on this issue.
One question: is the port name, as defined by ‘plugin_instance’, supposed to be 
unique in the graph? If not, then how do you uniquely identify the port (in 
collectd)?

Thanks,
Ifat.

From: "Mytnyk, VolodymyrX" 
mailto:volodymyrx.myt...@intel.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Friday, 7 July 2017 at 13:27
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Cc: "Tahhan, Maryam" mailto:maryam.tah...@intel.com>>
Subject: Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the 
Vitrage Graph

Hi Ifat,

I’ve tested the template file modified by you with enabled debug for the 
Vitrage graph. See all Vitrage logs in the attachments.

Thank you!

Best Regards,
Volodymyr

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.a...@nokia.com]
Sent: Friday, July 7, 2017 12:42 PM
To: OpenStack Development Mailing List (not for usage questions) 
mailto:openstack-dev@lists.openstack.org>>
Cc: Tahhan, Maryam mailto:maryam.tah...@intel.com>>
Subject: Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the 
Vitrage Graph

Hi Volodymyr,

Can you please enable debug information in vitrage.conf, restart vitrage-graph, 
and send me the vitrage-graph.log file (in the time where the alarm is raised)? 
I’ll try to understand why the alarm is not connected to the port. The 
definitions in collectd_conf.yaml seem correct.

I did find some issues with the template file – in the alarm definition, you 
specified the name of the resource instead of the name/rawtext of the alarm. 
Also, the name of the port was missing in the port definition. See the attached 
template (which I haven’t checked, but I believe should work). In any case, 
this will not fix the problem with the alarm being connected to the resource; 
it is relevant only for the next phase after we fix the first problem.

Best Regards,
Ifat.

From: "Mytnyk, VolodymyrX" 
mailto:volodymyrx.myt...@intel.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Friday, 7 July 2017 at 10:35
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Cc: "Tahhan, Maryam" mailto:maryam.tah...@intel.com>>
Subject: Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the 
Vitrage Graph

Hi Ifat,

Sorry, I forgot to attach the topology dump. Attaching it now.

Also, I’ve checked the topology, and looks like there is no relationship 
between neutron port and the alarm for some reason.

Thanks and Regards,
Volodymyr

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.a...@nokia.com]
Sent: Friday, July 7, 2017 12:15 AM
To: OpenStack Development Mailing List (not for usage questions) 
mailto:openstack-dev@lists.openstack.org>>
Cc: Tahhan, Maryam mailto:maryam.tah...@intel.com>>
Subject: Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the 
Vitrage Graph

Hi Volodymyr,

Seems like the problem is that the alarm does not get connected to the port. In 
your collectd_conf.yaml, you should write:

collectd:
- collectd_host: silpixa00399503/ovs_events/qvo818dd156-be  # collectd resource name
  type: neutron.port
  name: qvo818dd156  # openstack neutron port name

By doing this, you cause any Collectd alarm that is raised on the Collectd 
source named silpixa00399503/ovs_events/qvo818dd156-be to be connected in 
Vitrage to a resource of type neutron.port with name qvo818dd156.

Try to look in the output of ‘vitrage topology show’ (you did not attach it to 
the mail) and see the exact details of the port.

Let me know if it helped,
Ifat.

From: "Mytnyk, VolodymyrX" 
mailto:volodymyrx.myt...@intel.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, 6 Jul

Re: [openstack-dev] How should we go about removing legacy VIF types in Queens?

2017-07-13 Thread Kevin Benton
Some of the stuff like '802.1qbh' isn't particularly vendor specific so I'm
not sure who will host it and a repo just for that seems like a bit much.
Should we just bite the bullet and convert them in the nova tree or put
them in os-vif?


On Thu, Jul 13, 2017 at 7:26 AM, Stephen Finucane 
wrote:

> os-vif has been integrated into nova since the newton cycle. With the
> integration of os-vif, the expectation is that all the old, non-os-vif
> plugging/unplugging code found in [1] will be replaced by code that
> harnesses
> os-vif plugins [2]. This has happened for a few of the VIF types, and newer
> VIFs are being added in this manner [3]. However, there are quite a few
> VIFs
> that are still using the legacy path, and I think it's about time we
> started
> moving things forward. Doing so allows us to continue to progress on
> passing
> os-vif objects from neutron and remove the large swathes of legacy code
> still
> found in nova.
>
> I've opened a bug against networking-bigswitch [4] for one of these VIF
> types,
> IVS, and I'm thinking I'll do the same for a lot of the other VIF types
> where I
> can find definite vendors. Is there anything else we can do though? At some
> point we're going to have to just start deleting code and I'd like to avoid
> leaving operators in the lurch.
>
> Cheers,
> Stephen
>
> [1] https://github.com/openstack/nova/blob/6205a3f8c/nova/virt/libvirt/vif.py#L599-L764
> [2] https://github.com/openstack/nova/blob/6205a3f8c/nova/network/os_vif_util.py#L346-L403
> [3] https://github.com/Juniper/contrail-nova-vif-driver
> [4] https://bugs.launchpad.net/networking-bigswitch/+bug/1704129
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Announcing openstack-sigs ML

2017-07-13 Thread Melvin Hillsman
Hi everyone,

Earlier this year we discussed areas we could collectively come together as
a community and be better. One area identified was, simply put, getting
unanswered requirements answered [1]. After some small and larger
conversations the goal of adopting the SIG model, as touted by other
OpenSource communities like k8s and Fedora for example, was introduced at
the Forum[2].

With this goal in mind, we understand that it is not the single perfect
solution, but it is one step in the right direction. Rather than wait for
ideas and implementation details to coalesce, delaying the opportunity to
learn, we decided to take a couple of actions:


   1. Start the openstack-sigs mailing list

   2. Propose creation of Meta SIG (
   http://lists.openstack.org/pipermail/openstack-sigs/2017-July/00.html
   ) *be sure to read this*


An initial thread[3] surrounding the effectiveness and implementation of
SIGs has been started, and folks should continue the conversation by using
the [meta] tag on the openstack-sigs mailing list. We look forward to
lively and above all practical and applicable discussions taking place
within SIGs which result in unanswered requirements being answered.

Also, we would like to encourage folks to use the [meta] tag in emails to
the mailing list to discuss any successes, failures, advantages,
disadvantages, improvements, suggestions, etc.

[1]
http://superuser.openstack.org/articles/community-leadership-charts-course-openstack/
[2] https://etherpad.openstack.org/p/BOS-forum-unanswered-requirements
[3] http://lists.openstack.org/pipermail/openstack-dev/2017-June/118723.html

-- 
Melvin Hillsman and Thierry Carrez
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glare][TC] Application for inclusion of Glare in the list of official projects

2017-07-13 Thread Erno Kuvaja
On Wed, Jul 12, 2017 at 1:21 AM, Monty Taylor  wrote:
> On 07/11/2017 06:47 AM, Flavio Percoco wrote:
>>
>> On 11/07/17 14:20 +0300, Mikhail Fedosin wrote:
>>>
>>> On Tue, Jul 11, 2017 at 1:43 AM, Monty Taylor 
>>> wrote:
>>>
 On 07/10/2017 04:31 PM, Mikhail Fedosin wrote:
>
> Third, all these changes can be hidden in Glare client. So if we try a
> little, we can achieve 100% compatibility there, and other projects can
> use
> Glare client instead of Glance's without even noticing the differences.
>

 I think we should definitely not do this... I think instead, if we
 decide
 to go down this road, we want to look at adding an endpoint to glare
 that
 speaks glance v2 API so that users can have a transition period while
 libraries and tools get updated to understand the artifacts API.
>>>
>>>
>>>
>>> This is optional and depends on the project developers. For my part, I
>>> can
>>> only offer the most compatible client, so that the Glance module can be
>>> simply copied into the new Glare module.
>>
>>
>> Unfortunately, adding this sort of logic to the client is almost never the
>> right
>> choice. To be completely honest, I'm not even convinced having a
>> Glance-like API
>> in Glare is the right thing to do. As soon as that API hits the codebase,
>> you'll
>> have to maintain it.
>>
>> Anything that delays the transition to the new thing is providing a fake
>> bridge
>> to the users. It's a bridge that will be blown-up eventually.
>>
>> To make a hypothetical transition from Glance to Glare works smoothly, we
>> should
>> first figure out how to migrate the database (assuming this has not been
>> done
>> yet), how to migrate the images, etc. Only when these things have been
>> figured
>> out, I'd start worrying about what compatibility layer we want to provide.
>> The
>> answer could also be: "Hey, we're sorry but, the best thing you can do is
>> to
>> migrate your code base as soon as possible".
>
>
> I think this is a deal breaker. The problem is - if glare doesn't provide a
> v2 compat layer, then a deployer is going to have to run glance AND glare at
> the same time and we'll have to make sure both glance and glare can write to
> the same backend.
>
> The reason is that with our major version bumps both versions co-exist for a
> period of time which allows consumers to gracefully start consuming the
> nicer and newer api while not being immediately broken when the old api
> isn't there.
>
> What we'd be looking at is:
>
> * a glare service that runs two endpoints - an /image endpoint and an
> /artifact endpoint - and that registers the /image endpoint with the catalog
> as the 'image' service_type and the /artifact endpoint with the catalog as
> the 'artifact' service_type followed by a deprecation period of the image
> endpoint from the bazillion things that use it and a migration to the
> artifact service.
>
> OR
>
> First - immediately bump the glare api version to 3.0. This will affect some
> glare users, but given the relative numbers of glance v. glare users, it may
> be the right choice.
>
> Run a single set of versioned endpoints - no /v1, /v2 has /image at the root
> and /v3 has /artifact at the root. Register that endpoint with the catalog
> as both artifact and image.
>
> That means service and version discovery will find the /v2 endpoint of the
> glare service if someone says "I want 'image' api 'v2'". It's already fair
> game for a cloud to run without v1 - so that's not a problem. (This, btw, is
> the reason glare has to bump its api to v3 - if it still had a v1 in its
> version discovery document, glance users would potentially find that but it
> would not be a v1 of the image API)
>
> In both cases, /v2/images needs to be the same as glance /v2/images. If both
> are running side-by-side, which is how we normally do major version bumps,
> then client tools and libraries can use the normal version discovery process
> to discover that the cloud has the new /v3 version of the api with
> service-type of 'image', and they can decide if they want to use it or not.
>
>
> Yes - this is going to provide a pile of suck for the glare team, because
> they're going to have to maintain an API mapping layer, and they're going to
> have to maintain it for a full glance v2 api deprecation period. Because
> glance v2 is in DefCore, that is longer than a normal deprecation period -
> but that's life.

Just to clarify something here. These plans are still not aligned with
current plans in the Glance community. So we have no intention to
deprecate the Images API v2 nor stop supporting it. If Glare wants to
maintain functional parity with Images API v2 and we get every single
deployment/project consuming Glance to move to Glare, then we can talk
again but the foreseeable future Glance is moving forward with the
Images API v2 maintained and supported (and developed forward within
our resources).

For a while this spring it looked like worst case scenario we need to
p

[openstack-dev] How should we go about removing legacy VIF types in Queens?

2017-07-13 Thread Stephen Finucane
os-vif has been integrated into nova since the newton cycle. With the
integration of os-vif, the expectation is that all the old, non-os-vif
plugging/unplugging code found in [1] will be replaced by code that harnesses
os-vif plugins [2]. This has happened for a few of the VIF types, and newer
VIFs are being added in this manner [3]. However, there are quite a few VIFs
that are still using the legacy path, and I think it's about time we started
moving things forward. Doing so allows us to continue to progress on passing
os-vif objects from neutron and remove the large swathes of legacy code still
found in nova. 

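For anyone picking up one of these conversions, the work is essentially
writing an os-vif plugin. A hedged skeleton follows (written from memory of
the os_vif plugin interface, so treat the names as approximate and see the
contrail driver in [3] for a real example):

  # Hedged skeleton of an os-vif plugin; names approximate, see [3].
  from os_vif import objects
  from os_vif import plugin


  class MyVIFPlugin(plugin.PluginBase):
      def describe(self):
          # Advertise which VIF object types/versions this plugin handles.
          return objects.host_info.HostPluginInfo(
              plugin_name='myvif',
              vif_info=[objects.host_info.HostVIFInfo(
                  vif_object_name=objects.vif.VIFGeneric.__name__,
                  min_version='1.0',
                  max_version='1.0')])

      def plug(self, vif, instance_info):
          """Create and wire up the host-side device for the VIF."""

      def unplug(self, vif, instance_info):
          """Tear the host-side device down again."""

If memory serves, the plugin is then registered under the 'os_vif' entry
point namespace so nova can load it.
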
I've opened a bug against networking-bigswitch [4] for one of these VIF types,
IVS, and I'm thinking I'll do the same for a lot of the other VIF types where I
can find definite vendors. Is there anything else we can do though? At some
point we're going to have to just start deleting code and I'd like to avoid
leaving operators in the lurch.

Cheers,
Stephen

[1] https://github.com/openstack/nova/blob/6205a3f8c/nova/virt/libvirt/vif.py#L599-L764
[2] https://github.com/openstack/nova/blob/6205a3f8c/nova/network/os_vif_util.py#L346-L403
[3] https://github.com/Juniper/contrail-nova-vif-driver
[4] https://bugs.launchpad.net/networking-bigswitch/+bug/1704129

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] scenario006 conflict

2017-07-13 Thread Emilien Macchi
On Thu, Jul 13, 2017 at 1:55 AM, Derek Higgins  wrote:
> On 12 July 2017 at 22:33, Emilien Macchi  wrote:
>> On Wed, Jul 12, 2017 at 2:23 PM, Emilien Macchi  wrote:
>> [...]
>>> Derek, it seems like you want to deploy Ironic on scenario006
>>> (https://review.openstack.org/#/c/474802). I was wondering how it
>>> would work with multinode jobs.
>>
>> Derek, I also would like to point out that
>> https://review.openstack.org/#/c/474802 is missing the environment
>> file for non-containerized deployments & and also the pingtest file.
>> Just for the record, if we can have it before the job moves in gate.
>
> I knew I had left out the ping test file; this is the next step, but I
> can create a noop one for now if you'd like?

Please create a basic pingtest with common things we have in other scenarios.

> Are non-containerized deployments a requirement?

Until we stop supporting non-containerized deployments, I would say yes.

>>
>> Thanks,
>> --
>> Emilien Macchi

So if you create a libvirt domain, would it be possible to do it on
scenario004 for example and keep coverage for other services that are
already on scenario004? It would avoid consuming a scenario just for
Ironic. If not possible, then talk with Flavio, and one of you will
have to prepare scenario007 or 008, depending on where Numans is in his
progress to have OVN coverage as well.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][rally] Disabling Glance Testing in Rally gates

2017-07-13 Thread Kekane, Abhishek
Initial finding:
This appears to be related to the uwsgi socket timeout. The default value for
socket-timeout is 30 in the /etc/glance/glance-uwsgi.ini file.

For testing purposes, I tried to create a 10 GB image using the create command:
$ time glance --debug image-create --name dsl --file gentoo_root.img --disk-format iso --container-format bare

real2m48.539s
user0m4.076s
sys 0m10.012s

It failed with a “502 Bad Gateway” after 3 minutes. Then I increased the
timeout to 60, restarted the glance-api service, and ran the above command again.

$ time glance --debug image-create --name dsl --file gentoo_root.img --disk-format iso --container-format bare

After this I was able to create the image without any issue.

So another workaround is to set socket-timeout to a higher value in the
/etc/glance/glance-uwsgi.ini file.

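For reference, the change amounts to something like the following in that
file (a sketch; only the option discussed here is shown, the rest of the
generated configuration is omitted), followed by a restart of glance-api:

  [uwsgi]
  ...
  socket-timeout = 60
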
Thank you,

Abhishek Kekane

From: Kekane, Abhishek [mailto:abhishek.kek...@nttdata.com]
Sent: Thursday, July 13, 2017 2:45 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance][rally] Disabling Glance Testing in Rally 
gates

Someone has reported this issue in glance recently.
Please refer, https://bugs.launchpad.net/glance/+bug/1703856

I think it is same kind of failure which Andrey was mentioning.

As mentioned in the bug, a temporary solution is to enable the v1 api and use
the --location parameter instead of --file. But this will not be applicable
to rally scenarios, I guess.
I will spend some time to understand the failure cause.

Thank you,

Abhishek


From: Andrey Kurilin [mailto:andr.kuri...@gmail.com]
Sent: Thursday, July 13, 2017 2:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance][rally] Disabling Glance Testing in Rally 
gates

Hi Flavio,

2017-07-13 11:16 GMT+03:00 Flavio Percoco 
mailto:fla...@redhat.com>>:
On 13/07/17 00:56 -0700, Boris Pavlovic wrote:
Hi stackers,


Unfortunately, what was discussed in the other thread (the situation in glance
is critical) has happened.
Glance stopped working, and the Rally team is forced to disable testing of it
in the Rally gates.

P.S. Seems like this patch is causing the problems:
https://github.com/openstack-dev/devstack/commit/1fa653635781cd975a1031e212b35b6c38196ba4

Hey Boris,

Has this been brought up to the Glance team? Or is this email meant to do that?

Yes

http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstack-glance.2017-07-11.log.html#t2017-07-11T07:44:13
http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstack-glance.2017-07-11.log.html#t2017-07-11T22:12:47

FWIW, the switch to uwsgi is a community goal and not so much a Glance thing.
Would you mind elaborating on what exactly is failing and how the glance team
can help?
I understand that switching to uwsgi is a community goal, but we didn't have 
any problems with Glance for years until now.

As from IRC logs:
> HTTPBadGateway: 502 Bad Gateway: Bad Gateway: The proxy server received an 
> invalid: response from an upstream server.: Apache/2.4.18 (Ubuntu) Server at 
> 158.69.73.26 Port 80 (HTTP 502)

Trace:

http://paste.openstack.org/show/615234/

Flavio

--
@flaper87
Flavio Percoco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Best regards,
Andrey Kurilin.

__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.

__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Security] IRC Meeting today - 1700 UTC

2017-07-13 Thread Rob C
Just a reminder for all that we'll be having a security meeting today at
the usual time.

Meeting agenda: https://etherpad.openstack.org/p/security-agenda

Cheers
-Rob
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] out-of-tree l3 service providers

2017-07-13 Thread Takashi Yamamoto
Hi,

Today I managed to play with l3 flavors.
I wrote a crude patch to implement a midonet flavor. [1]

[1] https://review.openstack.org/#/c/483174/

The good news is that it's somehow working.

The bad news is that it has a lot of issues, as you can see in the TODO
comments in the patch.
Given these issues, I now tend to think it's cleaner to introduce an
ml2-like precommit/postcommit driver api (or its equivalent via
callbacks) rather than using the existing notifications.

What do you think?

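To make the suggestion concrete, here is a rough sketch of what such a driver
interface could look like (purely hypothetical — no such Neutron API exists
today, and the names are only by analogy with ML2 mechanism drivers):

  # Hypothetical l3 service provider driver API, by analogy with ML2.
  class L3FlavorDriver(object):
      def create_router_precommit(self, context, router):
          """Validate/prepare inside the DB transaction; raising aborts it."""

      def create_router_postcommit(self, context, router):
          """Push the new router to the backend after the DB commit."""

      # update_router_* and delete_router_* hooks would follow the same
      # pattern, mirroring what the midonet patch needs from notifications.
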
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] scheduling with custom resouce classes

2017-07-13 Thread Chris Dent

On Thu, 13 Jul 2017, Balazs Gibizer wrote:

/placement/allocation_candidates?resources=CUSTOM_MAGIC%3A512%2CMEMORY_MB%3A64%2CVCPU%3A1" 
but placement returns an empty response. Then nova scheduler falls back to 
legacy behavior [4] and places the instance without considering the custom 
resource request.


As far as I can tell at least one missing piece of the puzzle here
is that your MAGIC provider does not have the
'MISC_SHARES_VIA_AGGREGATE' trait. It's not enough for the compute
and MAGIC to be in the same aggregate, the MAGIC needs to announce
that its inventory is for sharing. The comments here have a bit more
on that:


https://github.com/openstack/nova/blob/master/nova/objects/resource_provider.py#L663-L678

It's quite likely this is not well documented yet, as this style of
declaring that something is shared was a later development. The
initial code that added the support for GET /resource_providers,
which was later reused for GET /allocation_candidates, is here:

https://review.openstack.org/#/c/460798/

The other thing to be aware of is that GET /allocation_candidates is
in flight. It should be stable on the placement service side, but the
way the data is being used on the scheduler side is undergoing
change as we speak:

https://review.openstack.org/#/c/482381/

Then I tried to connect the compute provider and the MAGIC provider to the 
same aggregate via the placement API but the above placement request still 
resulted in empty response. See my exact steps in [5].


This still needs to happen, but you also need to put the trait
mentioned above on the magic provider, the docs for that are in progress
on this review

https://review.openstack.org/#/c/474550/

and a rendered version:


http://docs-draft.openstack.org/50/474550/8/check/gate-placement-api-ref-nv/2d2a7ea//placement-api-ref/build/html/#update-resource-provider-traits

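For concreteness, associating the trait boils down to a single PUT against
the placement API. A hedged sketch with requests (TOKEN, PLACEMENT and
MAGIC_RP_UUID are placeholders, the microversion is my recollection of when
traits landed, and note that PUT replaces the provider's full trait list):

  import requests

  headers = {'x-auth-token': TOKEN,
             'openstack-api-version': 'placement 1.6'}
  # Fetch the provider to learn its current generation.
  rp = requests.get(PLACEMENT + '/resource_providers/' + MAGIC_RP_UUID,
                    headers=headers).json()
  body = {'resource_provider_generation': rp['generation'],
          'traits': ['MISC_SHARES_VIA_AGGREGATE']}
  requests.put(PLACEMENT + '/resource_providers/' + MAGIC_RP_UUID + '/traits',
               json=body, headers=headers).raise_for_status()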

Am I still missing some environment setup on my side to make it work?
Is the work in [1] incomplete?
Are the missing pieces in [2] needed to make this use case work?

If more implementation is needed then I can offer some help during Queens 
cycle.


There's definitely more to do and your help would be greatly
appreciated. It's _fantastic_ that you are experimenting with this
and sharing what's happening.

To make the above use case fully functional I realized that I need a service 
that periodically updates the placement service with the state of the MAGIC 
resource, like the resource tracker in Nova. Are there any existing plans for
creating a generic service or framework that can be used for tracking and
reporting purposes?


As you've probably discovered from your experiments with curl,
updating inventory is pretty straightforward (if you have a TOKEN)
so we decided to forego making a framework at this point. I had some
code long ago that demonstrated one way to do it, but it didn't get
any traction:

https://review.openstack.org/#/c/382613/

That tried to be a simple python script using requests that did the
bare minimum and would be amenable to cron jobs and other simple
scripts.

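By way of illustration, the bare minimum for such a cron-able updater could
look like this (a sketch under the same assumptions as above —
TOKEN/PLACEMENT/MAGIC_RP_UUID are placeholders, and measure_magic() stands in
for however the MAGIC resource is actually measured):

  import requests

  headers = {'x-auth-token': TOKEN,
             'openstack-api-version': 'placement 1.6'}
  url = PLACEMENT + '/resource_providers/' + MAGIC_RP_UUID + '/inventories'
  # Read the provider's current generation, then write the fresh inventory.
  gen = requests.get(url, headers=headers).json()['resource_provider_generation']
  payload = {'resource_provider_generation': gen,
             'inventories': {'CUSTOM_MAGIC': {'total': measure_magic()}}}
  requests.put(url, json=payload, headers=headers).raise_for_status()
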
I hope some of the above is helpful. Jay, Ed, Sylvain or Dan may come
along with additional info.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Migration from Neutron ML2OVS to OVN

2017-07-13 Thread Saravanan KR
On Tue, Jul 11, 2017 at 11:40 PM, Ben Nemec  wrote:
>
>
> On 07/11/2017 10:17 AM, Numan Siddique wrote:
>>
>> Hello Tripleo team,
>>
>> I have a few questions regarding migration from neutron ML2OVS to OVN. Below
>> are some of the requirements
>>
>>   - We want to migrate an existing deployment from Neutron default ML2OVS
>> to OVN
>>   - We are targeting this for the tripleo Queens release.
>>   - The plan is to first upgrade the tripleo deployment from Pike to
>> Queens with no changes to neutron. i.e with neutron ML2OVS. Once the upgrade
>> is done, we want to migrate to OVN.
>>   - The migration process will stop all the neutron agents, configure
>> neutron server to load OVN mechanism driver and start OVN services (with no
>> or very limited datapath downtime).
>>   - The migration would be handled by an ansible script. We have a PoC
>> ansible script which can be found here [1]
>>
>> And the questions are
>> -  (A broad question) - What is the right way to migrate and switch the
>> neutron plugin ? Can the stack upgrade handle the migration as well ?
This is going to be a broader problem, as it will also be required to migrate
ML2OVS to ODL for NFV deployments, on pretty much the same timeline.
If I understand correctly, this migration involves stopping the ML2OVS
services (like neutron-ovs-agent) and starting the corresponding new
ML2 driver (OVN or ODL), along with a few parameter additions and removals.

>> - The migration procedure should be part of tripleo ? or can it be a
>> standalone ansible script ? (I presume it should be former).
Each service has upgrade steps which can be associated via ansible
steps. But this is not a service upgrade: it disables an existing
service and enables a new one. So I think it would need an explicit
disabled service [1] to stop the existing service, and then the new
service would be enabled.

>> - If it should be part of the tripleo then what would be the command to do
>> it ? A update stack command with appropriate environment files for OVN ?
>> - In case the migration can be done  as a standalone script, how to handle
>> later updates/upgrades since tripleo wouldn't be aware of the migration ?
>
I would also discourage doing it standalone.

Another area which needs to be looked at is whether this should be associated
with the containers upgrade. Maybe OVN and ODL can be migrated as
containers only instead of baremetal by default (just a thought; it could
have implications to be worked out and discussed).

Regards,
Saravanan KR

[1] 
https://github.com/openstack/tripleo-heat-templates/tree/master/puppet/services/disabled

>
> This last point seems like the crux of the discussion here.  Sure, you can
> do all kinds of things to your cloud using standalone bits, but if any of
> them affect things tripleo manages (which this would) then you're going to
> break on the next stack update.
>
> If there are things about the migration that a stack-update can't handle,
> then the migration process would need to be twofold: 1) Run the standalone
> bits to do the migration 2) Update the tripleo configuration to match the
> migrated config so stack-updates work.
>
> This is obviously a complex and error-prone process, so I'd strongly
> encourage doing it in a tripleo-native fashion instead if at all possible.
>
>>
>>
>> Request to provide your comments so that we can move in the right
>> direction.
>>
>> [1] - https://github.com/openstack/networking-ovn/tree/master/migration
>>
>> Thanks
>> Numan
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][rally] Disabling Glance Testing in Rally gates

2017-07-13 Thread Kekane, Abhishek
Someone has reported this issue in glance recently.
Please refer, https://bugs.launchpad.net/glance/+bug/1703856

I think it is same kind of failure which Andrey was mentioning.

As mentioned in the bug, a temporary solution is to enable the v1 api and use
the --location parameter instead of --file. But this will not be applicable
to rally scenarios, I guess.
I will spend some time to understand the failure cause.

Thank you,

Abhishek


From: Andrey Kurilin [mailto:andr.kuri...@gmail.com]
Sent: Thursday, July 13, 2017 2:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance][rally] Disabling Glance Testing in Rally 
gates

Hi Flavio,

2017-07-13 11:16 GMT+03:00 Flavio Percoco 
mailto:fla...@redhat.com>>:
On 13/07/17 00:56 -0700, Boris Pavlovic wrote:
Hi stackers,


Unfortunately, what was discussed in the other thread (the situation in glance
is critical) has happened.
Glance stopped working, and the Rally team is forced to disable testing of it
in the Rally gates.

P.S. Seems like this patch is causing the problems:
https://github.com/openstack-dev/devstack/commit/1fa653635781cd975a1031e212b35b6c38196ba4

Hey Boris,

Has this been brought up to the Glance team? Or is this email meant to do that?

Yes

http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstack-glance.2017-07-11.log.html#t2017-07-11T07:44:13
http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstack-glance.2017-07-11.log.html#t2017-07-11T22:12:47

FWIW, the switch to uwsgi is a community goal and not so much a Glance thing.
Would you mind elaborating on what exactly is failing and how the glance team
can help?
I understand that switching to uwsgi is a community goal, but we didn't have 
any problems with Glance for years until now.

As from IRC logs:
> HTTPBadGateway: 502 Bad Gateway: Bad Gateway: The proxy server received an 
> invalid: response from an upstream server.: Apache/2.4.18 (Ubuntu) Server at 
> 158.69.73.26 Port 80 (HTTP 502)

Trace:

http://paste.openstack.org/show/615234/

Flavio

--
@flaper87
Flavio Percoco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Best regards,
Andrey Kurilin.

__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][rally] Disabling Glance Testing in Rally gates

2017-07-13 Thread Andrey Kurilin
Hi Flavio,

2017-07-13 11:16 GMT+03:00 Flavio Percoco :

> On 13/07/17 00:56 -0700, Boris Pavlovic wrote:
>
>> Hi stackers,
>>
>>
>> Unfortunately, what was discussed in the other thread (the situation in glance
>> is critical) has happened.
>> Glance stopped working, and the Rally team is forced to disable testing of it
>> in the Rally gates.
>>
>> P.S. Seems like this patch is causing the problems:
>> https://github.com/openstack-dev/devstack/commit/1fa653635781cd975a1031e212b35b6c38196ba4
>>
>
> Hey Boris,
>
> Has this been brought up to the Glance team? Or is this email meant to do
> that?
>
>
Yes

http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstack-glance.2017-07-11.log.html#t2017-07-11T07:44:13
http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstack-glance.2017-07-11.log.html#t2017-07-11T22:12:47


> FWIW, the switch to uwsgi is a community goal and not so much a Glance
> thing.
> Would you mind elaborating on what exactly is failing and how the glance
> team
> can help?
>
I understand that switching to uwsgi is a community goal, but we didn't
have any problems with Glance for years until now.

As from IRC logs:

> HTTPBadGateway: 502 Bad Gateway: Bad Gateway: The proxy server received
> an invalid: response from an upstream server.: Apache/2.4.18 (Ubuntu)
> Server at 158.69.73.26 Port 80 (HTTP 502)

Trace:

http://paste.openstack.org/show/615234/

Flavio
>
> --
> @flaper87
> Flavio Percoco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best regards,
Andrey Kurilin.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] scenario006 conflict

2017-07-13 Thread Derek Higgins
On 12 July 2017 at 22:33, Emilien Macchi  wrote:
> On Wed, Jul 12, 2017 at 2:23 PM, Emilien Macchi  wrote:
> [...]
>> Derek, it seems like you want to deploy Ironic on scenario006
>> (https://review.openstack.org/#/c/474802). I was wondering how it
>> would work with multinode jobs.
>
> Derek, I also would like to point out that
> https://review.openstack.org/#/c/474802 is missing the environment
> file for non-containerized deployments & and also the pingtest file.
> Just for the record, if we can have it before the job moves in gate.

I knew I had left out the ping test file; this is the next step, but I
can create a noop one for now if you'd like?

Are non-containerized deployments a requirement?

>
> Thanks,
> --
> Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] scenario006 conflict

2017-07-13 Thread Derek Higgins
On 12 July 2017 at 22:23, Emilien Macchi  wrote:
> Hey folks,
>
> Derek, it seems like you want to deploy Ironic on scenario006
> (https://review.openstack.org/#/c/474802). I was wondering how it
> would work with multinode jobs.

The idea was that we would create a libvirt domain on the overcloud
controller that Ironic could then control with VirtualBMC. But for the
moment the job only installs Ironic, and I was going to build on it
from there.

> Also, Flavio would like to test k8s on scenario006:
> https://review.openstack.org/#/c/471759/ . To avoid having too much
> scenarios and complexity, I think if ironic tests can be done on a
> 2nodes job, then we can deploy ironic on scenario004 maybe. If not,
> then please give the requirements so we can see how to structure it.

I'll take a look and see what's possible

>
> For Flavio's need, I think we need a dedicated scenario for now, since
> he's not going to deploy any OpenStack service on the overcloud for
> now, just k8s.
>
> Thanks for letting us know the plans, so we can keep the scenarios in
> good shape.
> Note: Numans also wants to test OVN and I suggested to create
> scenario007 (since we can't deploy OVN before Pike, so upgrades
> wouldn't work).
> Note2: it seems like efforts done to test complex HA architectures
> weren't finished in scenario005 - Michele: any thoughts on this one?
> should we remove it now or do we expect it working one day?
>
>
> Thanks,
> --
> Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][rally] Disabling Glance Testing in Rally gates

2017-07-13 Thread Flavio Percoco

On 13/07/17 00:56 -0700, Boris Pavlovic wrote:

Hi stackers,


Unfortunately, what was discussed in the other thread (the situation in glance
is critical) has happened.
Glance stopped working, and the Rally team is forced to disable testing of it
in the Rally gates.

P.S. Seems like this patch is causing the problems:
https://github.com/openstack-dev/devstack/commit/1fa653635781cd975a1031e212b35b6c38196ba4


Hey Boris,

Has this been brought up to the Glance team? Or is this email meant to do that?

FWIW, the switch to uwsgi is a community goal and not so much a Glance thing.
Would you mind elaborating on what exactly is failing and how the glance team
can help?

Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance][rally] Disabling Glance Testing in Rally gates

2017-07-13 Thread Boris Pavlovic
Hi stackers,


Unfortunately, what was discussed in the other thread (the situation in glance
is critical) has happened.
Glance stopped working, and the Rally team is forced to disable testing of it
in the Rally gates.

P.S. Seems like this patch is causing the problems:
https://github.com/openstack-dev/devstack/commit/1fa653635781cd975a1031e212b35b6c38196ba4

Best regards,
Boris Pavlovic
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev