Re: [openstack-dev] [Zun] Propose addition of Zun core team and removal notice

2017-06-19 Thread Pradeep Singh
+1 from me,
Thanks Shunli for your great work :)

On Tue, Jun 20, 2017 at 10:02 AM, Hongbin Lu  wrote:

> Hi all,
>
>
>
> I would like to propose the following change to the Zun core team:
>
>
>
> + Shunli Zhou (shunliz)
>
>
>
> Shunli has been contributing to Zun for a while and has done a lot of work. He
> has completed the BP for supporting resource claims and is close to finishing
> the filter scheduler BP. He has shown a good understanding of Zun's code
> base and expertise on other OpenStack projects. The quantity [1] and
> quality of his submitted code also show his qualification. Therefore, I
> think he will be a good addition to the core team.
>
>
>
> In addition, I have a removal notice. Davanum Srinivas (Dims) and Yanyan
> Hu requested to be removed from the core team. Dims has been helping us
> since the inception of the project. I have treated him as a mentor, and his
> guidance has always been helpful for the whole team. As the project becomes
> mature and stable, I agree with him that it is time to relieve him of the
> core reviewer responsibility because he has many other important
> responsibilities in the OpenStack community. Yanyan is leaving because
> he has been relocated and is now focused on an area outside OpenStack. I would
> like to take this chance to thank Dims and Yanyan for their contribution to Zun.
>
>
>
> Core reviewers, please cast your vote on this proposal.
>
>
>
> Best regards,
>
> Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Zun] Propose addition of Zun core team and removal notice

2017-06-19 Thread Hongbin Lu
Hi all,

I would like to propose the following change to the Zun core team:

+ Shunli Zhou (shunliz)

Shunli has been contributing to Zun for a while and has done a lot of work. He has
completed the BP for supporting resource claims and is close to finishing the
filter scheduler BP. He has shown a good understanding of Zun's code base and
expertise on other OpenStack projects. The quantity [1] and quality of his
submitted code also show his qualification. Therefore, I think he will be a
good addition to the core team.

In addition, I have a removal notice. Davanum Srinivas (Dims) and Yanyan Hu
requested to be removed from the core team. Dims has been helping us since the
inception of the project. I have treated him as a mentor, and his guidance has
always been helpful for the whole team. As the project becomes mature and stable,
I agree with him that it is time to relieve him of the core reviewer responsibility
because he has many other important responsibilities in the OpenStack community.
Yanyan is leaving because he has been relocated and is now focused on an area
outside OpenStack. I would like to take this chance to thank Dims and Yanyan for
their contribution to Zun.

Core reviewers, please cast your vote on this proposal.

Best regards,
Hongbin



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][tricircle] CellsV2 in Pike?

2017-06-19 Thread joehuang
Thank you very much, Matt, that's great news.

Best Regards
Chaoyi Huang (joehuang)


From: Matt Riedemann [mriede...@gmail.com]
Sent: 20 June 2017 9:29
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova][tricircle] CellsV2 in Pike?

On 6/19/2017 8:02 PM, joehuang wrote:
> Hello,
>
> In May, Tricircle did some work to make Nova cells V2 + Neutron +
> Tricircle work together [1]: each cell will have a corresponding local
> Neutron with the Tricircle local plugin installed, and one central Neutron
> server works together with the Nova API server, where the Tricircle central
> plugin is installed.
>
> We would like to know how far multi-cell support for cells V2 will go in the
> Pike release, so that Tricircle can do more verification of this
> deployment option.
>
> [1]http://lists.openstack.org/pipermail/openstack-dev/2017-May/117599.html
>
> Best Regards
> Chaoyi Huang (joehuang)
>
>

Hi Joe,

Tempest is passing on this devstack change [1] which enables a
multi-cell environment. We're still finding some random things that need
to be aware of a multi-cell deployment and are working through those,
but at this point we expect to be able to declare support for multiple
cells with cells v2 in Pike.

[1] https://review.openstack.org/#/c/436094/

--

Thanks,

Matt


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Simple question about sorting CPU topologies

2017-06-19 Thread Zhenyu Zheng
Sorry, the mail was sent accidentally by mis-typing ...

My question is, what is the benefit of the above preference?

BR,
Kevin

On Tue, Jun 20, 2017 at 10:43 AM, Zhenyu Zheng 
wrote:

> Hi,
>
> In https://github.com/openstack/nova/blob/master/
> nova/virt/hardware.py#L396 we calculate every possible CPU topology
> and sort them by:
>
>     # We want to
>     # - Minimize threads (ie larger sockets * cores is best)
>     # - Prefer sockets over cores
>     possible = sorted(possible, reverse=True,
>                       key=lambda x: (x.sockets * x.cores,
>                                      x.sockets,
>                                      x.threads))
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Simple question about sorting CPU topologies

2017-06-19 Thread Zhenyu Zheng
Hi,

In https://github.com/openstack/nova/blob/master/nova/virt/hardware.py#L396
we calculate every possible CPU topology and sort them by:

    # We want to
    # - Minimize threads (ie larger sockets * cores is best)
    # - Prefer sockets over cores
    possible = sorted(possible, reverse=True,
                      key=lambda x: (x.sockets * x.cores,
                                     x.sockets,
                                     x.threads))
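
To make the effect of that sort key concrete, here is a small self-contained
illustration of my own (not nova code): it applies the same key to every
possible topology of a hypothetical 8-vCPU flavor and shows which ones come
out on top.

    # Illustration only: apply the same sort key to an 8-vCPU example.
    import collections
    import itertools

    VirtCPUTopology = collections.namedtuple('VirtCPUTopology',
                                             ['sockets', 'cores', 'threads'])

    # All factorizations of 8 vCPUs into sockets * cores * threads.
    possible = [VirtCPUTopology(s, c, t)
                for s, c, t in itertools.product(range(1, 9), repeat=3)
                if s * c * t == 8]

    possible = sorted(possible, reverse=True,
                      key=lambda x: (x.sockets * x.cores,
                                     x.sockets,
                                     x.threads))

    print(possible[:4])
    # Single-threaded topologies win first (they maximize sockets * cores),
    # and among those, more sockets beat more cores:
    #   (sockets=8, cores=1, threads=1), (4, 2, 1), (2, 4, 1), (1, 8, 1)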
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-19 Thread Fei Long Wang


On 20/06/17 12:56, Curtis wrote:
> On Sun, Jun 18, 2017 at 5:35 AM, Amrith Kumar  wrote:
>> Trove has evolved rapidly over the past several years, since integration in
>> IceHouse when it only supported single instances of a few databases. Today
>> it supports a dozen databases including clusters and replication.
>>
>> The user survey [1] indicates that while there is strong interest in the
>> project, there are few large production deployments that are known of (by
>> the development team).
>>
>> Recent changes in the OpenStack community at large (company realignments,
>> acquisitions, layoffs) and the Trove community in particular, coupled with a
>> mounting burden of technical debt have prompted me to make this proposal to
>> re-architect Trove.
>>
>> This email summarizes several of the issues that face the project, both
>> structurally and architecturally. This email does not claim to include a
>> detailed specification for what the new Trove would look like, merely the
>> recommendation that the community should come together and develop one so
>> that the project can be sustainable and useful to those who wish to use it
>> in the future.
>>
>> TL;DR
>>
>> Trove, with support for a dozen or so databases today, finds itself in a
>> bind because there are few developers, and a code-base with a significant
>> amount of technical debt.
>>
>> Some architectural choices which the team made over the years have
>> consequences which make the project less than ideal for deployers.
>>
>> Given that there are no major production deployments of Trove at present,
>> this provides us an opportunity to reset the project, learn from our v1 and
>> come up with a strong v2.
>>
>> An important aspect of making this proposal work is that we seek to
>> eliminate the effort (planning, and coding) involved in migrating existing
>> Trove v1 deployments to the proposed Trove v2. Effectively, with work
>> beginning on Trove v2 as proposed here, Trove v1 as released with Pike will
>> be marked as deprecated and users will have to migrate to Trove v2 when it
>> becomes available.
>>
>> While I would very much like to continue to support the users on Trove v1
>> through this transition, the simple fact is that absent community
>> participation this will be impossible. Furthermore, given that there are no
>> production deployments of Trove at this time, it seems pointless to build
>> that upgrade path from Trove v1 to Trove v2; it would be the proverbial
>> bridge from nowhere.
>>
>> This (previous) statement is, I realize, contentious. There are those who
>> have told me that an upgrade path must be provided, and there are those who
>> have told me of unnamed deployments of Trove that would suffer. To this, all
>> I can say is that if an upgrade path is of value to you, then please commit
>> the development resources to participate in the community to make that
>> possible. But equally, preventing a v2 of Trove or delaying it will only
>> make the v1 that we have today less valuable.
>>
>> We have learned a lot from v1, and the hope is that we can address that in
>> v2. Some of the more significant things that I have learned are:
>>
>> - We should adopt a versioned front-end API from the very beginning; making
>> the REST API versioned is not a ‘v2 feature’
>>
>> - A guest agent running on a tenant instance, with connectivity to a shared
>> management message bus is a security loophole; encrypting traffic,
>> per-tenant-passwords, and any other scheme is merely lipstick on a security
>> hole
>>
>> - Reliance on Nova for compute resources is fine, but dependence on Nova VM
>> specific capabilities (like instance rebuild) is not; it makes things like
>> containers or bare-metal second class citizens
>>
>> - A fair portion of what Trove does is resource orchestration; don’t
>> reinvent the wheel, there’s Heat for that. Admittedly, Heat wasn’t as far
>> along when Trove got started but that’s not the case today and we have an
>> opportunity to fix that now
>>
>> - A similarly significant portion of what Trove does is to implement a
>> state-machine that will perform specific workflows involved in implementing
>> database specific operations. This makes the Trove taskmanager a stateful
>> entity. Some of the operations could take a fair amount of time. This is a
>> serious architectural flaw
>>
>> - Tenants should not ever be able to directly interact with the underlying
>> storage and compute used by database instances; that should be the default
>> configuration, not an untested deployment alternative
>>
> As an operator I wouldn't run Trove as it is, unless I absolutely had to.
>
> I think it is a good idea to reboot the project. I really think the
> concept of "service VMs" should be a thing. I'm not sure where the
> OpenStack community has landed on that, my fault for not paying close
> attention, but we should be able to create VMs for a tenant that are
> not managed by the tenant but that could be 

Re: [openstack-dev] [nova][tricircle] CellsV2 in Pike?

2017-06-19 Thread Matt Riedemann

On 6/19/2017 8:02 PM, joehuang wrote:

Hello,

In May, Tricircle did some work to make Nova cells V2 + Neutron +
Tricircle work together [1]: each cell will have a corresponding local
Neutron with the Tricircle local plugin installed, and one central Neutron
server works together with the Nova API server, where the Tricircle central
plugin is installed.


We would like to know how far multi-cell support for cells V2 will go in the
Pike release, so that Tricircle can do more verification of this
deployment option.


[1]http://lists.openstack.org/pipermail/openstack-dev/2017-May/117599.html

Best Regards
Chaoyi Huang (joehuang)



Hi Joe,

Tempest is passing on this devstack change [1] which enables a 
multi-cell environment. We're still finding some random things that need 
to be aware of a multi-cell deployment and are working through those, 
but at this point we expect to be able to declare support for multiple
cells with cells v2 in Pike.


[1] https://review.openstack.org/#/c/436094/

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][placement] Trying to understand the proposed direction

2017-06-19 Thread Boris Pavlovic
Hi,

Does this look too complicated and a bit over-designed?

For example, why can't we store all of the data in the memory of a single
Python application with a simple REST API and a simple mechanism for
filtering plugins? There is basically no problem with storing it all on a
single host.

Even if we have 100k hosts and every host needs about 10KB, that is only
about 1GB of RAM (I could run that on my phone).

There are easy ways to copy the state across different instances (sharing
updates).

And I thought that the Placement project was going to be exactly that: a
small, simple, centralized app for collecting all resource information and
doing very simple and easy placement selection...
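
Purely to illustrate the kind of design described above (all names and the
filter mechanism are made up for this sketch; this is not an actual Placement
or scheduler API), such a single-process, in-memory service could look roughly
like this:

    # Illustrative sketch only: an in-memory host store with pluggable filters.
    class HostState(object):
        def __init__(self, name, free_ram_mb, free_vcpus):
            self.name = name
            self.free_ram_mb = free_ram_mb
            self.free_vcpus = free_vcpus

    def ram_filter(host, request):
        return host.free_ram_mb >= request['ram_mb']

    def vcpu_filter(host, request):
        return host.free_vcpus >= request['vcpus']

    class InMemoryPlacement(object):
        """Keeps all host state in one process and filters it per request."""

        def __init__(self, filters):
            self.hosts = {}          # name -> HostState, roughly 10KB each
            self.filters = filters   # the plugin point

        def update(self, host):
            self.hosts[host.name] = host

        def select(self, request):
            return [h for h in self.hosts.values()
                    if all(f(h, request) for f in self.filters)]

    placement = InMemoryPlacement([ram_filter, vcpu_filter])
    placement.update(HostState('node1', free_ram_mb=2048, free_vcpus=8))
    placement.update(HostState('node2', free_ram_mb=512, free_vcpus=2))
    print([h.name for h in placement.select({'ram_mb': 1024, 'vcpus': 4})])
    # -> ['node1']

Copying state across instances, as mentioned, would then just be a matter of
sharing those update() calls.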


Best regards,
Boris Pavlovic

On Mon, Jun 19, 2017 at 5:05 PM, Edward Leafe  wrote:

> On Jun 19, 2017, at 5:27 PM, Jay Pipes  wrote:
>
>
> It was from the straw man example. Replacing the $FOO_UUID with UUIDs, and
> then stripping out all whitespace resulted in about 1500 bytes. Your
> example, with whitespace included, is 1600 bytes.
>
>
> It was the "per compute host" that I objected to.
>
>
> I guess it would have helped to see an example of the data returned for
> multiple compute nodes. The straw man example was for a single compute node
> with SR-IOV, NUMA and shared storage. There was no indication how multiple
> hosts meeting the requested resources would be returned.
>
> -- Ed Leafe
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][tricircle] CellsV2 in Pike?

2017-06-19 Thread joehuang
Hello,

In May, Tricircle did some work to make Nova cells V2 + Neutron +
Tricircle work together [1]: each cell will have a corresponding local Neutron
with the Tricircle local plugin installed, and one central Neutron server works
together with the Nova API server, where the Tricircle central plugin is installed.

We would like to know how far multi-cell support for cells V2 will go in the Pike
release, so that Tricircle can do more verification of this deployment option.

[1]http://lists.openstack.org/pipermail/openstack-dev/2017-May/117599.html

Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-19 Thread Curtis
On Sun, Jun 18, 2017 at 5:35 AM, Amrith Kumar  wrote:
> Trove has evolved rapidly over the past several years, since integration in
> IceHouse when it only supported single instances of a few databases. Today
> it supports a dozen databases including clusters and replication.
>
> The user survey [1] indicates that while there is strong interest in the
> project, there are few large production deployments that are known of (by
> the development team).
>
> Recent changes in the OpenStack community at large (company realignments,
> acquisitions, layoffs) and the Trove community in particular, coupled with a
> mounting burden of technical debt have prompted me to make this proposal to
> re-architect Trove.
>
> This email summarizes several of the issues that face the project, both
> structurally and architecturally. This email does not claim to include a
> detailed specification for what the new Trove would look like, merely the
> recommendation that the community should come together and develop one so
> that the project can be sustainable and useful to those who wish to use it
> in the future.
>
> TL;DR
>
> Trove, with support for a dozen or so databases today, finds itself in a
> bind because there are few developers, and a code-base with a significant
> amount of technical debt.
>
> Some architectural choices which the team made over the years have
> consequences which make the project less than ideal for deployers.
>
> Given that there are no major production deployments of Trove at present,
> this provides us an opportunity to reset the project, learn from our v1 and
> come up with a strong v2.
>
> An important aspect of making this proposal work is that we seek to
> eliminate the effort (planning, and coding) involved in migrating existing
> Trove v1 deployments to the proposed Trove v2. Effectively, with work
> beginning on Trove v2 as proposed here, Trove v1 as released with Pike will
> be marked as deprecated and users will have to migrate to Trove v2 when it
> becomes available.
>
> While I would very much like to continue to support the users on Trove v1
> through this transition, the simple fact is that absent community
> participation this will be impossible. Furthermore, given that there are no
> production deployments of Trove at this time, it seems pointless to build
> that upgrade path from Trove v1 to Trove v2; it would be the proverbial
> bridge from nowhere.
>
> This (previous) statement is, I realize, contentious. There are those who
> have told me that an upgrade path must be provided, and there are those who
> have told me of unnamed deployments of Trove that would suffer. To this, all
> I can say is that if an upgrade path is of value to you, then please commit
> the development resources to participate in the community to make that
> possible. But equally, preventing a v2 of Trove or delaying it will only
> make the v1 that we have today less valuable.
>
> We have learned a lot from v1, and the hope is that we can address that in
> v2. Some of the more significant things that I have learned are:
>
> - We should adopt a versioned front-end API from the very beginning; making
> the REST API versioned is not a ‘v2 feature’
>
> - A guest agent running on a tenant instance, with connectivity to a shared
> management message bus is a security loophole; encrypting traffic,
> per-tenant-passwords, and any other scheme is merely lipstick on a security
> hole
>
> - Reliance on Nova for compute resources is fine, but dependence on Nova VM
> specific capabilities (like instance rebuild) is not; it makes things like
> containers or bare-metal second class citizens
>
> - A fair portion of what Trove does is resource orchestration; don’t
> reinvent the wheel, there’s Heat for that. Admittedly, Heat wasn’t as far
> along when Trove got started but that’s not the case today and we have an
> opportunity to fix that now
>
> - A similarly significant portion of what Trove does is to implement a
> state-machine that will perform specific workflows involved in implementing
> database specific operations. This makes the Trove taskmanager a stateful
> entity. Some of the operations could take a fair amount of time. This is a
> serious architectural flaw
>
> - Tenants should not ever be able to directly interact with the underlying
> storage and compute used by database instances; that should be the default
> configuration, not an untested deployment alternative
>

As an operator I wouldn't run Trove as it is, unless I absolutely had to.

I think it is a good idea to reboot the project. I really think the
concept of "service VMs" should be a thing. I'm not sure where the
OpenStack community has landed on that, my fault for not paying close
attention, but we should be able to create VMs for a tenant that are
not managed by the tenant but that could be billed to them in some
fashion. At least that's my opinion.

> - The CI should test all databases that are considered to be ‘supported’

Re: [openstack-dev] [devstack] etcd v3.2.0?

2017-06-19 Thread Tony Breeds
On Mon, Jun 19, 2017 at 08:17:53AM -0400, Davanum Srinivas wrote:
> Tony,
> 
> 
> On Sun, Jun 18, 2017 at 11:34 PM, Tony Breeds  wrote:
> > On Sun, Jun 18, 2017 at 08:19:16PM -0400, Davanum Srinivas wrote:
> >
> >> Awesome! thanks Tony, some kolla jobs do that for example, but i think
> >> this job is a better one to key off of:
> >> http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/infra.yaml#n381
> >>
> >> Outline of the work is - check if there are any new releases in github
> >> downloads, if so download them using wget and then delegate to the scp
> >> publisher (with keep-hierarchy) to create the new directories and
> >> upload the file(s).
> >
> > So perhaps I'm dense but I can't see an easy way to get a list of
> > release artefacts from github in a form that wget can consume.  The best
> > I can see is via the API.  I've knocked up a quick'n'dirty mirror
> > script[1] but I really feel like I've gone off into the weeds.
> >
> > You basically need to do:
> >
> > git clone  && cd
> > virtualenv .venv
> > .venv/bin/pip install -U pip setuptools wheel
> > .venv/bin/pip install -r ./requirements.txt   # [2]
> > .venv/bin/python ./mirror-github-releases.py \
> > 'coreos/etcd::.*linux.*gz:etcd' \
> > 'coreos/etcd:6225411:.*linux.*gz:etcd'
> 
> Works for me!

Okay, I'll put something more complete together for infra review.
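
For anyone curious, the general approach being discussed (list a repo's release
assets via the GitHub API, then fetch the ones matching a pattern) can be
sketched roughly as below. This is only an illustration, not the actual mirror
script mentioned above, and the repo/pattern arguments are just examples:

    # Rough illustration of mirroring GitHub release assets via the API.
    import re
    import requests

    def mirror_release_assets(repo, pattern, dest_dir='.'):
        """Download release assets of `repo` whose names match `pattern`."""
        url = 'https://api.github.com/repos/%s/releases' % repo
        for release in requests.get(url).json():
            for asset in release.get('assets', []):
                if re.search(pattern, asset['name']):
                    data = requests.get(asset['browser_download_url'])
                    with open('%s/%s' % (dest_dir, asset['name']), 'wb') as f:
                        f.write(data.content)

    mirror_release_assets('coreos/etcd', r'.*linux.*gz')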


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] How about using boto3 instead of boto in requirements

2017-06-19 Thread Tony Breeds
NOTE:
- Removed Matthew and Sean from Cc as they'll get this via os-dev
- Added ec2-api tag and PTL for visibility

On Mon, Jun 19, 2017 at 08:05:55AM -0400, Davanum Srinivas wrote:
> Hi Pengju Jiao,
> 
> The main consumer, I believe, is EC2-API:
> 
> http://codesearch.openstack.org/?q=%5Eboto=nope=.*req.*%5C.txt=
> http://codesearch.openstack.org/?q=%5E(from%7Cimport).*boto=nope==
> 
> Can you please reach out to them?

I think that's the point of this email :D

I had a quick look and it doesn't seem terrible to do the switch to
boto3, but it's more than a trivial substitution.  Can the ec2-api team
look at moving to boto3?

Rally and heat are less critical as they don't seem to subscribe to
requirements management.
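
To give a rough sense of the scale of that switch, here is a hypothetical
before/after for a simple S3 upload of the kind the freezer driver needs
(illustrative only; endpoint and credential values are placeholders, and the
real ec2-api changes will obviously be wider than this):

    # boto (old style)
    import boto
    import boto.s3.connection

    conn = boto.connect_s3(
        aws_access_key_id='KEY', aws_secret_access_key='SECRET',
        host='s3.example.com',
        calling_format=boto.s3.connection.OrdinaryCallingFormat())
    key = conn.get_bucket('backups').new_key('freezer/backup.tar.gz')
    key.set_contents_from_filename('/tmp/backup.tar.gz')

    # boto3 (new style)
    import boto3

    s3 = boto3.client(
        's3', aws_access_key_id='KEY', aws_secret_access_key='SECRET',
        endpoint_url='http://s3.example.com')
    s3.upload_file('/tmp/backup.tar.gz', 'backups', 'freezer/backup.tar.gz')

The boto3 side also pulls in botocore, jmespath and s3transfer, which is the
dependency overlap noted in the quoted mail below.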
 
> Thanks,
> Dims
> 
> On Mon, Jun 19, 2017 at 3:11 AM, jiaopengju
>  wrote:
> > Hi, everyone:
> >
> > I have a blueprint of freezer to support s3 storage driver [1], so I need to
> > add boto3 to the requirements [2].
> >
> > Why use boto3 but not boto?
> >
> > Boto3 [3] is the next version of Boto; it is now stable and recommended for
> > general use. It can be used side-by-side with Boto in the same project, so
> > it is easy to start using Boto3 in your existing projects as well as new
> > projects. Going forward, API updates and all new feature work will be
> > focused on Boto3.
> >
> >
> > Problem
> >
> > Boto3 requires botocore, jmespath and s3transfer. botocore and jmespath are
> > already used by boto, this is because boto is used by ec2-api (and
> > heat-cfntools, rally and swift3).  After adding Boto3 to requirements, we
> > will have old and new libraries ATM.
> >
> >
> > My suggestion
> >
> > How about using boto3 instead of boto? This will require us to do some
> > migration from boto to boto3.
> >
> >
> > Does anyone have ideas or suggestions? Thank you very much!
> >
> >
> > [1] https://review.openstack.org/#/c/471295
> >
> > [2]https://review.openstack.org/#/c/473067
> >
> > [3]https://github.com/boto/boto3
> >
> >
> > Pengju Jiao
> > mail: jiaopen...@cmss.chinamobile.com
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> 
> 
> -- 
> Davanum Srinivas :: https://twitter.com/dims
> 
Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-19 Thread joehuang
I think it is good enough for the OpenStack community to provide a flat project
marketplace for infrastructure:

All projects are just "goods" in the marketplace; let cloud operators select
projects from the marketplace for their own infrastructure.

We don't have to mark a project as core or not core; we only need to tag the
attributes of a project, for example how mature it is, how many "likes" it has,
what cloud operators have said about it, etc.

Keep it all flat and let people make the decision by themselves. They are not
idiots; they have the wisdom to build their own infrastructure.

Not everyone needs a package: you may buy a box of ice cream and not like all of
it. If people want a package, a distribution provider can help them define and
customize one, and if you want customization, you decide which scoop you want,
don't you?

OpenStack is an "OPEN" stack.

Best Regards
Chaoyi Huang (joehuang)


From: Matt Riedemann [mriede...@gmail.com]
Sent: 19 June 2017 22:56
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all][tc] Moving away from "big tent"  
terminology

On 6/17/2017 10:55 AM, Jay Bryant wrote:
>
> I am responding under Tim's note because I think it gets at what we
> really want to communicate and takes me to what we have presented in
> OUI.  We have Core OpenStack Projects and then a whole community of
> additional projects that support cloud functionality.
>
> So, without it being named, or cutesy, though I liked "Friends of
> Openstack", can we go with "OpenStack Core Projects" and "Peripheral
> OpenStack Projects"?

Because then you have to define what "core" means, and how you get to be
"core", which is like the old system of integrated and incubated
projects. I agree that a "core" set of projects is more understandable
at first, probably most for an outsider. But it gets confusing from a
governance perspective within the community.

And if you want to run just containers with Kubernetes and you want to
use Keystone and Cinder with it, you don't need Nova, so is Nova "core"
or not?

This is probably where the constellations idea comes in [1].

At the end of the day it's all OpenStack to me if it's hosted on
OpenStack infra, but I'm not the guy making budget decisions at a
company determining what to invest in. I think Doug has tried to explain
that perspective a bit elsewhere in this thread, and it sounds like
that's the key issue, the outside perspective from people making budget
decisions.

[1] https://review.openstack.org/#/c/453262/

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][placement] Trying to understand the proposed direction

2017-06-19 Thread Edward Leafe
On Jun 19, 2017, at 5:27 PM, Jay Pipes  wrote:
> 
>> It was from the straw man example. Replacing the $FOO_UUID with UUIDs, and 
>> then stripping out all whitespace resulted in about 1500 bytes. Your 
>> example, with whitespace included, is 1600 bytes.
> 
> It was the "per compute host" that I objected to.

I guess it would have helped to see an example of the data returned for 
multiple compute nodes. The straw man example was for a single compute node 
with SR-IOV, NUMA and shared storage. There was no indication how multiple 
hosts meeting the requested resources would be returned.

-- Ed Leafe





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][placement] Trying to understand the proposed direction

2017-06-19 Thread Jay Pipes

On 06/19/2017 05:24 PM, Edward Leafe wrote:
On Jun 19, 2017, at 1:34 PM, Jay Pipes wrote:


OK, thanks for clarifying that. When we discussed returning 1.5K per 
compute host instead of a couple of hundred bytes, there was 
discussion that paging would be necessary.


Not sure where you're getting the whole 1.5K per compute host thing from.


It was from the straw man example. Replacing the $FOO_UUID with UUIDs, 
and then stripping out all whitespace resulted in about 1500 bytes. Your 
example, with whitespace included, is 1600 bytes.


It was the "per compute host" that I objected to.

OK, that’s informative, too. Is there anything decided on how much 
host info will be in the response from placement, and how much will 
be in HostState? Or how the reporting of resources by the compute 
nodes will have to change to feed this information to placement? Or 
how the two sources of information will be combined so that the 
filters and weighers can process it? Or is that still to be worked out?


I'm currently working on a patch that integrates the REST API into
the scheduler.


The merging of data will essentially start with the resource amounts 
that the host state objects contain (stuff like total_usable_ram etc) 
with the accurate data from the provider_summaries section.


So in the near-term, we will be using provider_summaries to update the 
corresponding HostState objects with those values. Is the long-term plan 
to have most of the HostState information moved to placement?


Some things will move to placement sooner rather than later:

* Quantitative things that can be consumed
* Simple traits

Later rather than sooner:

* Distances between aggregates (affinity/anti-affinity)

Never:

* Filtering hosts based on how many instances use a particular image
* Filtering hosts based on something that is hypervisor-dependent
* Sorting hosts based on the number of instances in a particular state 
(e.g. how many instances are live-migrating or shelving at any given time)
* Weighing hosts based on the current temperature of a power supply in a 
rack

* Sorting hosts based on the current weather conditions in Zimbabwe

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Do we still support core plugin not based on the ML2 framework?

2017-06-19 Thread Kevin Benton
The issue is mainly developer resources. Everyone currently working
upstream doesn't have the bandwidth to keep adding/reviewing the layers of
interfaces to make the DB optional that go untested. (None of the projects
that would use them run a CI system that reports results on Neutron
patches.)

I think we can certainly accept patches to do the things you are proposing,
but there is no guarantee that it won't regress to being DB-dependent until
there is something reporting results back telling us when it breaks.

So it's not that the community is against non-DB core plugins, it's just
that the people developing those plugins don't participate in the community
to ensure they work.

Cheers


On Mon, Jun 19, 2017 at 2:15 AM, Édouard Thuleau 
wrote:

> Oops, sent too fast, sorry. I try again.
>
> Hi,
>
> Since the Mitaka release, a default service plugins list is loaded when Neutron
> server starts [1]. That list is not editable and was extended with a few
> services [2]. But all of them rely on the Neutron DB model.
>
> If a core driver is not based on the ML2 core plugin framework or not based on
> the 'neutron.db.models_v2' class, none of those service plugins will work.
>
> So my first question is: does Neutron still support core plugins not based on
> ML2 or the 'neutron.db.models_v2' class?
>
> If yes, I would like to propose two solutions:
> - permit the core plugin to overload the service plugin class with its own
> implementation, while continuing to use the actual Neutron DB based services
> as the default;
> - modify all default service plugins to use the service plugin driver
> framework [3], and set the actual Neutron DB based implementation as the
> default driver for services. That permits core drivers not based on the
> Neutron DB to specify a driver. We can see that this solution was adopted in
> the networking-bgpvpn project, where we can find two abstract driver classes,
> one for core drivers based on the Neutron DB model [4] and one used by core
> drivers not based on the DB [5], such as the Contrail driver [6].
>
> [1] https://github.com/openstack/neutron/commit/aadf2f30f84dff3d
> 85f380a7ff4e16dbbb0c6bb0#diff-9169a6595980d19b2649d5bedfff05ce
> [2] https://github.com/openstack/neutron/blob/master/neutron/plu
> gins/common/constants.py#L43
> [3] https://github.com/openstack/neutron/blob/master/neutron/ser
> vices/service_base.py#L27
> [4] https://github.com/openstack/networking-bgpvpn/blob/master/n
> etworking_bgpvpn/neutron/services/service_drivers/driver_api.py#L226
> [5] https://github.com/openstack/networking-bgpvpn/blob/master/n
> etworking_bgpvpn/neutron/services/service_drivers/driver_api.py#L23
> [6] https://github.com/Juniper/contrail-neutron-plugin/blob/mast
> er/neutron_plugin_contrail/plugins/opencontrail/networkin
> g_bgpvpn/contrail.py#L36
>
> Regards,
> Édouard.
>
> On Mon, Jun 19, 2017 at 10:47 AM, Édouard Thuleau
>  wrote:
> > Hi,
> > Since Mitaka release [1], a default service plugins list is loaded
> > when Neutron server starts. That list is not editable and was extended
> > with few services [2]. But none of th
> >
> > [1] https://github.com/openstack/neutron/commit/aadf2f30f84dff3d
> 85f380a7ff4e16dbbb0c6bb0#diff-9169a6595980d19b2649d5bedfff05ce
> > [2] https://github.com/openstack/neutron/blob/master/neutron/plu
> gins/common/constants.py#L43
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] Stepping down from core

2017-06-19 Thread feilong

Hi there,

I've been a Glance core since 2013 and been involved in the Glance community 
even longer, so I care deeply about Glance. My situation right now is such that 
I cannot devote sufficient time to Glance, and while, as you've seen elsewhere
on the mailing list, Glance needs reviewers, I'm afraid that keeping my name on
the core list is giving people a false impression of how dire the current
Glance personnel situation is. So after discussing it with the Glance PTL, I'd like to
offer my resignation as a member of the Glance core reviewer team. Thank you 
for your understanding.

--
Cheers & Best regards,
Feilong Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] A proposal for hackathon to reduce deploy time of TripleO

2017-06-19 Thread Emilien Macchi
Reminder: the sprint will start on Wednesday of this week.

Actions required:

- /join #openstack-sprint
- tag relevant bugs in Launchpad to be found on
https://bugs.launchpad.net/tripleo/+bugs?field.tag=deployment-time
- create new bugs related to performances and tag them.

Anyone is welcome to join the sprint, feel free to ping me or Sagi if
any question or feedback,

Thanks!

On Thu, Jun 8, 2017 at 10:44 AM, Emilien Macchi  wrote:
> On Thu, Jun 8, 2017 at 4:19 PM, Sagi Shnaidman  wrote:
>> Hi, all
>>
>> Thanks for your attention and proposals for this hackathon. With the full
>> understanding that optimizing deployment is an ongoing effort and should
>> not be started and finished in these 2 days only, we still want to focus
>> on these issues during the sprint. Even if we don't immediately solve all
>> problems, more people will be exposed to this field, additional tasks/bugs
>> can be opened and scheduled, and maybe additional tests, process
>> improvements and other insights will be introduced.
>> Please remember that if we don't reduce CI job time to 1 hour by Thursday,
>> it doesn't mean we failed the mission.
>> The main goal of this sprint is to find problems and their scope of work, and
>> to find as many solutions for them as possible, using inter-team and
>> intra-team collaboration and knowledge sharing. Ideally this collaboration and
>> ongoing effort will carry on with such momentum. :)
>>
>> I suggest to do it in 21 - 22 Jun 2017 (Wednesday - Thursday). All other
>> details are provided in etherpad:
>> https://etherpad.openstack.org/p/tripleo-deploy-time-hack and in wiki as
>> well: https://wiki.openstack.org/wiki/VirtualSprints
>> We have a "deployment-time" tag for bugs:
>> https://bugs.launchpad.net/tripleo/+bugs?field.tag=deployment-time Please
>> use it for bugs that affect deployment time or CI job run time. It will be
>> easier to handle them in the sprint.
>>
>> Please provide your comments and suggestions.
>
> Thanks Sagi for bringing this up, this is really awesome.
> One thing we could do to make this sprint productive is to report /
> triage Launchpad bugs related to $topic so we have a list of things we
> can work on during these 2 days.
>
> Maybe we could go through:
> https://launchpad.net/tripleo/+milestone/pike-2
> https://launchpad.net/tripleo/+milestone/pike-3 and add
> deployment-time to all the bugs we think it's related to performances.
>
> Once we have the list, we'll work on them by priority and by area of 
> knowledge.
>
> Also, folks like face to face interactions. We'll take care of
> preparing an open Bluejeans where folks can easily join and ask
> questions. We'll probably be connected all day, so anyone can join
> anytime. No schedule constraint here.
>
> Any feedback is welcome,
>
> Thanks!
>
>> Thanks
>>
>>
>>
>> On Tue, May 23, 2017 at 1:47 PM, Sagi Shnaidman  wrote:
>>>
>>> Hi, all
>>>
>>> I'd like to propose an idea to make one or two days hackathon in TripleO
>>> project with main goal - to reduce deployment time of TripleO.
>>>
>>> - How could it be arranged?
>>>
>>> We can arrange a separate IRC channel and Bluejeans video conference
>>> session for hackathon in these days to create a "presence" feeling.
>>>
>>> - How to participate and contribute?
>>>
>>> We'll have a few responsibility fields like tripleo-quickstart,
>>> containers, storage, HA, baremetal, etc - the exact list should be ready
>>> before the hackathon so that everybody could assign to one of these "teams".
>>> It's good to have somebody in team to be stakeholder and responsible for
>>> organization and tasks.
>>>
>>> - What is the goal?
>>>
>>> The goal of this hackathon to reduce deployment time of TripleO as much as
>>> possible.
>>>
>>> For example part of CI team takes a task to reduce quickstart tasks time.
>>> It includes statistics collection, profiling and detection of places to
>>> optimize. After this tasks are created, patches are tested and submitted.
>>>
>>> The prizes will be presented to teams which saved most of time :)
>>>
>>> What do you think?
>>>
>>> Thanks
>>> --
>>> Best regards
>>> Sagi Shnaidman
>>
>>
>>
>>
>> --
>> Best regards
>> Sagi Shnaidman
>
>
>
> --
> Emilien Macchi



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][placement] Trying to understand the proposed direction

2017-06-19 Thread Edward Leafe
On Jun 19, 2017, at 1:34 PM, Jay Pipes  wrote:
> 
>> OK, thanks for clarifying that. When we discussed returning 1.5K per compute 
>> host instead of a couple of hundred bytes, there was discussion that paging 
>> would be necessary.
> 
> Not sure where you're getting the whole 1.5K per compute host thing from.

It was from the straw man example. Replacing the $FOO_UUID with UUIDs, and then 
stripping out all whitespace resulted in about 1500 bytes. Your example, with 
whitespace included, is 1600 bytes. 

> Here's a paste with the before and after of what we're talking about:
> 
> http://paste.openstack.org/show/613129/ 
> 
> 
> Note that I'm using a situation with shared storage and two compute nodes 
> providing VCPU and MEMORY. In the current situation, the shared storage 
> provider isn't returned, as you know.
> 
> The before is 231 bytes. The after (again, with three providers, not 1) is 
> 1651 bytes.

So in the basic non-shared, non-nested case, if there are, let’s say, 200 
compute nodes that can satisfy the request, will there be 1 
“allocation_requests” key returned, with 200 “allocations” sub-keys? And one 
“provider_summaries” key, with 200 sub-keys on the compute node UUID?

> gzipping the after contents results in 358 bytes.
> 
> So, honestly I'm not concerned.

Ok, just wanted to be clear.

>> OK, that’s informative, too. Is there anything decided on how much host info 
>> will be in the response from placement, and how much will be in HostState? 
>> Or how the reporting of resources by the compute nodes will have to change 
>> to feed this information to placement? Or how the two sources of information 
>> will be combined so that the filters and weighers can process it? Or is that 
>> still to be worked out?
> 
> I'm currently working on a patch that integrates the REST API into the
> scheduler.
> 
> The merging of data will essentially start with the resource amounts that the 
> host state objects contain (stuff like total_usable_ram etc) with the 
> accurate data from the provider_summaries section.


So in the near-term, we will be using provider_summaries to update the 
corresponding HostState objects with those values. Is the long-term plan to 
have most of the HostState information moved to placement?


-- Ed Leafe





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible] Next bug triages

2017-06-19 Thread Jean-Philippe Evrard
Hello everyone,

I won't be able to hold the bug triage meeting for OpenStack-Ansible this
week and next week.
I'd be super happy if someone could replace me.

On top of that, I suggest to cancel the bug triage for the 4th of July.

Thank you for your help/understanding!

Best regards,

Jean-Philippe Evrard -- @evrardjp
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-19 Thread Matt Fischer
Amrith,

Some good thoughts in your email. I've replied to a few specific pieces
below. Overall I think it's a good start to a plan.

On Sun, Jun 18, 2017 at 5:35 AM, Amrith Kumar 
wrote:

> Trove has evolved rapidly over the past several years, since integration
> in IceHouse when it only supported single instances of a few databases.
> Today it supports a dozen databases including clusters and replication.
>
> The user survey [1] indicates that while there is strong interest in the
> project, there are few large production deployments that are known of (by
> the development team).
>
> Recent changes in the OpenStack community at large (company realignments,
> acquisitions, layoffs) and the Trove community in particular, coupled with
> a mounting burden of technical debt have prompted me to make this proposal
> to re-architect Trove.
>
> This email summarizes several of the issues that face the project, both
> structurally and architecturally. This email does not claim to include a
> detailed specification for what the new Trove would look like, merely the
> recommendation that the community should come together and develop one so
> that the project can be sustainable and useful to those who wish to use it
> in the future.
>
> TL;DR
>
> Trove, with support for a dozen or so databases today, finds itself in a
> bind because there are few developers, and a code-base with a significant
> amount of technical debt.
>
> Some architectural choices which the team made over the years have
> consequences which make the project less than ideal for deployers.
>
> Given that there are no major production deployments of Trove at present,
> this provides us an opportunity to reset the project, learn from our v1 and
> come up with a strong v2.
>
> An important aspect of making this proposal work is that we seek to
> eliminate the effort (planning, and coding) involved in migrating existing
> Trove v1 deployments to the proposed Trove v2. Effectively, with work
> beginning on Trove v2 as proposed here, Trove v1 as released with Pike will
> be marked as deprecated and users will have to migrate to Trove v2 when it
> becomes available.
>
> While I would very much like to continue to support the users on Trove v1
> through this transition, the simple fact is that absent community
> participation this will be impossible. Furthermore, given that there are no
> production deployments of Trove at this time, it seems pointless to build
> that upgrade path from Trove v1 to Trove v2; it would be the proverbial
> bridge from nowhere.
>
> This (previous) statement is, I realize, contentious. There are those who
> have told me that an upgrade path must be provided, and there are those who
> have told me of unnamed deployments of Trove that would suffer. To this,
> all I can say is that if an upgrade path is of value to you, then please
> commit the development resources to participate in the community to make
> that possible. But equally, preventing a v2 of Trove or delaying it will
> only make the v1 that we have today less valuable.
>
> We have learned a lot from v1, and the hope is that we can address that in
> v2. Some of the more significant things that I have learned are:
>
> - We should adopt a versioned front-end API from the very beginning;
> making the REST API versioned is not a ‘v2 feature’
>
> - A guest agent running on a tenant instance, with connectivity to a
> shared management message bus is a security loophole; encrypting traffic,
> per-tenant-passwords, and any other scheme is merely lipstick on a security
> hole
>

This was a major concern when we deployed it and drove the architectural
decisions. I'd be glad to see it resolved or re-architected.


>
> - Reliance on Nova for compute resources is fine, but dependence on Nova
> VM specific capabilities (like instance rebuild) is not; it makes things
> like containers or bare-metal second class citizens
>
> - A fair portion of what Trove does is resource orchestration; don’t
> reinvent the wheel, there’s Heat for that. Admittedly, Heat wasn’t as far
> along when Trove got started but that’s not the case today and we have an
> opportunity to fix that now
>

+1


>
> - A similarly significant portion of what Trove does is to implement a
> state-machine that will perform specific workflows involved in implementing
> database specific operations. This makes the Trove taskmanager a stateful
> entity. Some of the operations could take a fair amount of time. This is a
> serious architectural flaw
>
> - Tenants should not ever be able to directly interact with the underlying
> storage and compute used by database instances; that should be the default
> configuration, not an untested deployment alternative
>

+1 to this also. Trove should offer a black box DB as a Service, not
something the user sees as an instance+storage that they feel that they can
manipulate.


>
> - The CI should test all databases that are considered to be ‘supported’
> without 

Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-19 Thread Fox, Kevin M
Thanks for starting this difficult discussion.

I think I agree with all the lessons learned except the nova one. While you
can treat containers and VMs the same, after years of using both, I really
don't think it's a good idea to treat them equally. Containers can't work
properly if used as a VM (really, really).

I agree wholeheartedly with your statement that it's mostly an orchestration
problem and that it should reuse existing tools now that there are options.

I would propose the following that I think meets your goals and could widen 
your contributor base substantially:

Look at the Kubernetes (k8s) concept of Operator -> 
https://coreos.com/blog/introducing-operators.html

They allow application-specific logic to be added to Kubernetes while reusing
the rest of k8s for what it is good at: container orchestration. etcd is just a
clustered database, and if the operator concept works for it, it should also
work for other databases such as Galera.

The place where I think containers and VMs are incompatible is also the thing
that I think will make Trove's life easier. You can think of a member of the
database as a few different components, such as:
 * main database process
 * metrics gatherer (such as https://github.com/prometheus/mysqld_exporter)
 * trove_guest_agent

With the current approach, all of these are mixed into the same VM image, making
it very difficult to update the trove_guest_agent without touching the main
database process (needed when you upgrade the Trove controllers). With the k8s
sidecar concept, each would be a separate container loaded into the same pod.

So rather than needing to maintain a Trove image for every possible combination
of DB version, Trove version, etc., you can reuse upstream database containers
along with Trove-provided guest agents; a hypothetical pod layout is sketched
below.
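
As a purely hypothetical sketch of that sidecar layout (image names and the
guest-agent container are placeholders, not existing Trove artifacts), a pod
built with the kubernetes Python client might look like:

    # Hypothetical pod with the three sidecar containers described above.
    from kubernetes import client, config

    def build_db_pod(name='trove-db-0'):
        containers = [
            # main database process, reused from an upstream image
            client.V1Container(name='mysql', image='mysql:5.7'),
            # metrics gatherer sidecar
            client.V1Container(name='metrics', image='prom/mysqld-exporter'),
            # guest agent sidecar, upgradable independently of the DB image
            client.V1Container(name='guest-agent',
                               image='example.org/trove-guest-agent'),
        ]
        return client.V1Pod(
            metadata=client.V1ObjectMeta(name=name, labels={'app': 'trove-db'}),
            spec=client.V1PodSpec(containers=containers))

    config.load_kube_config()
    client.CoreV1Api().create_namespaced_pod('trove', build_db_pod())

The guest-agent container could then be upgraded by rolling the pod spec,
without rebuilding the database image, which is the point being made above.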

There's a secure channel between kube-apiserver and kubelet so you can reuse it 
for secure communications. No need to add anything for secure communication. 
trove engine -> kubectl exec x-db -c guest_agent some command.

There is k8s federation, so if the operator was started at the federation 
level, it can cross multiple OpenStack regions.

Another big feature that hasn't been mentioned yet, and that I think is critical:
in our performance tests, databases in VMs have never performed particularly
well. Using k8s as a base, bare metal nodes could be pulled in easily, with
dedicated disks or SSDs that the pods land on, very close to the
database. This should give native performance.

So, my suggestion would be to strongly consider basing Trove v2 on Kubernetes. 
It can provide a huge bang for the buck, simplifying the Trove architecture 
substantially while gaining the new features you list as being important. The 
Trove v2 OpenStack api can be exposed as a very thin wrapper over k8s Third 
Party Resources (TPR) and would make Trove entirely stateless. k8s maintains 
all state for everything in etcd.

Please consider this architecture.

Thanks,
Kevin


From: Amrith Kumar [amrith.ku...@gmail.com]
Sent: Sunday, June 18, 2017 4:35 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

Trove has evolved rapidly over the past several years, since integration in 
IceHouse when it only supported single instances of a few databases. Today it 
supports a dozen databases including clusters and replication.

The user survey [1] indicates that while there is strong interest in the 
project, there are few large production deployments that are known of (by the 
development team).

Recent changes in the OpenStack community at large (company realignments, 
acquisitions, layoffs) and the Trove community in particular, coupled with a 
mounting burden of technical debt have prompted me to make this proposal to 
re-architect Trove.

This email summarizes several of the issues that face the project, both 
structurally and architecturally. This email does not claim to include a 
detailed specification for what the new Trove would look like, merely the 
recommendation that the community should come together and develop one so that 
the project can be sustainable and useful to those who wish to use it in the 
future.

TL;DR

Trove, with support for a dozen or so databases today, finds itself in a bind 
because there are few developers, and a code-base with a significant amount of 
technical debt.

Some architectural choices which the team made over the years have consequences 
which make the project less than ideal for deployers.

Given that there are no major production deployments of Trove at present, this 
provides us an opportunity to reset the project, learn from our v1 and come up 
with a strong v2.

An important aspect of making this proposal work is that we seek to eliminate 
the effort (planning, and coding) involved in migrating existing Trove v1 
deployments to the proposed Trove v2. Effectively, with work beginning on Trove 
v2 as 

Re: [openstack-dev] [nova][scheduler][placement] Trying to understand the proposed direction

2017-06-19 Thread Jay Pipes

On 06/19/2017 01:59 PM, Edward Leafe wrote:
While we discussed the fact that there may be a lot of entries, we did 
not say we'd immediately support a paging mechanism.


OK, thanks for clarifying that. When we discussed returning 1.5K per 
compute host instead of a couple of hundred bytes, there was discussion 
that paging would be necessary.


Not sure where you're getting the whole 1.5K per compute host thing from.

Here's a paste with the before and after of what we're talking about:

http://paste.openstack.org/show/613129/

Note that I'm using a situation with shared storage and two compute 
nodes providing VCPU and MEMORY. In the current situation, the shared 
storage provider isn't returned, as you know.


The before is 231 bytes. The after (again, with three providers, not 1) 
is 1651 bytes.
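
For readers without the paste handy, the general shape being discussed, with
two compute nodes and a shared storage provider, is roughly the following (an
illustrative sketch with made-up identifiers and numbers, not the actual paste
contents):

    # Illustrative only; real UUIDs and amounts omitted.
    candidates = {
        "allocation_requests": [
            {"allocations": [
                {"resource_provider": {"uuid": "<compute-node-1>"},
                 "resources": {"VCPU": 1, "MEMORY_MB": 512}},
                {"resource_provider": {"uuid": "<shared-storage>"},
                 "resources": {"DISK_GB": 10}},
            ]},
            {"allocations": [
                {"resource_provider": {"uuid": "<compute-node-2>"},
                 "resources": {"VCPU": 1, "MEMORY_MB": 512}},
                {"resource_provider": {"uuid": "<shared-storage>"},
                 "resources": {"DISK_GB": 10}},
            ]},
        ],
        "provider_summaries": {
            "<compute-node-1>": {"resources": {
                "VCPU": {"used": 2, "capacity": 64},
                "MEMORY_MB": {"used": 1024, "capacity": 131072}}},
            "<compute-node-2>": {"resources": {
                "VCPU": {"used": 4, "capacity": 64},
                "MEMORY_MB": {"used": 2048, "capacity": 131072}}},
            "<shared-storage>": {"resources": {
                "DISK_GB": {"used": 100, "capacity": 2000}}},
        },
    }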


gzipping the after contents results in 358 bytes.

So, honestly I'm not concerned.

Again, operators have insisted on keeping the flexibility currently in 
the Nova scheduler to weigh/sort compute nodes by things like thermal 
metrics and kinds of data that the Placement API will never be 
responsible for.


The scheduler will need to merge information from the 
"provider_summaries" part of the HTTP response with information it has 
already in its HostState objects (gotten from 
ComputeNodeList.get_all_by_uuid() and AggregateMetadataList).


OK, that’s informative, too. Is there anything decided on how much host 
info will be in the response from placement, and how much will be in 
HostState? Or how the reporting of resources by the compute nodes will 
have to change to feed this information to placement? Or how the two 
sources of information will be combined so that the filters and weighers 
can process it? Or is that still to be worked out?


I'm currently working on a patch that integrates the REST API into the 
scheduler.


The merging of data will essentially start by combining the resource amounts 
that the host state objects contain (stuff like total_usable_ram, etc.) with 
the accurate data from the provider_summaries section.
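
For concreteness, a rough sketch of what that merge step could look like; the
attribute and payload names below follow the discussion above but should be
treated as illustrative rather than the final code:

    # Overlay placement-reported inventory/usage from "provider_summaries"
    # onto the scheduler's in-memory host state, keeping the extra data
    # (metrics, aggregates, ...) that only the scheduler knows about.
    def merge_provider_summary(host_state, summary):
        resources = summary.get('resources', {})
        if 'MEMORY_MB' in resources:
            mem = resources['MEMORY_MB']
            host_state.total_usable_ram_mb = mem['capacity']
            host_state.free_ram_mb = mem['capacity'] - mem['used']
        if 'VCPU' in resources:
            vcpu = resources['VCPU']
            host_state.vcpus_total = vcpu['capacity']
            host_state.vcpus_used = vcpu['used']
        return host_state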


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] this week's priorities and subteam reports

2017-06-19 Thread Yeleswarapu, Ramamani
Hi,

We are glad to present this week's priorities and subteam report for Ironic. As 
usual, this is pulled directly from the Ironic whiteboard[0] and formatted.

This Week's Priorities (as of the weekly ironic meeting)

1. booting from volume:
1.1. iPXE template for iSCSI: https://review.openstack.org/#/c/413324/
1.2. Skipping deployment logic: https://review.openstack.org/#/c/454243/
2. Rolling upgrades:
2.1.  'Add new dbsync command with first online data migration': 
https://review.openstack.org/#/c/408556/
3. Physical network topology awareness:
3.1. Physical network-aware port mapping: 
https://review.openstack.org/#/c/470406
4. Driver composition reform
4.1. Classic driver deprecation spec: 
https://review.openstack.org/#/c/464046/


Bugs (dtantsur, vdrok, TheJulia)

- Stats (diff between 12 Jun 2017 and 19 Jun 2017)
- Ironic: 249 bugs (-1) + 251 wishlist items (-3). 25 new (+4), 200 in progress 
(-4), 0 critical, 30 high (-1) and 31 incomplete (-1)
- Inspector: 13 bugs (-2) + 30 wishlist items. 1 new, 14 in progress (-1), 0 
critical, 3 high and 3 incomplete
- Nova bugs with Ironic tag: 13 (+1). 1 new (-1), 0 critical, 0 high
- the tool at http://ironic-divius.rhcloud.com/ is now faster (though still not 
too fast)
- I wonder if anyone is using it, except for me :)

Essential Priorities


CI refactoring and missing test coverage

- Standalone CI tests (vsaienk0)
- next patch to be reviewed, needed for 3rd party CI: 
https://review.openstack.org/#/c/429770/
- Missing test coverage (all)
- portgroups and attach/detach tempest tests: 
https://review.openstack.org/382476
- local boot with partition images: TODO 
https://bugs.launchpad.net/ironic/+bug/1531149
- adoption: https://review.openstack.org/#/c/344975/
- should probably be changed to use standalone tests
- root device hints: TODO

Generic boot-from-volume (TheJulia, dtantsur)
-
- specs and blueprints:
- 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/volume-connection-information.html
- code: https://review.openstack.org/#/q/topic:bug/1526231
- 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/boot-from-volume-reference-drivers.html
- code: https://review.openstack.org/#/q/topic:bug/1559691
- https://blueprints.launchpad.net/nova/+spec/ironic-boot-from-volume
- code: 
https://review.openstack.org/#/q/topic:bp/ironic-boot-from-volume
- status as of most recent weekly meeting:
- hshiina is looking into Nova side changes and is attempting to obtain 
clarity on some of the issues that tenant network separation introduced into 
the deployment workflow.
- Patch/note tracking etherpad: https://etherpad.openstack.org/p/Ironic-BFV
Ironic Patches:
https://review.openstack.org/#/c/406290 - Wiring in attach/detach 
operations MERGED
https://review.openstack.org/#/c/413324 - iPXE template - Has 
review feedback - Hopefully updated revision later today. Pushing revision now
https://review.openstack.org/#/c/454243/ - Skip deployment if BFV - 
Has 1x+2
https://review.openstack.org/#/c/214586/ - Volume Connection 
Information Rest API Change  - Has 1x +2
https://review.openstack.org/#/c/463930/ - CRUD notification 
updates for volume objects.
https://review.openstack.org/#/c/463908/ - Enable cinder storage 
interface for generic hardware
Additional patches exist, for python-ironicclient and one for nova.  
Links in the patch/note tracking etherpad.

Rolling upgrades and grenade-partial (rloo, jlvillal)
-
- spec approved; code patches: 
https://review.openstack.org/#/q/topic:bug/1526283
- status as of most recent weekly meeting:
- next patch needs update to address comments: 'Add new dbsync command with 
first online data migration': https://review.openstack.org/#/c/408556/
- to address restarting services after unpinning, spec ready for reviews: 
'SIGHUP restarts services with updated configs': 
https://review.openstack.org/474309
- Testing work: done as per spec, but rloo wants to ask vasyl whether we 
can improve. grenade test will do upgrade so we have old API sending requests 
to old and/or new conductor, but rloo doesn't think there is anything to 
control -which- conductor handles the request, so what if old conductor handles 
all the requests?

Reference architecture guide (dtantsur)
---
- no updates, dtantsur plans to start working on some text for the 
install-guide soon(ish)

Python 3.5 compatibility (Nisha, Ankit)
---
- Topic: 

Re: [openstack-dev] [nova][scheduler][placement] Trying to understand the proposed direction

2017-06-19 Thread Edward Leafe
On Jun 19, 2017, at 9:17 AM, Jay Pipes  wrote:

As Matt pointed out, I mis-wrote when I said “current flow”. I meant “current 
agreed-to design flow”. So no need to rehash that.

>> * Placement returns a number of these data structures as JSON blobs. Due to 
>> the size of the data, a page size will have to be determined, and placement 
>> will have to either maintain that list of structured data for subsequent 
>> requests, or re-run the query and only calculate the data structures for the 
>> hosts that fit in the requested page.
> 
> "of these data structures as JSON blobs" is kind of redundant... all our REST 
> APIs return data structures as JSON blobs.

Well, I was trying to be specific. I didn’t mean to imply that this was a 
radical departure or anything.

> While we discussed the fact that there may be a lot of entries, we did not 
> say we'd immediately support a paging mechanism.

OK, thanks for clarifying that. When we discussed returning 1.5K per compute 
host instead of a couple of hundred bytes, there was discussion that paging 
would be necessary.

>> * Scheduler continues to request the paged results until it has them all.
> 
> See above. Was discussed briefly as a concern but not work to do for first 
> patches.
> 
>> * Scheduler then runs this data through the filters and weighers. No 
>> HostState objects are required, as the data structures will contain all the 
>> information that scheduler will need.
> 
> No, this isn't correct. The scheduler will have *some* of the information it 
> requires for weighing from the returned data from the GET 
> /allocation_candidates call, but not all of it.
> 
> Again, operators have insisted on keeping the flexibility currently in the 
> Nova scheduler to weigh/sort compute nodes by things like thermal metrics and 
> kinds of data that the Placement API will never be responsible for.
> 
> The scheduler will need to merge information from the "provider_summaries" 
> part of the HTTP response with information it has already in its HostState 
> objects (gotten from ComputeNodeList.get_all_by_uuid() and 
> AggregateMetadataList).

OK, that’s informative, too. Is there anything decided on how much host info 
will be in the response from placement, and how much will be in HostState? Or 
how the reporting of resources by the compute nodes will have to change to feed 
this information to placement? Or how the two sources of information will be 
combined so that the filters and weighers can process it? Or is that still to 
be worked out?

>> * Scheduler then selects the data structure at the top of the ranked list. 
>> Inside that structure is a dict of the allocation data that scheduler will 
>> need to claim the resources on the selected host. If the claim fails, the 
>> next data structure in the list is chosen, and repeated until a claim 
>> succeeds.
> 
> Kind of, yes. The scheduler will select a *host* that meets its needs.
> 
> There may be more than one allocation request that includes that host 
> resource provider, because of shared providers and (soon) nested providers. 
> The scheduler will choose one of these allocation requests and attempt a 
> claim of resources by simply PUT /allocations/{instance_uuid} with the 
> serialized body of that allocation request. If 202 returned, cool. If not, 
> repeat for the next allocation request.

Ah, yes, good point. A host with multiple nested providers, or with shared and 
local storage, will have to have multiple copies of the data structure returned 
to reflect those permutations. 
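
For what it's worth, the claim-retry loop described above could look roughly
like the sketch below; the URL, headers, microversion and status codes are
simplified for illustration, and real code would go through the placement
client with proper auth:

    # Try each allocation request returned by GET /allocation_candidates until
    # one claim succeeds; a failure just means we raced and lost on that one.
    import requests

    def claim_first_fit(placement_url, token, instance_uuid, allocation_requests):
        headers = {'X-Auth-Token': token,
                   'OpenStack-API-Version': 'placement 1.10'}
        for alloc_req in allocation_requests:
            resp = requests.put(
                '%s/allocations/%s' % (placement_url, instance_uuid),
                json=alloc_req, headers=headers)
            if resp.status_code in (202, 204):
                return alloc_req  # claim succeeded
        return None  # all candidates raced and lost; reschedule or fail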

>> * Scheduler then creates a list of N of these data structures, with the 
>> first being the data for the selected host, and the rest being data 
>> structures representing alternates consisting of the next hosts in the 
>> ranked list that are in the same cell as the selected host.
> 
> Yes, this is the proposed solution for allowing retries within a cell.

OK.

>> * Scheduler returns that list to conductor.
>> * Conductor determines the cell of the selected host, and sends that list to 
>> the target cell.
>> * Target cell tries to build the instance on the selected host. If it fails, 
>> it uses the allocation data in the data structure to unclaim the resources 
>> for the selected host, and tries to claim the resources for the next host in 
>> the list using its allocation data. It then tries to build the instance on 
>> the next host in the list of alternates. Only when all alternates fail does 
>> the build request fail.
> 
> I'll let Dan discuss this last part.


Well, that’s not substantially different than the original plan, so no 
additional explanation is required.

One other thing: since this new functionality is exposed via a new API call, is 
the existing method of filtering RPs by passing in resources going to be 
deprecated? And the code for adding filtering by traits to that also no longer 
useful?


-- Ed Leafe






Re: [openstack-dev] [swift] Optimizing storage for small objects in Swift

2017-06-19 Thread Clint Byrum
Excerpts from Alexandre Lécuyer's message of 2017-06-19 11:36:15 +0200:
> Hello Clint,
> 
> Thanks for your feedback, replying in the email inline.
> 
> On 06/16/2017 10:54 PM, Clint Byrum wrote:
> > Excerpts from John Dickinson's message of 2017-06-16 11:35:39 -0700:
> >> On 16 Jun 2017, at 10:51, Clint Byrum wrote:
> >>
> >>> This is great work.
> >>>
> >>> I'm sure you've already thought of this, but could you explain why
> >>> you've chosen not to put the small objects in the k/v store as part of
> >>> the value rather than in secondary large files?
> >> I don't want to co-opt an answer from Alex, but I do want to point to some 
> >> of the other background on this LOSF work.
> >>
> >> https://wiki.openstack.org/wiki/Swift/ideas/small_files
> >> https://wiki.openstack.org/wiki/Swift/ideas/small_files/experimentations
> >> https://wiki.openstack.org/wiki/Swift/ideas/small_files/implementation
> >>
> > These are great. Thanks for sharing them, I understand a lot more now.
> >
> >> Look at the second link for some context to your answer, but the summary 
> >> is "that means writing a file system, and writing a file system is really 
> >> hard".
> >>
> > I'm not sure we were thinking the same thing.
> >
> > I was more asking, why not put the content of the object into the k/v
> > instead of the big_file_id:offset? My thinking was that for smaller
> > objects, you would just return the data immediately upon reading the k/v,
> > rather than then needing to go find the big file and read the offset.
> > However, I'm painfully aware that those directly involved with the problem
> > have likely thought of this. However, the experiments don't seem to show
> > that this was attempted. Perhaps I'm zooming too far out to see the real
> > problem space. You can all tell me to take my spray paint can and stop
> > staring at the bike shed if this is just too annoying. Seriously.
> >
> > Of course, one important thing is, what does one consider "small"? Seems
> > like there's a size where the memory footprint of storing it in the
> > k/v would be justifiable if reads just returned immediately from k/v
> > vs. needing to also go get data from a big file on disk. Perhaps that
> > size is too low to really matter. I was hoping that this had been
> > considered and there was documentation, but I don't really see it.
> Right, we had considered this when we started the project: storing 
> small objects directly in the KV. It would not be too difficult to do, 
> but we see a few problems:
> 
> 1) consistency
> In the current design, we append data at the end of a "big file". When 
> the data upload is finished, swift writes the metadata and commits the 
> file. This triggers a fsync(). Only then do we return. We can rely on 
> the data being stable on disk, even if there is a power loss.  Because 
> we fallocate() space for the "big files" beforehand, we can also hope to 
> have mostly sequential disk IO.
> (Important as most swift clusters use SATA disks).
> 
> Once the object has been committed, we create an entry for it in the KV. 
> This is done asynchronously, because synchronous writes on the KV kills 
> performance. If we lose power, we lose the latest data. After the 
> server is rebooted, we have to scan the end of volumes to create missing 
> entries in the KV. (I will not discuss this in detail in this email to 
> keep this short, but we can discuss it in another thread, or I can post 
> some information on the wiki).
> 
> If we put small objects in the KV, we would need to do synchronous 
> writes to make sure we don't lose data.
> Also, currently we can completely reconstruct the KV from the "big 
> files". It would not be possible anymore.
> 
> 
> 2) performance
> On our clusters we see about 40% of physical disk IO being caused by 
> readdir().
> We want to serve directory listing requests from memory. So "small" 
> means "the KV can fit in the page cache".
> We estimate that we need the size per object to be below 50 bytes, which 
> doesn't leave much room for data.
> 
> LevelDB causes write amplification, as it will regularly copy data to 
> different files (levels) to keep keys compressed and in sorted order. If 
> we store object data within the KV, it will be copied around multiple 
> times as well.
> 
> 
> Finally, it is also simpler to have only one path to handle. Beyond 
> these issues, it would not be difficult to store data in the KV. This is 
> something we can revisit after more tests and maybe some production 
> experience.
> 

Really great explanation. Thanks for sharing. I hope we can all learn
from the thorough approach you've taken to this problem. Good luck!

> >
> > Also the "writing your own filesystem" option in experiments seemed
> > more like a thing to do if you left the k/v stores out entirely.
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [OpenStack-docs] [openstack-docs][dev][all] Documentation repo freeze

2017-06-19 Thread Petr Kovar
On Mon, 19 Jun 2017 15:56:35 +
Alexandra Settle  wrote:

> Hi everyone,
> 
> As of today - Monday, the 19th of June – please do NOT merge any patches into
> the openstack-manuals repository that are not related to the topic:
> “doc-migration”.
> 
> We are currently in the phase of setting up for our MASSIVE migration and we
> need to ensure that there will be minimal conflicts.
> 
> You can find all patches related to that topic here:
> https://review.openstack.org/#/q/status:open+project:openstack/openstack-manuals+branch:master+topic:doc-migration
> 
> The only other patches that should be passed are the Zanata translation 
> patches.
> 
> If there are any concerns or questions, please do not hesitate to contact 
> either
> myself or Doug Hellmann for further clarification.

Can we still merge into stable branches? As the migration only affects
content in master, I think there's no need to freeze stable branches. 

Thanks,
pk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical

2017-06-19 Thread Flavio Percoco

On 19/06/17 11:33 -0500, Sean McGinnis wrote:

[snip]

Who else would like to volunteer to help?

The help needed is not so much on fixing bugs but rather reviewing the patches
that fix bugs and help moving the release forward. I hope the community will
grow soonish so that we can go back to the regular core team.

Flavio

[0] https://review.openstack.org/#/c/474604/

--
@flaper87
Flavio Percoco


I've been trying to spend some time doing reviews there. I will continue to do
so as long as it is needed/useful.



Awesome, thanks a bunch! I'll propose adding you as well.

Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical

2017-06-19 Thread Sean McGinnis
[snip] 
> Who else would like to volunteer to help?
> 
> The help needed is not so much on fixing bugs but rather reviewing the patches
> that fix bugs and help moving the release forward. I hope the community will
> grow soonish so that we can go back to the regular core team.
> 
> Flavio
> 
> [0] https://review.openstack.org/#/c/474604/
> 
> --
> @flaper87
> Flavio Percoco

I've been trying to spend some time doing reviews there. I will continue to do
so as long as it is needed/useful.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Optimizing storage for small objects in Swift

2017-06-19 Thread Alexandre Lécuyer

Hello John,

Thanks for your comments! Replying inline

On 06/16/2017 07:06 PM, John Dickinson wrote:

Alex, this is fantastic work and great info. Thanks for sharing it.

Additional comments inline.

On 16 Jun 2017, at 6:54, Alexandre Lécuyer wrote:


Swift stores objects on a regular filesystem (XFS is recommended), one file per 
object. While it works fine for medium or big objects, when you have lots of 
small objects you can run into issues: because of the high count of inodes on 
the object servers, they can’t stay in cache, implying a lot of memory usage and 
IO operations to fetch inodes from disk.

In the past few months, we’ve been working on implementing a new storage backend 
in Swift. It is highly inspired by haystack[1]. In a few words, objects are stored 
in big files, and a Key/Value store provides information to locate an object 
(object hash -> big_file_id:offset). As the mapping in the K/V consumes less 
memory than an inode, it is possible to keep all entries in memory, saving a lot 
of IO to locate the object. It also allows some performance improvements by 
limiting XFS metadata updates (e.g. almost no inode updates, as we write objects 
using fdatasync() instead of fsync()).
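
(A toy illustration of that layout, not the actual Swift code; it assumes the
plyvel LevelDB binding and preallocated volume files that are already open:)

    # Append the object to a "big file", fdatasync() it, then record
    # hash -> (volume, offset, length) in LevelDB. The KV write is left
    # unsynced here; in the real design the KV is updated asynchronously
    # and can be rebuilt from the volumes.
    import os, struct
    import plyvel

    kv = plyvel.DB('/srv/node/sda/losf-index', create_if_missing=True)

    def put_object(volume_fds, volume_id, obj_hash, data):
        fd = volume_fds[volume_id]
        offset = os.lseek(fd, 0, os.SEEK_END)
        os.write(fd, data)
        os.fdatasync(fd)  # data durable without a full inode metadata flush
        kv.put(obj_hash.encode(),
               struct.pack('>QQQ', volume_id, offset, len(data)))

    def get_object(volume_fds, obj_hash):
        volume_id, offset, length = struct.unpack('>QQQ', kv.get(obj_hash.encode()))
        return os.pread(volume_fds[volume_id], length, offset)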

One of the questions that was raised during discussions about this design is: 
do we want one K/V store per device, or one K/V store per Swift partition (= 
multiple K/V per device). The concern was about failure domain. If the only K/V 
gets corrupted, the whole device must be reconstructed. Memory usage is a major 
point in making a decision, so we ran some benchmarks.

The key-value store is implemented over LevelDB.
Given a single disk with 20 million files (could be either one object replica 
or one fragment, if using EC)

I have tested three cases :
   - single KV for the whole disk
   - one KV per partition, with 100 partitions per disk
   - one KV per partition, with 1000 partitions per disk

Single KV for the disk :
   - DB size: 750 MB
   - bytes per object: 38

One KV per partition :
Assuming :
   - 100 partitions on the disk (=> 100 KV)
   - 16 bits part power (=> all keys in a given KV will have the same 16 bit 
prefix)

   - 7916 KB per KV, total DB size: 773 MB
   - bytes per object: 41

One KV per partition :
Assuming :
   - 1000 partitions on the disk (=> 1000 KV)
   - 16 bits part power (=> all keys in a given KV will have the same 16 bit 
prefix)

   - 1388 KB per KV, total DB size: 1355 MB total
   - bytes per object: 71

A typical server we use for swift clusters has 36 drives, which gives us :
- Single KV : 26 GB
- Split KV, 100 partitions : 28 GB (+7%)
- Split KV, 1000 partitions : 48 GB (+85%)

So, splitting seems reasonable if you don't have too many partitions.

Same test, with 10 million files instead of 20

- Single KV : 13 GB
- Split KV, 100 partitions : 18 GB (+38%)
- Split KV, 1000 partitions : 24 GB (+85%)


Finally, if we run a full compaction on the DB after the test, you get the
same memory usage in all cases, about 32 bytes per object.

We have not made enough tests to know what would happen in production. LevelDB
does trigger compaction automatically on parts of the DB, but continuous change
means we probably would not reach the smallest possible size.

This is likely a very good assumption (that the KV will continuously change and 
never get to minimum size).

My initial instinct is to go for one KV per drive.

One per partition does sound nice, but it is more sensitive to proper cluster 
configuration and deployment. For example, if an operator were to deploy a 
relatively small cluster but have a part power that's too big for the capacity, 
the KV strategy would end up with many thousands of mostly-empty partitions 
(imagine a 5-node cluster, 60 drives with a part power of 18 -- you're looking 
at more than 13k parts per drive per storage policy). Going for one KV per 
whole drive means that poor ring settings won't impact this area of storage as 
much.
That is also what we think. We will do more testing to confirm that one 
K/V per disk is stable with many objects under load, and that it does not 
get corrupted when power outages occur. (We will have to recover a little 
data, but not rebuild the whole K/V.)







Beyond the size issue, there are other things to consider :
File descriptor limits: LevelDB seems to keep at least 4 file descriptors 
open during operation.

Having one KV per partition also means you have to move entries between KVs 
when you change the part power. (if we want to support that)

Yes, let's support that (in general)! But doing one KV per drive means it 
already works for this LOSF work.


A compromise may be to split KVs on a small prefix of the object's hash, 
independent of swift's configuration.

This is an interesting idea to explore. It will allow for smaller individual KV 
stores without being as sensitive to the ring parameters.


As you can see we're still thinking about this. Any ideas are welcome !
We will keep you updated about more "real 

Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-19 Thread Chris Friesen

On 06/16/2017 02:57 AM, Julien Danjou wrote:

On Thu, Jun 15 2017, Doug Hellmann wrote:


One of the *most* common complaints the TC gets from outside the
contributor community is that people do not understand what projects
are part of OpenStack and what parts are not. We have a clear
definition of that in our minds (the projects that have said they
want to be part of OpenStack, and agreed to put themselves under
TC governance, with all of the policies that implies). That definition
is so trivial to say, that it seems like a tautology.  However,
looking in from the outside of the community, that definition isn't
helpful.


I still wonder why they care. Who cares, really? Can we have some people
that care on this thread so they explain directly what we're trying to
solve here?

Everything is just a bunch of free software projects to me. The
governance made zero difference in my contributions or direction of the
projects I PTL'ed.


When I was first starting out, I didn't care at all about governance.  I wanted 
to know "What do the various components *do*, and which of them do I need to 
install to get a practical and useful OpenStack installation?".


A bit later on, I started thinking about "Which of these components are mature 
enough to be usable, and likely to be around for long enough to make it 
worthwhile to use them?"


A bit further down the road the issue became "I have this specific thing I want 
to accomplish, are there any projects out there that are working on it?"


I suspect I'm not the only one that went through this process, and I don't feel 
like there's a lot of information out there aimed at answering this sort of 
question without spending a lot of time digging into individual service 
documentation.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-docs][dev][all] Documentation repo freeze

2017-06-19 Thread Alexandra Settle
Hi everyone,

As of today - Monday, the 19th of June – please do NOT merge any patches into 
the openstack-manuals repository that are not related to the topic: 
“doc-migration”.

We are currently in the phase of setting up for our MASSIVE migration and we 
need to ensure that there will be minimal conflicts.

You can find all patches related to that topic here: 
https://review.openstack.org/#/q/status:open+project:openstack/openstack-manuals+branch:master+topic:doc-migration

The only other patches that should be passed are the Zanata translation patches.

If there are any concerns or questions, please do not hesitate to contact 
either myself or Doug Hellmann for further clarification.

Thanks,

Alex
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-19 Thread Graham Hayes
On 19/06/17 16:11, Jay Pipes wrote:
> On 06/16/2017 05:18 AM, Graham Hayes wrote:
>> On 15/06/17 22:35, Ed Leafe wrote:
>>> On Jun 15, 2017, at 3:35 PM, Jeremy Stanley  wrote:
>>>
 For me it's one of the most annoying yet challenging/interesting
 aspects: free software development is as much about community and
 politics as it is actual software development (perhaps more so).
>>>
>>> Another way to look at it is how we see ourselves (as a community)
>>> and how people on the outside see OpenStack. I would imagine that
>>> someone looking at OpenStack for the first time would not care a whit
>>> about governance, repo locations, etc. They would certainly care
>>> about "what do I need to do to use this thing?"
>>>
>>> What we call things isn't confusing to those of us in the community -
>>> well, at least to those of us who take the time to read big long
>>> email threads like this. We need to be clearer in how we represent
>>> OpenStack to outsiders. To that end, I think that limiting the term
>>> "OpenStack" to a handful of the core projects would make things a
>>> whole lot clearer. We can continue to present everything else as a
>>> marketplace, or an ecosystem, or however the more marketing-minded
>>> want to label it, but we should *not* call those projects "OpenStack".
>>>
>>> Now I know, I work on Nova, so I'm expecting responses that "of
>>> course you don't care", or "OpenStack is people, and you're hurting
>>> our feelings!". So flame away!
>>
>> Where to start.
>>
>> Most of the small projects are not complaining about "hurt feelings".
>>
>> If the community want to follow advice from a certain tweet, and limit
>> OpenStack to Nova + its spinouts, we should do that. Just let the rest
>> of us know, so we can either start shutting down the projects, or look
>> at moving the projects to another foundation.
>>
>> Of course we should probably change the OpenStack mission statement,
>> and give the board a heads up that all these project teams they talk
>> about publicly will be going away.
>>
>> And, yes, coming from different project teams does mean that we will
>> have differing views on what should be in OpenStack, and its level of
>> priority - but (in my personal, biased opinion) we should not throw the
>> baby out with the bath water because we cannot find two names to
>> describe things.
> 
> How about Designate become a standalone DNSaaS project that more than
> OpenStack can use? Kubernetes could use Designate as a DNS provider,
> then, in the same way that it can currently use Cinder as a
> PersistentVolume provider.

We already have that. Designate is usable without anything else
(although I would recommend keystone to make it manageable).

Designate is at its core an API that maintains DNS servers. Nearly all
of our current advanced integrations are from other projects to
Designate, and they just use this API.

Kubernetes just doesn't have the concept of external DNS providers
built in yet (there is an incubator project, but it seems new).

> 
> Then there'd be no need to fret about a particular tweet.

Its not just one tweet. (See the parent of my email)

There are already repercussions of not being part of OpenStack for
project teams, and depending on the outcome of this discussion,
potentially more.

Also, Designate was not what I was thinking of when I wrote this;
I am working under the assumption that DNS is a core bit of
infrastructure, and hence would be in any "core". I am thinking
of the myriad of other projects that would then have to start hosting
docs, re-writing functional tests, potentially start building CI
systems.



> Best,
> -jay
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-19 Thread Chris Hoge


> On Jun 15, 2017, at 5:57 AM, Thierry Carrez  wrote:
> 
> Sean Dague wrote:
>> [...]
>> I think those are all fine. The other term that popped into my head was
>> "Friends of OpenStack" as a way to describe the openstack-hosted efforts
>> that aren't official projects. It may be too informal, but I do think
>> the OpenStack-Hosted vs. OpenStack might still mix up in people's head.
> 
> My original thinking was to call them "hosted projects" or "host
> projects", but then it felt a bit incomplete. I kinda like the "Friends
> of OpenStack" name, although it seems to imply some kind of vetting that
> we don't actually do.

Why not bring back the name Stackforge and apply that
to unofficial projects? It’s short, descriptive, and unambiguous.

-Chris

> An alternative would be to give "the OpenStack project infrastructure"
> some kind of a brand name (say, "Opium", for OpenStack project
> infrastructure ultimate madness) and then call the hosted projects
> "Opium projects". Rename the Infra team to Opium team, and voilà!
> -- 
> Thierry Carrez (ttx)
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-19 Thread Jay Pipes

On 06/16/2017 05:18 AM, Graham Hayes wrote:

On 15/06/17 22:35, Ed Leafe wrote:

On Jun 15, 2017, at 3:35 PM, Jeremy Stanley  wrote:


For me it's one of the most annoying yet challenging/interesting
aspects: free software development is as much about community and
politics as it is actual software development (perhaps more so).


Another way to look at it is how we see ourselves (as a community) and how people on the 
outside see OpenStack. I would imagine that someone looking at OpenStack for the first 
time would not care a whit about governance, repo locations, etc. They would certainly 
care about "what do I need to do to use this thing?"

What we call things isn't confusing to those of us in the community - well, at least to those of us 
who take the time to read big long email threads like this. We need to be clearer in how we 
represent OpenStack to outsiders. To that end, I think that limiting the term "OpenStack" 
to a handful of the core projects would make things a whole lot clearer. We can continue to present 
everything else as a marketplace, or an ecosystem, or however the more marketing-minded want to 
label it, but we should *not* call those projects "OpenStack".

Now I know, I work on Nova, so I'm expecting responses that "of course you don't care", 
or "OpenStack is people, and you're hurting our feelings!". So flame away!


Where to start.

Most of the small projects are not complaining about "hurt feelings".

If the community want to follow advice from a certain tweet, and limit
OpenStack to Nova + its spinouts, we should do that. Just let the rest
of us know, so we can either start shutting down the projects, or look
at moving the projects to another foundation.

Of course we should probably change the OpenStack mission statement,
and give the board a heads up that all these project teams they talk
about publicly will be going away.

And, yes, coming from different project teams does mean that we will
have differing views on what should be in OpenStack, and its level of
priority - but (in my personal, biased opinion) we should not throw the
baby out with the bath water because we cannot find two names to
describe things.


How about Designate become a standalone DNSaaS project that more than 
OpenStack can use? Kubernetes could use Designate as a DNS provider, 
then, in the same way that it can currently use Cinder as a 
PersistentVolume provider.


Then there'd be no need to fret about a particular tweet.

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-19 Thread Matt Riedemann

On 6/17/2017 10:55 AM, Jay Bryant wrote:


I am responding under Tim's note because I think it gets at what we 
really want to communicate and takes me to what we have presented in 
OUI.  We have Core OpenStack Projects and then a whole community of 
additional projects that support cloud functionality.


So, without it being named, or cutesy, though I liked "Friends of 
Openstack", can we go with "OpenStack Core Projects" and "Peripheral 
OpenStack Projects"?


Because then you have to define what "core" means, and how you get to be 
"core", which is like the old system of integrated and incubated 
projects. I agree that a "core" set of projects is more understandable 
at first, probably most of all for an outsider. But it gets confusing from a 
governance perspective within the community.


And if you want to run just containers with Kubernetes and you want to 
use Keystone and Cinder with it, you don't need Nova, so is Nova "core" 
or not?


This is probably where the constellations idea comes in [1].

At the end of the day it's all OpenStack to me if it's hosted on 
OpenStack infra, but I'm not the guy making budget decisions at a 
company determining what to invest in. I think Doug has tried to explain 
that perspective a bit elsewhere in this thread, and it sounds like 
that's the key issue, the outside perspective from people making budget 
decisions.


[1] https://review.openstack.org/#/c/453262/

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] nominating Abhishek Kekane for glance core

2017-06-19 Thread Erno Kuvaja
On Fri, Jun 16, 2017 at 3:26 PM, Brian Rosmaita
 wrote:
> I'm nominating Abhishek Kekane (abhishekk on IRC) to be a Glance core
> for the Pike cycle.  Abhishek has been around the Glance community for
> a long time and is familiar with the architecture and design patterns
> used in Glance and its related projects.  He's contributed code,
> triaged bugs, provided bugfixes, and done quality reviews for Glance.
>
> Abhishek has been proposed for Glance core before, but some members of
> the community were concerned that he wasn't able to devote sufficient
> time to Glance.  Given the current situation with the project,
> however, it would be an enormous help to have someone as knowledgeable
> about Glance as Abhishek to have +2 powers.  I discussed this with
> Abhishek, he's aware that some in the community have that concern, and
> he's agreed to be a core reviewer for the Pike cycle.  The community
> can revisit his status early in Queens.
>
> Now that I've written that down, that puts Abhishek in the same boat
> as all core reviewers, i.e., their levels of participation and
> commitment are assessed at the beginning of each cycle and adjustments
> made.
>
> In any case, I'd like to put Abhishek to work as soon as possible!  So
> please reply to this message with comments or concerns before 23:59
> UTC on Monday 19 June.  I'd like to confirm Abhishek as a core on
> Tuesday 20 June.
>
> thanks,
> brian
>

+2 from me! This sounds like a great solution for our immediate
staffing issues and I'm happy to hear Abhishek would have the cycles
to help us. Let's hope we get to enjoy his knowledge and good quality 
reviews for many cycles to come.

- Erno

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][placement] Trying to understand the proposed direction

2017-06-19 Thread Matt Riedemann

On 6/19/2017 9:17 AM, Jay Pipes wrote:

On 06/19/2017 09:04 AM, Edward Leafe wrote:

Current flow:


As noted in the nova-scheduler meeting this morning, this should have 
been called "original plan" rather than "current flow", as Jay pointed 
out inline.


* Scheduler gets a req spec from conductor, containing resource 
requirements

* Scheduler sends those requirements to placement
* Placement runs a query to determine the root RPs that can satisfy 
those requirements


Not root RPs. Non-sharing resource providers, which currently 
effectively means compute node providers. Nested resource providers 
isn't yet merged, so there is currently no concept of a hierarchy of 
providers.


* Placement returns a list of the UUIDs for those root providers to 
scheduler


It returns the provider names and UUIDs, yes.


* Scheduler uses those UUIDs to create HostState objects for each


Kind of. The scheduler calls ComputeNodeList.get_all_by_uuid(), passing 
in a list of the provider UUIDs it got back from the placement service. 
The scheduler then builds a set of HostState objects from the results of 
ComputeNodeList.get_all_by_uuid().


The scheduler also keeps a set of AggregateMetadata objects in memory, 
including the association of aggregate to host (note: this is the 
compute node's *service*, not the compute node object itself, thus the 
reason aggregates don't work properly for Ironic nodes).


* Scheduler runs those HostState objects through filters to remove 
those that don't meet requirements not selected for by placement


Yep.

* Scheduler runs the remaining HostState objects through weighers to 
order them in terms of best fit.


Yep.

* Scheduler takes the host at the top of that ranked list, and tries 
to claim the resources in placement. If that fails, there is a race, 
so that HostState is discarded, and the next is selected. This is 
repeated until the claim succeeds.


No, this is not how things work currently. The scheduler does not claim 
resources. It selects the top (or random host depending on the selection 
strategy) and sends the launch request to the target compute node. The 
target compute node then attempts to claim the resources and in doing so 
writes records to the compute_nodes table in the Nova cell database as 
well as the Placement API for the compute node resource provider.


Not to nit pick, but today the scheduler sends the selected destinations 
to the conductor. Conductor looks up the cell that a selected host is 
in, creates the instance record and friends (bdms) in that cell and then 
sends the build request to the compute host in that cell.




* Scheduler then creates a list of N UUIDs, with the first being the 
selected host, and the rest being alternates consisting of the 
next hosts in the ranked list that are in the same cell as the 
selected host.


This isn't currently how things work, no. This has been discussed, however.


* Scheduler returns that list to conductor.
* Conductor determines the cell of the selected host, and sends that 
list to the target cell.
* Target cell tries to build the instance on the selected host. If it 
fails, it unclaims the resources for the selected host, and tries to 
claim the resources for the next host in the list. It then tries to 
build the instance on the next host in the list of alternates. Only 
when all alternates fail does the build request fail.


This isn't currently how things work, no. There has been discussion of 
having the compute node retry alternatives locally, but nothing more 
than discussion.


Correct that this isn't how things currently work, but it was/is the 
original plan. And the retry happens within the cell conductor, not on 
the compute node itself. The top-level conductor is what's getting 
selected hosts from the scheduler. The cell-level conductor is what's 
getting a retry request from the compute. The cell-level conductor would 
deallocate from placement for the currently claimed providers, and then 
pick one of the alternatives passed down from the top and then make 
allocations (a claim) against those, then send to an alternative compute 
host for another build attempt.


So with this plan, there are two places to make allocations - the 
scheduler first, and then the cell conductors for retries. This 
duplication is why some people were originally pushing to move all 
allocation-related work happen in the conductor service.





Proposed flow:
* Scheduler gets a req spec from conductor, containing resource 
requirements

* Scheduler sends those requirements to placement
* Placement runs a query to determine the root RPs that can satisfy 
those requirements


Yes.

* Placement then constructs a data structure for each root provider as 
documented in the spec. [0]


Yes.

* Placement returns a number of these data structures as JSON blobs. 
Due to the size of the data, a page size will have to be determined, 
and placement will have to either maintain that list of structured 
data for 

Re: [openstack-dev] [nova][scheduler][placement] Trying to understand the proposed direction

2017-06-19 Thread Jay Pipes

On 06/19/2017 09:04 AM, Edward Leafe wrote:

Current flow:
* Scheduler gets a req spec from conductor, containing resource requirements
* Scheduler sends those requirements to placement
* Placement runs a query to determine the root RPs that can satisfy those 
requirements


Not root RPs. Non-sharing resource providers, which currently 
effectively means compute node providers. Nested resource providers 
isn't yet merged, so there is currently no concept of a hierarchy of 
providers.



* Placement returns a list of the UUIDs for those root providers to scheduler


It returns the provider names and UUIDs, yes.


* Scheduler uses those UUIDs to create HostState objects for each


Kind of. The scheduler calls ComputeNodeList.get_all_by_uuid(), passing 
in a list of the provider UUIDs it got back from the placement service. 
The scheduler then builds a set of HostState objects from the results of 
ComputeNodeList.get_all_by_uuid().


The scheduler also keeps a set of AggregateMetadata objects in memory, 
including the association of aggregate to host (note: this is the 
compute node's *service*, not the compute node object itself, thus the 
reason aggregates don't work properly for Ironic nodes).



* Scheduler runs those HostState objects through filters to remove those that 
don't meet requirements not selected for by placement


Yep.


* Scheduler runs the remaining HostState objects through weighers to order them 
in terms of best fit.


Yep.


* Scheduler takes the host at the top of that ranked list, and tries to claim 
the resources in placement. If that fails, there is a race, so that HostState 
is discarded, and the next is selected. This is repeated until the claim 
succeeds.


No, this is not how things work currently. The scheduler does not claim 
resources. It selects the top (or random host depending on the selection 
strategy) and sends the launch request to the target compute node. The 
target compute node then attempts to claim the resources and in doing so 
writes records to the compute_nodes table in the Nova cell database as 
well as the Placement API for the compute node resource provider.



* Scheduler then creates a list of N UUIDs, with the first being the selected 
host, and the rest being alternates consisting of the next hosts in the 
ranked list that are in the same cell as the selected host.


This isn't currently how things work, no. This has been discussed, however.


* Scheduler returns that list to conductor.
* Conductor determines the cell of the selected host, and sends that list to 
the target cell.
* Target cell tries to build the instance on the selected host. If it fails, it 
unclaims the resources for the selected host, and tries to claim the resources 
for the next host in the list. It then tries to build the instance on the next 
host in the list of alternates. Only when all alternates fail does the build 
request fail.


This isn't currently how things work, no. There has been discussion of 
having the compute node retry alternatives locally, but nothing more 
than discussion.



Proposed flow:
* Scheduler gets a req spec from conductor, containing resource requirements
* Scheduler sends those requirements to placement
* Placement runs a query to determine the root RPs that can satisfy those 
requirements


Yes.


* Placement then constructs a data structure for each root provider as 
documented in the spec. [0]


Yes.


* Placement returns a number of these data structures as JSON blobs. Due to the 
size of the data, a page size will have to be determined, and placement will 
have to either maintain that list of structured data for subsequent requests, or 
re-run the query and only calculate the data structures for the hosts that fit 
in the requested page.


"of these data structures as JSON blobs" is kind of redundant... all our 
REST APIs return data structures as JSON blobs.


While we discussed the fact that there may be a lot of entries, we did 
not say we'd immediately support a paging mechanism.



* Scheduler continues to request the paged results until it has them all.


See above. Was discussed briefly as a concern but not work to do for 
first patches.



* Scheduler then runs this data through the filters and weighers. No HostState 
objects are required, as the data structures will contain all the information 
that scheduler will need.


No, this isn't correct. The scheduler will have *some* of the 
information it requires for weighing from the returned data from the GET 
/allocation_candidates call, but not all of it.


Again, operators have insisted on keeping the flexibility currently in 
the Nova scheduler to weigh/sort compute nodes by things like thermal 
metrics and kinds of data that the Placement API will never be 
responsible for.


The scheduler will need to merge information from the 
"provider_summaries" part of the HTTP response with information it has 
already in its HostState objects (gotten from 
ComputeNodeList.get_all_by_uuid() 

Re: [openstack-dev] [requirements] [ec2-api] How about using boto3 instead of boto in requirements

2017-06-19 Thread jiaopengju
Hi Dims,
I got a response from a core member of ec2-api. What do you think about it?


--
Hi,


I don't treat adding a new library as a problem.


- I see that you don't remove boto - so your change doesn't affect ec2-api code.
- boto doesn't require botocore and jmespath
- right now ec2-api itself doesn't use botocore. botocore is needed only for 
functional tests of ec2-api.


P.S.
and I prefer to use botocore because I know how to use it. I had such a change: 
https://review.openstack.org/#/c/198571/




Regards,
Andrey Pavlov.
--


Original message
From: jiaopengju jiaopen...@cmss.chinamobile.com
To: OpenStack Development Mailing List (not for usage 
questions) openstack-dev@lists.openstack.org
Cc: mthode mth...@mthode.org; sean s...@dague.net; alexandrelev...@gmail.com; 
andrey...@gmail.com; fter...@hotmail.com
Sent: Monday, June 19, 2017 20:39
Subject: Re: [openstack-dev] [requirements] [ec2-api] How about using boto3 
instead of boto in requirements


Add core members of ec2-api to cc list.
Pengju Jiao mail: jiaopen...@cmss.chinamobile.com


Original message
From: jiaopengju jiaopen...@cmss.chinamobile.com
To: OpenStack Development Mailing List (not for usage 
questions) openstack-dev@lists.openstack.org
Cc: mthode mth...@mthode.org; sean s...@dague.net
Sent: Monday, June 19, 2017 20:33
Subject: Re: [openstack-dev] [requirements] [ec2-api] How about using boto3 
instead of boto in requirements


Thanks, I will try to reach out to them in the ec2-api channel.
Pengju Jiao mail: jiaopen...@cmss.chinamobile.com
Hi Pengju Jiao,

The main consumer I believe is EC2-API:
http://codesearch.openstack.org/?q=%5Eboto&i=nope&files=.*req.*%5C.txt&repos=
http://codesearch.openstack.org/?q=%5E(from%7Cimport).*boto&i=nope&files=&repos=

Can you please reach out to them?

Thanks,
Dims

On Mon, Jun 19, 2017 at 3:11 AM, jiaopengju jiaopen...@cmss.chinamobile.com wrote:

Hi, everyone:

I have a blueprint of freezer to support an s3 storage driver [1], so I need 
to add boto3 to the requirements [2].

Why use boto3 but not boto?
Boto3 [3] is the next version of Boto, is now stable and recommended for 
general use. It can be used side-by-side with Boto in the same project, so it 
is easy to start using Boto3 in your existing projects as well as new 
projects. Going forward, API updates and all new feature work will be focused 
on Boto3.

Problem
Boto3 requires botocore, jmespath and s3transfer. botocore and jmespath are 
already used by boto, this is because boto is used by ec2-api (and 
heat-cfntools, rally and swift3). After adding Boto3 to requirements, we will 
have old and new libraries ATM.

My suggestion
How about using boto3 instead of boto? This will require us to do some 
migration from boto to boto3.

Anyone have ideas or suggestions? Thank you very much!

[1] https://review.openstack.org/#/c/471295
[2] https://review.openstack.org/#/c/473067
[3] https://github.com/boto/boto3

Pengju Jiao
mail: jiaopen...@cmss.chinamobile.com

-- 
Davanum Srinivas :: https://twitter.com/dims
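
(A tiny illustration of the "side-by-side" point above; it assumes S3
credentials and endpoint configuration are already in place and is not
freezer-specific code:)

    # boto (legacy) and boto3 can coexist in one process, so callers can be
    # migrated gradually instead of in a single step.
    import boto
    import boto3

    old_conn = boto.connect_s3()              # existing boto-based code path
    new_s3 = boto3.client('s3')               # new boto3-based code path
    print([b.name for b in old_conn.get_all_buckets()])
    print([b['Name'] for b in new_s3.list_buckets().get('Buckets', [])])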
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][qa][glance] some recent tempest problems

2017-06-19 Thread Matt Riedemann

On 6/16/2017 8:58 AM, Eric Harney wrote:

I'm not convinced yet that this failure is purely Ceph-specific, at a
quick look.

I think what happens here is, unshelve performs an asynchronous delete
of a glance image, and returns as successful before the delete has
necessarily completed.  The check in tempest then sees that the image
still exists, and fails -- but this isn't valid, because the unshelve
API doesn't guarantee that this image is no longer there at the time it
returns.  This would fail on any image delete that isn't instantaneous.

Is there a guarantee anywhere that the unshelve API behaves how this
tempest test expects it to?


There are no guarantees, no. The unshelve API reference is here [1]. The 
asynchronous postconditions section just says:


"After you successfully shelve a server, its status changes to ACTIVE. 
The server appears on the compute node.


The shelved image is deleted from the list of images returned by an API 
call."


It doesn't say the image is deleted immediately, or that it waits for 
the image to be gone before changing the instance status to ACTIVE.


I see there is also a typo in there, that should say after you 
successfully *unshelve* a server.


From an API user point of view, this is all asynchronous because it's 
an RPC cast from the nova-api service to the nova-conductor and finally 
nova-compute service when unshelving the instance.


So I think the test is making some wrong assumptions on how fast the 
image is going to be deleted when the instance is active.


As Ken'ichi pointed out in the Tempest change, Glance returns a 204 when 
deleting an image in the v2 API [2]. If the image delete is asynchronous 
then that should probably be a 202.


Either way the Tempest test should probably be in a wait loop for the 
image to be gone if it's really going to assert this.
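
Something along these lines, say; this is only a sketch, not actual Tempest
code, and the client and exception are passed in precisely because the real
names live in Tempest (show_image follows the v2 images client but treat it as
illustrative):

    # Poll until the shelved image is really gone instead of asserting
    # immediately after unshelve returns.
    import time

    def wait_for_image_deleted(images_client, image_id, not_found_exc,
                               timeout=60, interval=2):
        start = time.time()
        while time.time() - start < timeout:
            try:
                images_client.show_image(image_id)
            except not_found_exc:
                return  # image is gone, assertion satisfied
            time.sleep(interval)
        raise AssertionError('image %s still present after %ss'
                             % (image_id, timeout))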


[1] 
https://developer.openstack.org/api-ref/compute/?expanded=unshelve-restore-shelved-server-unshelve-action-detail#unshelve-restore-shelved-server-unshelve-action
[2] 
https://developer.openstack.org/api-ref/image/v2/index.html?expanded=delete-an-image-detail#delete-an-image


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment][kolla][openstack-ansible][openstack-helm][tripleo] ansible role to produce oslo.config files for openstack services

2017-06-19 Thread Doug Hellmann
Excerpts from Michał Jastrzębski's message of 2017-06-16 15:50:54 -0700:
> So I'm trying to figure out how to actually use it.
> 
> We (and any other container based deploy..) will run into some
> chicken/egg problem - you need to deploy container to generate big
> yaml with defaults, then you need to overload it with your

The config schema file (the "big YAML with defaults") should be part of
the packaged software, so the deployment tool shouldn't need to generate
it unless you're handling drivers that are not included in tree.

> configurations, validate if they're not deprecated, run container with

It doesn't do it today, but the thing that converts the input data to
the INI file could automatically translate old option names to their new
names.
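
For illustration only (this is not the actual module), that translation plus
the INI rendering could be as small as the sketch below, where the deprecation
map is assumed to come from the machine-readable schema:

    # Map deprecated option names to their current ones, then emit an INI
    # file oslo.config can read. Structures are simplified for the example.
    import configparser

    def render_ini(deprecations, config_data, path):
        # deprecations: {section: {old_name: new_name}}
        # config_data:  {section: {option: value}}, as in the playbook example
        parser = configparser.ConfigParser()
        for section, options in config_data.items():
            parser[section] = {
                deprecations.get(section, {}).get(name, name): str(value)
                for name, value in options.items()
            }
        with open(path, 'w') as f:
            parser.write(f)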

> this ansible role (or module...really doesn't matter), spit out final

Why does the config file need to be generated inside a container?

> config, lay it down, deploy container again. And that will have to be 
> done for every host class (as configs might differ host to host). Imho
> a bit too much for this to be appealing (but I might be wrong). I'd
> much rather have:
> 1. Yaml as input to oslo.config instead of broken ini

I'm not opposed to switching to YAML, but it's a bit more involved than
just adding support in the parser. All of the work that has been done on
generating sample default files and documentation needs to be updated to
support YAML. We need a migration path to move everyone from INI to
YAML. And we need to update devstack and all of its plugins to edit the
new file format. There are probably more tasks involved in the
migration. I'm dealing with a couple of other projects right now, and
don't have time to plan all of that out myself. If someone else wants to
pick it up, I can help with reviews on the spec and code changes.
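
To give an idea of scope, the purely mechanical part of such a migration could
be as small as this (a sketch, assuming PyYAML; none of this is planned or
implemented) -- the real work is in the sample/doc generation and devstack
changes listed above:

import configparser
import sys

import yaml  # PyYAML


def ini_to_yaml(path):
    """Dump an existing oslo.config INI file as a YAML document."""
    parser = configparser.ConfigParser(interpolation=None)
    parser.read(path)
    data = {section: dict(parser.items(section))
            for section in parser.sections()}
    # configparser keeps [DEFAULT] separate from sections(); include it too.
    if parser.defaults():
        data['DEFAULT'] = dict(parser.defaults())
    return yaml.safe_dump(data, default_flow_style=False)


if __name__ == '__main__':
    print(ini_to_yaml(sys.argv[1]))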

> 2. Validator to throw an error if one of our regular,
> template-rendered, configs is deprecated
> 
> We can run this validator in gate to have quick feedback when
> something gets deprecated.
> 
> Thoughts?
> Michal
> 
> On 16 June 2017 at 13:24, Emilien Macchi  wrote:
> > On Fri, Jun 16, 2017 at 11:09 AM, Jiří Stránský  wrote:
> >> On 15.6.2017 19:06, Emilien Macchi wrote:
> >>>
> >>> I missed [tripleo] tag.
> >>>
> >>> On Thu, Jun 15, 2017 at 12:09 PM, Emilien Macchi 
> >>> wrote:
> 
>  If you haven't followed the "Configuration management with etcd /
>  confd" thread [1], Doug found out that using confd to generate
>  configuration files wouldn't work for the Cinder case where we don't
>  know in advance of the deployment what settings to tell confd to look
>  at.
>  We are still looking for a generic way to generate *.conf files for
>  OpenStack, that would be usable by Deployment tools and operators.
>  Right now, Doug and I are investigating some tooling that would be
>  useful to achieve this goal.
> 
>  Doug has prototyped an Ansible role that would generate configuration
>  files by consuming 2 things:
> 
>  * Configuration schema, generated by Ben's work with Machine Readable
>  Sample Config.
> $ oslo-config-generator --namespace cinder --format yaml >
>  cinder-schema.yaml
> 
>  It also needs: https://review.openstack.org/#/c/474306/ to generate
>  some extra data not included in the original version.
> 
>  * Parameters values provided in config_data directly in the playbook:
>  config_data:
>    DEFAULT:
>  transport_url: rabbit://user:password@hostname
>  verbose: true
> 
>  There are 2 options disabled by default but which would be useful for
>  production environments:
>  * Set to true to always show all configuration values:
>  config_show_defaults
>  * Set to true to show the help text: config_show_help: true
> 
>  The Ansible module is available on github:
>  https://github.com/dhellmann/oslo-config-ansible
> 
>  To try this out, just run:
> $ ansible-playbook ./playbook.yml
> 
>  You can quickly see the output of cinder.conf:
>   https://clbin.com/HmS58
> 
> 
>  What are the next steps:
> 
>  * Getting feedback from Deployment Tools and operators on the concept
>  of this module.
> Maybe this module could replace what is done by Kolla with
>  merge_configs and OpenStack Ansible with config_template.
>  * On the TripleO side, we would like to see if this module could
>  replace the Puppet OpenStack modules that are now mostly used for
>  generating configuration files for containers.
> A transition path would be having Heat to generate Ansible vars
>  files and give it to this module. We could integrate the playbook into
>  a new task in the composable services, something like
> "os_gen_config_tasks", a bit like we already have for upgrade tasks,
>  also driven 

[openstack-dev] [nova][scheduler][placement] Trying to understand the proposed direction

2017-06-19 Thread Edward Leafe
There is a lot going on lately in placement-land, and some of the changes being 
proposed are complex enough that it is difficult to understand what the final 
result is supposed to look like. I have documented my understanding of the 
current way that the placement/scheduler interaction works, and also what I 
understand of how it will work when the proposed changes are all implemented. I 
don’t know how close that understanding is to what the design is, so I’m hoping 
that this will serve as a starting point for clarifying things, so that 
everyone involved in these efforts has a clear view of the target we are aiming 
for. So please reply to this thread with any corrections or additions, so that 
all can see.

I do realize that some of this is to be done in Pike, and the rest in Queens, 
but that timetable is not relevant to the overall understanding of the design.

-- Ed Leafe

Current flow:
* Scheduler gets a req spec from conductor, containing resource requirements
* Scheduler sends those requirements to placement
* Placement runs a query to determine the root RPs that can satisfy those 
requirements
* Placement returns a list of the UUIDs for those root providers to scheduler
* Scheduler uses those UUIDs to create HostState objects for each
* Scheduler runs those HostState objects through filters to remove those that 
don't meet requirements that placement did not already select for
* Scheduler runs the remaining HostState objects through weighers to order them 
in terms of best fit.
* Scheduler takes the host at the top of that ranked list, and tries to claim 
the resources in placement. If that fails, there is a race, so that HostState 
is discarded, and the next is selected. This is repeated until the claim 
succeeds.
* Scheduler then creates a list of N UUIDs, with the first being the selected 
host, and the rest being alternates consisting of the next hosts in the 
ranked list that are in the same cell as the selected host.
* Scheduler returns that list to conductor.
* Conductor determines the cell of the selected host, and sends that list to 
the target cell.
* Target cell tries to build the instance on the selected host. If it fails, it 
unclaims the resources for the selected host, and tries to claim the resources 
for the next host in the list. It then tries to build the instance on the next 
host in the list of alternates. Only when all alternates fail does the build 
request fail.

Proposed flow:
* Scheduler gets a req spec from conductor, containing resource requirements
* Scheduler sends those requirements to placement
* Placement runs a query to determine the root RPs that can satisfy those 
requirements
* Placement then constructs a data structure for each root provider as 
documented in the spec. [0]
* Placement returns a number of these data structures as JSON blobs. Due to the 
size of the data, a page size will have to be determined, and placement will 
have to either maintain that list of structured data for subsequent requests, or 
re-run the query and only calculate the data structures for the hosts that fit 
in the requested page.
* Scheduler continues to request the paged results until it has them all.
* Scheduler then runs this data through the filters and weighers. No HostState 
objects are required, as the data structures will contain all the information 
that scheduler will need.
* Scheduler then selects the data structure at the top of the ranked list. 
Inside that structure is a dict of the allocation data that scheduler will need 
to claim the resources on the selected host. If the claim fails, the next data 
structure in the list is chosen, and repeated until a claim succeeds.
* Scheduler then creates a list of N of these data structures, with the first 
being the data for the selected host, and the rest being data structures 
representing alternates consisting of the next hosts in the ranked list that 
are in the same cell as the selected host.
* Scheduler returns that list to conductor.
* Conductor determines the cell of the selected host, and sends that list to 
the target cell.
* Target cell tries to build the instance on the selected host. If it fails, it 
uses the allocation data in the data structure to unclaim the resources for the 
selected host, and tries to claim the resources for the next host in the list 
using its allocation data. It then tries to build the instance on the next host 
in the list of alternates. Only when all alternates fail does the build request 
fail.


[0] https://review.openstack.org/#/c/471927/
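
For my own clarity, here is a rough sketch of the select-and-claim step in the
proposed flow (my illustration only, not code from the spec; the exact shape of
the returned structures is whatever [0] ends up defining):

class NoValidHost(Exception):
    pass


def select_and_claim(candidates, filters, weighers, claim, max_alternates=3):
    """Filter and weigh placement candidates, then claim the best one.

    candidates: list of dicts, assumed to look roughly like
                {'provider_uuid': ..., 'cell_uuid': ...,
                 'resources': {...}, 'allocation_request': {...}}
    filters:    callables taking a candidate and returning True/False
    weighers:   callables taking a candidate and returning a float
    claim:      callable taking an allocation_request and returning True if
                the claim succeeded (False means we raced and lost)
    """
    for check in filters:
        candidates = [c for c in candidates if check(c)]
    ranked = sorted(candidates,
                    key=lambda c: sum(w(c) for w in weighers),
                    reverse=True)
    for i, best in enumerate(ranked):
        if claim(best['allocation_request']):
            alternates = [c for c in ranked[i + 1:]
                          if c['cell_uuid'] == best['cell_uuid']]
            return [best] + alternates[:max_alternates]
    raise NoValidHost()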





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][glance][nova][neutron][horizon][cinder][osc][swift][manila][telemetry][heat][ptls][all][tc][docs] Documentation migration spec

2017-06-19 Thread Alexandra Settle
Hi everyone,

Doug has been working hard in my absence to answer everyone’s questions and 
concerns on the migration spec. Thanks to all that have taken the time to 
review.

Due to our limited time frame, we are now looking for the PTLs to provide their 
respective +1’s (or -1’s) on the patch.

Cheers,

Alex

On 6/15/17, 11:26 PM, "Doug Hellmann"  wrote:

Excerpts from Doug Hellmann's message of 2017-06-12 11:43:25 -0400:
> I added subject tags for the projects most affected by this change. It
> would be good to have the PTLs or liaisons from those teams review the
> spec so there are no surprises when we start moving files around.

I have set up patches for oslo.config, glance, glance client,
python-openstackclient, and horizon for folks to use as examples
[1]. I've also updated the tracking etherpad [2] with some more
detailed directions.

I hope the examples will answer any remaining questions about the
plan and PTLs will sign-off on the spec so we can move forward in
earnest next week.

Doug

[1] https://review.openstack.org/#/q/topic:doc-migration
[2] https://etherpad.openstack.org/p/doc-migration-tracking

> 
> Excerpts from Alexandra Settle's message of 2017-06-08 15:17:34 +:
> > Hi everyone,
> > 
> > Doug and I have written up a spec following on from the conversation 
[0] that we had regarding the documentation publishing future.
> > 
> > Please take the time out of your day to review the spec as this affects 
*everyone*.
> > 
> > See: https://review.openstack.org/#/c/472275/
> > 
> > I will be PTO from the 9th – 19th of June. If you have any pressing 
concerns, please email me and I will get back to you as soon as I can, or, 
email Doug Hellmann and hopefully he will be able to assist you.
> > 
> > Thanks,
> > 
> > Alex
> > 
> > [0] 
http://lists.openstack.org/pipermail/openstack-dev/2017-May/117162.html
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical

2017-06-19 Thread Flavio Percoco

On 13/06/17 09:50 -0500, Flavio Percoco wrote:

On 13/06/17 10:49 +0200, Thierry Carrez wrote:

Quick attempt at a summary of the discussion so far, with my questions:

* Short-term, Glance needs help to stay afloat
- Sean volunteered to help
- but glance needs to add core reviewers to get stuff flowing
-> could the VM/BM workgroup also help ? Any progress there ?


+1

Given the current situation, I think we'll take any help we can get. I'd be happy to
add Sean and a couple of other volunteers to the core team until the end of the
cycle. When Pike is out, we can do a status check and see how to proceed.



I've proposed a patch to add Glance to the list of top-5 help wanted[0]. Please,
review and let me know what y'all think.

In addition to this, I'd like for the Glance team to seriously consider the
possibility of having a provisional, extra, core team to go through the Pike
cycle. I'm ok with adding people to the general core team and describe in an
email thread who these folks are, for how long we think we'll need this, etc.

Who else would like to volunteer to help?

The help needed is not so much on fixing bugs but rather reviewing the patches
that fix bugs and help moving the release forward. I hope the community will
grow soonish so that we can go back to the regular core team.

Flavio

[0] https://review.openstack.org/#/c/474604/

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [ec2-api] How about using boto3 instead of boto in requirements

2017-06-19 Thread jiaopengju
Add core members of ec2-api to cc list.
Pengju Jiao mail: jiaopen...@cmss.chinamobile.com


Original message
From: jiaopengju jiaopen...@cmss.chinamobile.com
To: OpenStack Development Mailing List (not for usage 
questions) openstack-dev@lists.openstack.org
Cc: mthodemth...@mthode.org; seans...@dague.net
Sent: Monday, June 19, 2017, 20:33
Subject: Re: [openstack-dev] [requirements] [ec2-api] How about using boto3 instead 
of boto in requirements


Thanks, I will try to reach out to them in the ec2-api channel.
Pengju Jiao mail: jiaopen...@cmss.chinamobile.com

> Hi Pengju Jiao,
>
> The main consumer, I believe, is EC2-API:
>
> http://codesearch.openstack.org/?q=%5Eboto&i=nope&files=.*req.*%5C.txt&repos=
> http://codesearch.openstack.org/?q=%5E(from%7Cimport).*boto&i=nope&files=&repos=
>
> Can you please reach out to them?
>
> Thanks,
> Dims
>
> On Mon, Jun 19, 2017 at 3:11 AM, jiaopengju jiaopen...@cmss.chinamobile.com wrote:
>> Hi, everyone:
>>
>> I have a blueprint of freezer to support s3 storage driver [1], so I need to
>> add boto3 to the requirements [2].
>>
>> Why use boto3 but not boto?
>>
>> Boto3 [3] is the next version of Boto, is now stable and recommended for
>> general use. It can be used side-by-side with Boto in the same project, so
>> it is easy to start using Boto3 in your existing projects as well as new
>> projects. Going forward, API updates and all new feature work will be
>> focused on Boto3.
>>
>> Problem
>>
>> Boto3 requires botocore, jmespath and s3transfer. botocore and jmespath are
>> already used by boto, this is because boto is used by ec2-api (and
>> heat-cfntools, rally and swift3). After adding Boto3 to requirements, we
>> will have old and new libraries ATM.
>>
>> My suggestion
>>
>> How about using boto3 instead of boto? This will require us to do some
>> migration from boto to boto3.
>>
>> Anyone has ideas or suggestions? Thank you very much!
>>
>> [1] https://review.openstack.org/#/c/471295
>> [2] https://review.openstack.org/#/c/473067
>> [3] https://github.com/boto/boto3
>>
>> Pengju Jiao
>> mail: jiaopen...@cmss.chinamobile.com
>
> --
> Davanum Srinivas :: https://twitter.com/dims
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-19 Thread gordon chung


On 19/06/17 07:32 AM, Flavio Percoco wrote:
>> as an aside, in telemetry project, we did something somewhat similar
>> when we renamed/rebranded to telemetry from ceilometer. we wrote several
>> notes to the ML, had a few blog posts, fixed the docs, mentioned the new
>> project structure in our presentations... 2 years on, we still
>> occasionally get asked "what's ceilometer", "is xyz not ceilometer?", or
>> "so ceilometer is deprecated?". to a certain extent i think we'll have
>> to be prepared to do some hand holding and say "hey, that's not what the
>> "big tent/."
>
> Is it clear to these people, once you explain the difference, what
> telemetry is?
>
> I would assume it is and this is one of the problems we're trying to
> solve. Even
> after explaining the difference, it's sometimes hard for people to grasp
> the
> concept because the naming that was used is poor and, to be honest, it
> feels
> like it came out from an analogy without properly considering the impact it
> would have in the community.
>
> Over-communicating won't get rid of surprises but sometimes the problem
> is in
> the message and not the receivers of it. We must stay honest with
> ourselves.

i think once we send them to the doc page and correct them, they get it.

i imagine a/the problem is because people are going to read a lot of 
historical unofficial/official stuff while they google. so while we 
fixed all the docs we could, there are still many more other sources 
(blogs, forks, etc...) that still reference the project from prior 
years. i would think this is an issue you'll get as well. people will 
stumble across the many 'big-tent' articles from 2 years ago and that 
becomes their knowledge until they're corrected. i'm not a branding 
specialist so i'm not sure how to correct this but it does seem like 
just renaming will not necessarily fix the issue (based on our experience).

-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] etcd v3.2.0?

2017-06-19 Thread Davanum Srinivas
Tony,


On Sun, Jun 18, 2017 at 11:34 PM, Tony Breeds  wrote:
> On Sun, Jun 18, 2017 at 08:19:16PM -0400, Davanum Srinivas wrote:
>
>> Awesome! thanks Tony, some kolla jobs do that for example, but i think
>> this job is a better one to key off of:
>> http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/infra.yaml#n381
>>
>> Outline of the work is - check if there are any new releases in github
>> downloads, if so download them using wget and then delegate to the scp
>> publisher (with keep-hierarchy) to create the new directories and
>> upload the file(s).
>
> So perhaps I'm dense but I can't see an easy way to get a list of
> release artefacts from github in a form that wget can consume.  The best
> I can see is via the API.  I've knocked up a quick'n'dirty mirror
> script[1] but I really feel like I've gone off into the weeds.
>
> You basically need to do:
>
> git clone  && cd
> virtualenv .venv
> .venv/bin/pip install -U pip setuptools wheel
> .venv/bin/pip install -r ./requirements.txt   # [2]
> .venv/bin/python ./mirror-github-releases.py \
> 'coreos/etcd::.*linux.*gz:etcd' \
> 'coreos/etcd:6225411:.*linux.*gz:etcd'

Works for me!

> This will in theory mirror the 3.2.0 (latest) release and look at the
> 3.1.7 release, see that it's already publicly mirrored and move on.
>
> It wouldn't be too hard to incorporate into a job.  Thoughts?
>
> Yours Tony.
>
> [1]  https://github.com/tbreeds/mirror-github-releases
> [2] Yes of course I could publish it on pypi if we want to go down this
> path
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] How about using boto3 instead of boto in requirements

2017-06-19 Thread Davanum Srinivas
Hi Pengju Jiao,

The main consumer, I believe, is EC2-API:

http://codesearch.openstack.org/?q=%5Eboto&i=nope&files=.*req.*%5C.txt&repos=
http://codesearch.openstack.org/?q=%5E(from%7Cimport).*boto&i=nope&files=&repos=

Can you please reach out to them?

Thanks,
Dims

On Mon, Jun 19, 2017 at 3:11 AM, jiaopengju
 wrote:
> Hi, everyone:
>
> I have a blueprint of freezer to support s3 storage driver [1], so I need to
> add boto3 to the requirements [2].
>
> Why use boto3 but not boto?
>
> Boto3 [3] is the next version of Boto, is now stable and recommended for
> general use. It can be used side-by-side with Boto in the same project, so
> it is easy to start using Boto3 in your existing projects as well as new
> projects. Going forward, API updates and all new feature work will be
> focused on Boto3.
>
>
> Problem
>
> Boto3 requires botocore, jmespath and s3transfer. botocore and jmespath are
> already used by boto, this is because boto is used by ec2-api (and
> heat-cfntools, rally and swift3).  After adding Boto3 to requirements, we
> will have old and new libraries ATM.
>
>
> My suggestion
>
> How about using boto3 instead of boto? This will require us to do some
> migration from boto to boto3.
>
>
> Anyone has ideas or suggestions? Thank you very much!
>
>
> [1] https://review.openstack.org/#/c/471295
>
> [2]https://review.openstack.org/#/c/473067
>
> [3]https://github.com/boto/boto3
>
>
> Pengju Jiao
> mail: jiaopen...@cmss.chinamobile.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-19 Thread Flavio Percoco

On 16/06/17 04:32 +, gordon chung wrote:



On 15/06/17 06:28 PM, Doug Hellmann wrote:

i see, so this is less an existential question of 'what is openstack'
> and more 'how to differentiate governance projects from a random repo
> created last weekend'
>
> this might have been just me, but big tent was exactly 'big tent ==
> governance' so when i read 'moving away from "big tent"' i think 'what
> is this *new* thing we're moving to and if we're redefining this new
> thing, what for?'. it seems this is not the case.

No. We're trying to pick new words, because there continues to be
confusion about the old words.


my bad, apologies for taking the scenic route. regardless of new words,
we failed to properly describe what the big tent was the first go to
some people, how do we make sure they're not confused this time? and how
do we not confuse the ones that did understand the first time?

for me personally, the first go, the messaging was kind of muddled. i
remember 'level playing field' being used frequently. not sure if that's
still one of the reasons for ?


>
> sorry, i probably wasn't clear, i simply noticed that it was a corporate
> sponsor that was misusing the 'big tent' name so was just thinking we
> could easily tell them, that's not what it means. wasn't suggesting
> anything else by sponsor comment.

You'd think it would be that easy. A surprising number of folks
within the community don't really understand the old naming either,
though (see the rest of this thread for examples).


*sigh* so this is why we can't have nice things :p

as an aside, in telemetry project, we did something somewhat similar
when we renamed/rebranded to telemetry from ceilometer. we wrote several
notes to the ML, had a few blog posts, fixed the docs, mentioned the new
project structure in our presentations... 2 years on, we still
occasionally get asked "what's ceilometer", "is xyz not ceilometer?", or
"so ceilometer is deprecated?". to a certain extent i think we'll have
to be prepared to do some hand holding and say "hey, that's not what the
"big tent/."


Is it clear to these people, once you explain the difference, what telemetry is?

I would assume it is and this is one of the problems we're trying to solve. Even
after explaining the difference, it's sometimes hard for people to grasp the
concept because the naming that was used is poor and, to be honest, it feels
like it came out from an analogy without properly considering the impact it
would have in the community.

Over-communicating won't get rid of surprises but sometimes the problem is in
the message and not the receivers of it. We must stay honest with ourselves.

Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-19 Thread Andrea Frittoli
On Thu, Jun 15, 2017 at 4:45 PM Jeremy Stanley  wrote:

> On 2017-06-15 11:15:36 +0200 (+0200), Thierry Carrez wrote:
> [...]
> > I'd like to propose that we introduce a new concept: "OpenStack-Hosted
> > projects". There would be "OpenStack projects" on one side, and
> > "Projects hosted on OpenStack infrastructure" on the other side (all
> > still under the openstack/ git repo prefix).
>
> I'm still unconvinced a term is needed for this. Can't we just have
> "OpenStack Projects" (those under TC governance) and "everything
> else?" Why must the existence of any term require a term for its
> opposite?
>

+1!

We don't need to try and bring everything which is not an OpenStack
project under a single name which will also then require a definition which
may not fit all.

Andrea Frittoli (andreaf)


>
> > We'll stop saying "official OpenStack project" and "unofficial
> > OpenStack project". The only "OpenStack projects" will be the
> > official ones. We'll chase down the last mentions of "big tent" in
> > documentation and remove it from our vocabulary.
> [...]
>
> I agree on getting rid of the "big tent" phrase anywhere we find it,
> though I thought we'd already avoided using that in favor of more
> descriptive terminology anyway. Also I'm very excited to see a focus
> on "OpenStack projects" I just don't see a benefit to making up a
> name for "not an OpenStack project."



> --
> Jeremy Stanley
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [l2gw] DSVM gates for networking-l2gw

2017-06-19 Thread Ricardo Noriega De Soto
Hello,

The dummy driver allows you to make writes/reads from the database and
that's something we should be continuously testing. I don't think you can
achieve that by mocking the API calls.

Cheers

On Thu, Jun 15, 2017 at 11:50 AM, Daniel Mellado  wrote:

> Hi Ricardo,
>
> That sounds like a totally valid approach to me, but I was wondering if
> there'd be a way to mock that API call. If just using the dummy driver
> would achieve that then I'd be more than happy to see that modification.
>
> Cheers!
>
> Daniel
>
> El 15/06/17 a las 06:38, Ricardo Noriega De Soto escribió:
> > Hello L2GWers
> >
> > Currently networking-l2gw CI only covers unit tests. However, there is
> > an experimental check that starts a devstack VM to be able to run more
> > complex tests. That experimental check is not working, and we are trying
> > to fix it, however we encountered some difficulties that we wanted to
> > share with you.
> >
> > https://review.openstack.org/#/c/471692/
> >
> > The configuration of the experimental check uses the L2GW agent which is
> > very good, however, the API tests try to create a l2gw connection and
> > fail since there is not an ovsdb instance with the vtep schema to
> execute.
> >
> > If we use the dummy driver, these three failing testcases will be
> > skipped and we have a way to test the API (without backend).
> >
> > So for now, our proposal is to modify this experimental check using the
> > dummy driver, and convert it to a possible non-voting -> voting gate
> > executing pure API tests.
> >
> > Furthermore, we will start working on a new gate with the l2gw agent and
> > create a new OVS entity in a namespace or something similar to be able
> > to test api and agent together.
> >
> > Any comment is more than welcome!
> >
> > Thanks guys
> >
> > --
> > Ricardo Noriega
> >
> > Senior Software Engineer - NFV Partner Engineer | Office of Technology
> >  | Red Hat
> > irc: rnoriega @freenode
> >
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Ricardo Noriega

Senior Software Engineer - NFV Partner Engineer | Office of Technology  |
Red Hat
irc: rnoriega @freenode
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [l2gw] How to handle correctly unknown-dst traffic

2017-06-19 Thread Saverio Proto
Hello,

I try again. Any l2gw plugin user that wants to comment on my email ?

thank you

Saverio


On 29/05/17 16:54, Saverio Proto wrote:
> Hello,
> 
> I have a question about the l2gw. I did a deployment, I described the
> steps here:
> https://review.openstack.org/#/c/453209/
> 
> The unicast traffic works fine, but I dont understand what is the idea
> behind the handling of the broadcast traffic.
> 
> Looking at openvswitch:
> 
> I obtain the uuid with `vtep-ctl list-ls`
> 
> vtep-ctl list-remote-macs 
> 
> In this output I get an entry for each VM that has an interface in the
> L2 network I am bridging:
> 
> 
> # vtep-ctl list-remote-macs 
> ucast-mac-remote
>   fa:16:3e:c2:7b:da -> vxlan_over_ipv4/10.1.1.167
> 
> mcast-mac-remote
> -
> 
> The ucast-mac-remote entry is created by Openstack when I start a VM.
> (Also it is never removed when I delete the instance, is this a bug ? )
> Note that 10.1.1.167 is the IP address of the hypervisor where the VM is
> running.
> 
> But mcast-mac-remote is empty. So this means that ARP learning for
> example works only in 1 way. The VM in openstack does not receive any
> broadcast traffic, unless I do manually:
> 
> vtep-ctl add-mcast-remote ee87db33-1b3a-42e9-bc09-02747f8a0ad5
> unknown-dst  10.1.1.167
> 
> This creates an entry in the table mcast-mac-remote and everything works
> correctly.
> 
> 
> Now I read here http://networkop.co.uk/blog/2016/05/21/neutron-l2gw/
> about sending add-mcast-remote to the network nodes and then doing some
> magic I dont really understand. But I am confused because in my setup
> the tenant does not have a L3 router, so there is not a qrouter
> namespace for this network, I was planning to keep the network node out
> of the game.
> 
> Is anyone running this in production and can shed some light ?
> 
> thanks
> 
> Saverio
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 


-- 
SWITCH
Saverio Proto, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 1573
saverio.pr...@switch.ch, http://www.switch.ch

http://www.switch.ch/stories

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Optimizing storage for small objects in Swift

2017-06-19 Thread Alexandre Lécuyer

Hello Clint,

Thanks for your feedback, replying in the email inline.

On 06/16/2017 10:54 PM, Clint Byrum wrote:

Excerpts from John Dickinson's message of 2017-06-16 11:35:39 -0700:

On 16 Jun 2017, at 10:51, Clint Byrum wrote:


This is great work.

I'm sure you've already thought of this, but could you explain why
you've chosen not to put the small objects in the k/v store as part of
the value rather than in secondary large files?

I don't want to co-opt an answer from Alex, but I do want to point to some of 
the other background on this LOSF work.

https://wiki.openstack.org/wiki/Swift/ideas/small_files
https://wiki.openstack.org/wiki/Swift/ideas/small_files/experimentations
https://wiki.openstack.org/wiki/Swift/ideas/small_files/implementation


These are great. Thanks for sharing them, I understand a lot more now.


Look at the second link for some context to your answer, but the summary is "that 
means writing a file system, and writing a file system is really hard".


I'm not sure we were thinking the same thing.

I was more asking, why not put the content of the object into the k/v
instead of the big_file_id:offset? My thinking was that for smaller
objects, you would just return the data immediately upon reading the k/v,
rather than then needing to go find the big file and read the offset.
However, I'm painfully aware that those directly involved with the problem
have likely thought of this. However, the experiments don't seem to show
that this was attempted. Perhaps I'm zooming too far out to see the real
problem space. You can all tell me to take my spray paint can and stop
staring at the bike shed if this is just too annoying. Seriously.

Of course, one important thing is, what does one consider "small"? Seems
like there's a size where the memory footprint of storing it in the
k/v would be justifiable if reads just returned immediately from k/v
vs. needing to also go get data from a big file on disk. Perhaps that
size is too low to really matter. I was hoping that this had been
considered and there was documentation, but I don't really see it.
Right, we had considered this when we started the project: storing 
small objects directly in the KV. It would not be too difficult to do, 
but we see a few problems:


1) consistency
In the current design, we append data at the end of a "big file". When 
the data upload is finished, swift writes the metadata and commits the 
file. This triggers a fsync(). Only then do we return. We can rely on 
the data being stable on disk, even if there is a power loss.  Because 
we fallocate() space for the "big files" beforehand, we can also hope to 
have mostly sequential disk IO.

(Important as most swift clusters use SATA disks).

Once the object has been committed, we create an entry for it in the KV. 
This is done asynchronously, because synchronous writes on the KV kills 
performance. If we lose power, we lose the latest data. After the 
server is rebooted, we have to scan the end of volumes to create missing 
entries in the KV. (I will not discuss this in detail in this email to 
keep this short, but we can discuss it in another thread, or I can post 
some information on the wiki).
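
For what it's worth, here is a minimal sketch of that write path as described
above (illustrative only, not the real code; a plain dict stands in for the
real key/value store):

import os
import queue
import threading

kv = {}                    # stand-in for the real KV store
kv_queue = queue.Queue()   # KV entries are written asynchronously


def _kv_writer():
    while True:
        name, entry = kv_queue.get()
        kv[name] = entry

threading.Thread(target=_kv_writer, daemon=True).start()


def put_object(volume_fd, name, payload):
    """Append the object to the big file, fsync, then index it in the KV."""
    offset = os.lseek(volume_fd, 0, os.SEEK_END)
    os.write(volume_fd, payload)
    os.fsync(volume_fd)  # data is durable on disk before we return
    # Asynchronous: on power loss this entry may be missing and is rebuilt
    # by scanning the end of the volume, as described above.
    kv_queue.put((name, (offset, len(payload))))
    return offset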


If we put small objects in the KV, we would need to do synchronous 
writes to make sure we don't lose data.
Also, currently we can completely reconstruct the KV from the "big 
files". That would no longer be possible.



2) performance
On our clusters we see about 40% of physical disk IO being caused by 
readdir().
We want to serve directory listing requests from memory. So "small" 
means "the KV can fit in the page cache".
We estimate that we need the size per object to be below 50 bytes, which 
doesn't leave much room for data.
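
(To put rough numbers on that, purely as an illustration and not figures from 
our clusters: a drive holding 20 million objects at ~50 bytes per KV entry is 
20,000,000 x 50 B, about 1 GB of KV data to keep resident in the page cache; 
inlining even 4 KB objects would multiply each entry by roughly 80, i.e. about 
two orders of magnitude.)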


LevelDB causes write amplification, as it will regularly copy data to 
different files (levels) to keep keys compressed and in sorted order. If 
we store object data within the KV, it will be copied around multiple 
times as well.



Finally, it is also simpler to have only one path to handle. Beyond 
these issues, it would not be difficult to store data in the KV. This is 
something we can revisit after more tests and maybe some production 
experience.




Also the "writing your own filesystem" option in experiments seemed
more like a thing to do if you left the k/v stores out entirely.






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-19 Thread Thierry Carrez
Amrith Kumar wrote:
> [...]
> An important aspect of making this proposal work is that we seek to
> eliminate the effort (planning, and coding) involved in migrating
> existing Trove v1 deployments to the proposed Trove v2. Effectively,
> with work beginning on Trove v2 as proposed here, Trove v1 as released
> with Pike will be marked as deprecated and users will have to migrate to
> Trove v2 when it becomes available.
> 
> While I would very much like to continue to support the users on Trove
> v1 through this transition, the simple fact is that absent community
> participation this will be impossible. Furthermore, given that there are
> no production deployments of Trove at this time, it seems pointless to
> build that upgrade path from Trove v1 to Trove v2; it would be the
> proverbial bridge from nowhere.
> [...]
From an OpenStack project naming perspective, IMHO the line between a
"v2" and a completely new project (with a new name) is whether you
provide an upgrade path. I feel like if you won't support v1 users at
all (and I understand the reasons why you wouldn't), the new project
should not be called "Trove v2", but "Hoard". I don't really want to set
a precedent of breaking users by restarting from scratch and calling it
"v2", while everywhere else we encourage projects to never break their
users.

In all cases, providing offline tooling to migrate your Trove resources
to Hoard equivalents would be a nice plus, but I'd say that this tooling
is likely to appear if there is a need. Just be receptive to the idea of
adding that in a tools/ directory :)

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Do we still support core plugin not based on the ML2 framework?

2017-06-19 Thread Édouard Thuleau
Oops, sent too fast, sorry. I try again.

Hi,

Since the Mitaka release, a default service plugin list is loaded when the Neutron
server starts [1]. That list is not editable and was extended with a few services
[2]. But all of them rely on the Neutron DB model.

If a core driver is not based on the ML2 core plugin framework, or not based on
the 'neutron.db.models_v2' classes, none of those service plugins will work.

So my first question is: does Neutron still support core plugins that are not
based on ML2 or the 'neutron.db.models_v2' classes?

If yes, I would like to propose two solutions:
- permit the core plugin to override the service plugin class with its own
implementation, while keeping the current Neutron-DB-based services as the
default.
- modify all default service plugins to use the service plugin driver
framework [3], and set the current Neutron-DB-based implementation as the
default driver for each service. That permits core drivers not based on the
Neutron DB to specify a driver. That solution was adopted in the
networking-bgpvpn project, where we can find two abstract driver classes, one for
core drivers based on the Neutron DB model [4] and one used by core drivers not based
on the DB [5], such as the Contrail driver [6].

[1] 
https://github.com/openstack/neutron/commit/aadf2f30f84dff3d85f380a7ff4e16dbbb0c6bb0#diff-9169a6595980d19b2649d5bedfff05ce
[2] 
https://github.com/openstack/neutron/blob/master/neutron/plugins/common/constants.py#L43
[3] 
https://github.com/openstack/neutron/blob/master/neutron/services/service_base.py#L27
[4] 
https://github.com/openstack/networking-bgpvpn/blob/master/networking_bgpvpn/neutron/services/service_drivers/driver_api.py#L226
[5] 
https://github.com/openstack/networking-bgpvpn/blob/master/networking_bgpvpn/neutron/services/service_drivers/driver_api.py#L23
[6] 
https://github.com/Juniper/contrail-neutron-plugin/blob/master/neutron_plugin_contrail/plugins/opencontrail/networking_bgpvpn/contrail.py#L36

Regards,
Édouard.

On Mon, Jun 19, 2017 at 10:47 AM, Édouard Thuleau
 wrote:
> Hi,
> Since Mitaka release [1], a default service plugins list is loaded
> when Neutron server starts. That list is not editable and was extended
> with few services [2]. But none of th
>
> [1] 
> https://github.com/openstack/neutron/commit/aadf2f30f84dff3d85f380a7ff4e16dbbb0c6bb0#diff-9169a6595980d19b2649d5bedfff05ce
> [2] 
> https://github.com/openstack/neutron/blob/master/neutron/plugins/common/constants.py#L43

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Do we still support core plugin not based on the ML2 framework?

2017-06-19 Thread Édouard Thuleau
Hi,
Since Mitaka release [1], a default service plugins list is loaded
when Neutron server starts. That list is not editable and was extended
with few services [2]. But none of th

[1] 
https://github.com/openstack/neutron/commit/aadf2f30f84dff3d85f380a7ff4e16dbbb0c6bb0#diff-9169a6595980d19b2649d5bedfff05ce
[2] 
https://github.com/openstack/neutron/blob/master/neutron/plugins/common/constants.py#L43

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] notification update week 25

2017-06-19 Thread Balazs Gibizer

Hi,

Here is the status update / focus setting mail about notification work 
for week 25.


Bugs

[Undecided] https://bugs.launchpad.net/nova/+bug/1684860 Versioned 
server notifications don't include updated_at
Takashi proposed the fix https://review.openstack.org/#/c/475276/ that 
looks good.


[Low] https://bugs.launchpad.net/nova/+bug/1696152 nova notifications 
use nova-api as binary name instead of nova-osapi_compute
Agreed not to change the binary name in the notifications. Instead we 
make an enum for that name to show that the name is intentional.



Versioned notification transformation
-
Patches needs only a second +2:
* https://review.openstack.org/#/c/385644/ Transform rescue/unrescue 
instance notifications
* https://review.openstack.org/#/c/402124/ Transform 
instance.live_migration_rollback notification
* https://review.openstack.org/#/c/460029/ Transform 
instance.soft_delete notifications
* https://review.openstack.org/#/c/453077/ Add snapshot id to the 
snapshot notifications


Patches that looks good from the subteam perspective:
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/versioned-notification-transformation-pike+label:Code-Review%253E%253D%252B1+label:Verified%253E%253D1+AND+NOT+label:Code-Review%253C0


Searchlight integration
---
bp additional-notification-fields-for-searchlight
~
https://review.openstack.org/#/q/topic:bp/additional-notification-fields-for-searchlight+status:open
First patch in the series needs just a second +2. The rest needs 
general review.



Small improvements
~~
* https://review.openstack.org/#/c/428199/ Improve assertJsonEqual
error reporting

* https://review.openstack.org/#/q/topic:refactor-notification-samples
Factor out duplicated notification sample data
This is a start of a longer patch series to deduplicate notification
sample data. The third patch already shows how much sample data can be
deleted from nova tree. We added a minimal hand rolled json ref
implementation to notification sample test as the existing python json
ref implementations are not well maintained.


Weekly meeting
--
The notification subteam holds its weekly meeting on Tuesday 17:00 UTC
on openstack-meeting-4. The next meeting will be held on 20th of June.
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170620T17

Cheers,
gibi


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] tempest failures when deploying neutron-server in wsgi with apache

2017-06-19 Thread Gary Kotton
+1

From: Kevin Benton 
Reply-To: OpenStack List 
Date: Monday, June 19, 2017 at 10:33 AM
To: OpenStack List 
Subject: Re: [openstack-dev] [neutron] tempest failures when deploying 
neutron-server in wsgi with apache

Thanks. Maybe this would be a good opportunity to just have people start 
putting everything in neutron.conf if they want to switch to wsgi.

On Mon, Jun 19, 2017 at 12:21 AM, Matthew Treinish 
> wrote:
On Mon, Jun 19, 2017 at 12:09:12AM -0700, Kevin Benton wrote:
> I've been working on Victor's patch a bit. One thing that isn't clear to me
> is how we can get the neutron.conf options loaded when using WSGI. How are
> other projects doing this?

Most projects are using a default location, for example:

https://review.openstack.org/#/c/459450/11/glance/common/wsgi_app.py

Which IIRC I just looked at how other project's wsgi entrypoints were doing it
when I wrote that. The issue I think we'll hit with Neutron is that by default
we tell everyone to do that annoying multi-file config setup, which makes doing
a default like this difficult. Personally I think we need to change that,
because it's not needed and makes it generally confusing, but even if we did
it wouldn't solve the upgrade path from non-wsgi to wsgi.

-Matt Treinish

>
> On Fri, Jun 2, 2017 at 7:44 AM, Emilien Macchi 
> > wrote:
>
> > On Thu, Jun 1, 2017 at 10:28 PM, Morales, Victor
> > > wrote:
> > > Hi Emilien,
> > >
> > > I noticed that the configuration file was created using puppet.  I
> > submitted a patch[1] that was targeting to include the changes in Devstack.
> > My major concern is with the value of WSGIScriptAlias which should be
> > pointing to WSGI script.
> >
> > Thanks for looking, the script that is used is from
> > /usr/bin/neutron-api which is I think correct. If you look at logs,
> > you can see that API actually works but some tempest tests fail
> > though...
> >
> > > Regards/Saludos
> > > Victor Morales
> > >
> > > [1] https://review.openstack.org/#/c/439191
> > >
> > > On 5/31/17, 4:40 AM, "Emilien Macchi" 
> > > > wrote:
> > >
> > > Hey folks,
> > >
> > > I've been playing with deploying Neutron in WSGI with Apache and
> > > Tempest tests fail on spawning Nova server when creating Neutron
> > > ports:
> > > http://logs.openstack.org/89/459489/4/check/gate-puppet-
> > openstack-integration-4-scenario001-tempest-centos-7/
> > f2ee8bf/console.html#_2017-05-30_13_09_22_715400
> > >
> > > I haven't found anything useful in neutron-server logs:
> > > http://logs.openstack.org/89/459489/4/check/gate-puppet-
> > openstack-integration-4-scenario001-tempest-centos-7/
> > f2ee8bf/logs/apache/neutron_wsgi_access_ssl.txt.gz
> > >
> > > Before I file a bug in neutron, can anyone look at the logs with me
> > > and see if I missed something in the config:
> > > http://logs.openstack.org/89/459489/4/check/gate-puppet-
> > openstack-integration-4-scenario001-tempest-centos-7/
> > f2ee8bf/logs/apache_config/10-neutron_wsgi.conf.txt.gz
> > >
> > > Thanks for the help,
> > > --
> > > Emilien Macchi
> > >
> > > 
> > __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: 
> > > openstack-dev-requ...@lists.openstack.org?subject:
> > unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > >
> > > 
> > __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: 
> > > openstack-dev-requ...@lists.openstack.org?subject:
> > unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > --
> > Emilien Macchi
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: 
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] tempest failures when deploying neutron-server in wsgi with apache

2017-06-19 Thread Kevin Benton
Thanks. Maybe this would be a good opportunity to just have people start
putting everything in neutron.conf if they want to switch to wsgi.

On Mon, Jun 19, 2017 at 12:21 AM, Matthew Treinish 
wrote:

> On Mon, Jun 19, 2017 at 12:09:12AM -0700, Kevin Benton wrote:
> > I've been working on Victor's patch a bit. One thing that isn't clear to
> me
> > is how we can get the neutron.conf options loaded when using WSGI. How
> are
> > other projects doing this?
>
> Most projects are using a default location, for example:
>
> https://review.openstack.org/#/c/459450/11/glance/common/wsgi_app.py
>
> Which IIRC I just looked at how other project's wsgi entrypoints were
> doing it
> when I wrote that. The issue I think we'll hit with Neutron is that by
> default
> we tell everyone to do that annoying multi-file config setup, which makes
> doing
> a default like this difficult. Personally I think we need to change that,
> because it's not needed and makes it generally confusing, but even if we
> did
> it wouldn't solve the upgrade path from non-wsgi to wsgi.
>
> -Matt Treinish
>
> >
> > On Fri, Jun 2, 2017 at 7:44 AM, Emilien Macchi 
> wrote:
> >
> > > On Thu, Jun 1, 2017 at 10:28 PM, Morales, Victor
> > >  wrote:
> > > > Hi Emilien,
> > > >
> > > > I noticed that the configuration file was created using puppet.  I
> > > submitted a patch[1] that was targeting to include the changes in
> Devstack.
> > > My major concern is with the value of WSGIScriptAlias which should be
> > > pointing to WSGI script.
> > >
> > > Thanks for looking, the script that is used is from
> > > /usr/bin/neutron-api which is I think correct. If you look at logs,
> > > you can see that API actually works but some tempest tests fail
> > > though...
> > >
> > > > Regards/Saludos
> > > > Victor Morales
> > > >
> > > > [1] https://review.openstack.org/#/c/439191
> > > >
> > > > On 5/31/17, 4:40 AM, "Emilien Macchi"  wrote:
> > > >
> > > > Hey folks,
> > > >
> > > > I've been playing with deploying Neutron in WSGI with Apache and
> > > > Tempest tests fail on spawning Nova server when creating Neutron
> > > > ports:
> > > > http://logs.openstack.org/89/459489/4/check/gate-puppet-
> > > openstack-integration-4-scenario001-tempest-centos-7/
> > > f2ee8bf/console.html#_2017-05-30_13_09_22_715400
> > > >
> > > > I haven't found anything useful in neutron-server logs:
> > > > http://logs.openstack.org/89/459489/4/check/gate-puppet-
> > > openstack-integration-4-scenario001-tempest-centos-7/
> > > f2ee8bf/logs/apache/neutron_wsgi_access_ssl.txt.gz
> > > >
> > > > Before I file a bug in neutron, can anyone look at the logs with
> me
> > > > and see if I missed something in the config:
> > > > http://logs.openstack.org/89/459489/4/check/gate-puppet-
> > > openstack-integration-4-scenario001-tempest-centos-7/
> > > f2ee8bf/logs/apache_config/10-neutron_wsgi.conf.txt.gz
> > > >
> > > > Thanks for the help,
> > > > --
> > > > Emilien Macchi
> > > >
> > > > 
> > > __
> > > > OpenStack Development Mailing List (not for usage questions)
> > > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> > > unsubscribe
> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/
> openstack-dev
> > > >
> > > >
> > > > 
> > > __
> > > > OpenStack Development Mailing List (not for usage questions)
> > > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> > > unsubscribe
> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > >
> > >
> > > --
> > > Emilien Macchi
> > >
> > > 
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
>
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [neutron] security group OVO change

2017-06-19 Thread Gary Kotton
Sorry for being vague – have been debugging.
We override the base method:

with db_api.context_manager.writer.using(context):
    secgroup_db = (
        super(NsxV3Plugin, self).create_security_group(
            context, security_group, default_sg))
    nsx_db.save_sg_mappings(context,
                            secgroup_db['id'],
                            ns_group['id'],
                            firewall_section['id'])
    self._process_security_group_properties_create(context,
                                                    secgroup_db,
                                                    secgroup,
                                                    default_sg)

The secgroup_db that is returned always has empty rules. If I remove the 
transaction then it works.
Still trying to figure out why, when we call:

with db_api.context_manager.writer.using(context):
    secgroup_db = (
        super(NsxV3Plugin, self).create_security_group(…

the rules are not populated. The db_api.context_manager.writer.using is what is 
causing the problem.

As a workaround we re-read the object when we need to process the rules. Not 
sure if anyone else has hit this.
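Roughly, the workaround looks like this (an illustrative sketch, not the exact change):

with db_api.context_manager.writer.using(context):
    secgroup_db = (
        super(NsxV3Plugin, self).create_security_group(
            context, security_group, default_sg))
    # ... save mappings / process properties as above ...
# Re-read outside the transaction so the default rules are populated:
secgroup_db = self.get_security_group(context, secgroup_db['id'])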
Thanks
Gary

From: Kevin Benton 
Reply-To: OpenStack List 
Date: Monday, June 19, 2017 at 10:01 AM
To: OpenStack List 
Cc: "isaku.yamah...@gmail.com" 
Subject: Re: [openstack-dev] [neutron] security group OVO change

Do you mean the callback event for AFTER_CREATE is missing the rules when it's 
for default security groups?

On Sun, Jun 18, 2017 at 4:44 AM, Gary Kotton 
> wrote:
Hi,
That patch looks good. We still have an issue in that creating a security
group does not return the list of the default rules.
Thanks
Gary

On 6/17/17, 2:33 AM, "Isaku Yamahata" 
> wrote:

It also broke networking-odl.
The patch[1] is needed to unbreak.
[1] https://review.openstack.org/#/c/448420/

The necessary db info is taken from context.session.new,
but with OVO those objects expunge themselves in the create method.
That info needs to be passed as a callback argument instead.
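
For illustration, the kind of change meant (sketch only; the exact module
path, neutron.callbacks vs neutron_lib.callbacks, and the kwargs passed
depend on the version in use, and secgroup_dict is a stand-in name):

    # Sketch: pass what receivers need explicitly, instead of letting them
    # dig it out of context.session.new (empty once OVO expunges objects).
    from neutron.callbacks import events, registry, resources

    registry.notify(resources.SECURITY_GROUP, events.AFTER_CREATE, self,
                    context=context,
                    security_group=secgroup_dict,  # full dict, rules included
                    is_default=default_sg)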

Thanks,

On Fri, Jun 16, 2017 at 01:25:28PM -0700,
Ihar Hrachyshka > wrote:

> To close the loop here,
>
> - this also broke heat py3 job (https://launchpad.net/bugs/1698355)
> - we polished https://review.openstack.org/474575 to fix both
> vmware-nsx and heat issues
> - I also posted a patch for oslo.serialization for the bug that
> triggered MemoryError in heat gate:
> https://review.openstack.org/475052
> - the vmware-nsx adoption patch is at:
> https://review.openstack.org/#/c/474608/ and @boden is working on it,
> should be ready to go in due course.
>
> Thanks and sorry for inconveniences,
> Ihar
>
> On Thu, Jun 15, 2017 at 6:17 AM, Gary Kotton 
> wrote:
> > Hi,
> >
> > The commit https://review.openstack.org/284738 has broken decomposed 
plugins
> > (those that extend security groups and rules). The reason for this is 
that
> > there is an extend callback that we use which expects to get a database
> > object and the aforementioned patch passes a new neutron object.
> >
> > I have posted [i] to temporarily address the issue. An alternative is to
> > revert the patch until the decomposed plugins can figure out how to
> > correctly address this.
> >
> > Thanks
> >
> > Gary
> >
> > [i] https://review.openstack.org/474575
> >
> >
> > 
__
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Isaku Yamahata >

__
OpenStack Development Mailing List (not for usage questions)

Re: [openstack-dev] [neutron] tempest failures when deploying neutron-server in wsgi with apache

2017-06-19 Thread Matthew Treinish
On Mon, Jun 19, 2017 at 12:09:12AM -0700, Kevin Benton wrote:
> I've been working on Victor's patch a bit. One thing that isn't clear to me
> is how we can get the neutron.conf options loaded when using WSGI. How are
> other projects doing this?

Most projects are using a default location, for example: 

https://review.openstack.org/#/c/459450/11/glance/common/wsgi_app.py

IIRC, when I wrote that I just looked at how other projects' wsgi entrypoints
were doing it. The issue I think we'll hit with Neutron is that by default
we tell everyone to do that annoying multi-file config setup, which makes doing
a default like this difficult. Personally I think we need to change that,
because it's not needed and makes it generally confusing, but even if we did
it wouldn't solve the upgrade path from non-wsgi to wsgi.
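
For illustration, the shape of that pattern (a minimal sketch with an assumed
default path and placeholder app factory, not necessarily what Neutron will
end up shipping):

    from oslo_config import cfg
    from oslo_log import log as logging

    CONF = cfg.CONF


    def init_application():
        # Under Apache/uwsgi there is no command line to parse, so fall back
        # to a well-known default file instead of --config-file.
        logging.register_options(CONF)
        CONF(args=[], project='neutron',
             default_config_files=['/etc/neutron/neutron.conf'])
        logging.setup(CONF, 'neutron')

        # Building the real paste/router pipeline is project specific and
        # omitted; return a placeholder WSGI callable for this sketch.
        def application(environ, start_response):
            start_response('200 OK', [('Content-Type', 'text/plain')])
            return [b'neutron API placeholder']
        return application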

-Matt Treinish

> 
> On Fri, Jun 2, 2017 at 7:44 AM, Emilien Macchi  wrote:
> 
> > On Thu, Jun 1, 2017 at 10:28 PM, Morales, Victor
> >  wrote:
> > > Hi Emilien,
> > >
> > > I noticed that the configuration file was created using puppet.  I
> > submitted a patch[1] that was targeting to include the changes in Devstack.
> > My major concern is with the value of WSGIScriptAlias which should be
> > pointing to WSGI script.
> >
> > Thanks for looking, the script that is used is from
> > /usr/bin/neutron-api which is I think correct. If you look at logs,
> > you can see that API actually works but some tempest tests fail
> > though...
> >
> > > Regards/Saludos
> > > Victor Morales
> > >
> > > [1] https://review.openstack.org/#/c/439191
> > >
> > > On 5/31/17, 4:40 AM, "Emilien Macchi"  wrote:
> > >
> > > Hey folks,
> > >
> > > I've been playing with deploying Neutron in WSGI with Apache and
> > > Tempest tests fail on spawning Nova server when creating Neutron
> > > ports:
> > > http://logs.openstack.org/89/459489/4/check/gate-puppet-
> > openstack-integration-4-scenario001-tempest-centos-7/
> > f2ee8bf/console.html#_2017-05-30_13_09_22_715400
> > >
> > > I haven't found anything useful in neutron-server logs:
> > > http://logs.openstack.org/89/459489/4/check/gate-puppet-
> > openstack-integration-4-scenario001-tempest-centos-7/
> > f2ee8bf/logs/apache/neutron_wsgi_access_ssl.txt.gz
> > >
> > > Before I file a bug in neutron, can anyone look at the logs with me
> > > and see if I missed something in the config:
> > > http://logs.openstack.org/89/459489/4/check/gate-puppet-
> > openstack-integration-4-scenario001-tempest-centos-7/
> > f2ee8bf/logs/apache_config/10-neutron_wsgi.conf.txt.gz
> > >
> > > Thanks for the help,
> > > --
> > > Emilien Macchi
> > >
> > > 
> > __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> > unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > >
> > > 
> > __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> > unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > --
> > Emilien Macchi
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirements] How about using boto3 instead of boto in requirements

2017-06-19 Thread jiaopengju
Hi, everyone:
I have a blueprint of freezer to support s3 storage driver [1], so I need to 
add boto3 to the requirements [2].
Why use boto3 but not boto?
Boto3 [3] is the next version of Boto; it is now stable and recommended for
general use. It can be used side-by-side with Boto in the same project, so it
is easy to start using Boto3 in your existing projects as well as new projects.
Going forward, API updates and all new feature work will be focused on Boto3.
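
For illustration, the flavor of the boto3 S3 API the driver would build on
(the endpoint, credentials and bucket names below are placeholders, not values
from the blueprint):

    import boto3

    # Placeholder endpoint and credentials purely for illustration.
    s3 = boto3.client('s3',
                      endpoint_url='http://s3.example.com:8080',
                      aws_access_key_id='ACCESS_KEY',
                      aws_secret_access_key='SECRET_KEY')

    s3.create_bucket(Bucket='freezer-backups')
    s3.put_object(Bucket='freezer-backups',
                  Key='backup-001.tar.gz',
                  Body=b'backup payload')
    for obj in s3.list_objects(Bucket='freezer-backups').get('Contents', []):
        print(obj['Key'], obj['Size'])
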


Problem
Boto3 requires botocore, jmespath and s3transfer. botocore and jmespath are
already used by boto; this is because boto is used by ec2-api (and
heat-cfntools, rally and swift3). After adding Boto3 to the requirements, we
will have both the old and the new libraries for the time being.


My suggestion
How about using boto3 instead of boto? This will require us to do some
migration from boto to boto3.


Does anyone have ideas or suggestions? Thank you very much!


[1]https://review.openstack.org/#/c/471295
[2]https://review.openstack.org/#/c/473067
[3]https://github.com/boto/boto3


Pengju Jiao
mail: jiaopen...@cmss.chinamobile.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] tempest failures when deploying neutron-server in wsgi with apache

2017-06-19 Thread Kevin Benton
I've been working on Victor's patch a bit. One thing that isn't clear to me
is how we can get the neutron.conf options loaded when using WSGI. How are
other projects doing this?

On Fri, Jun 2, 2017 at 7:44 AM, Emilien Macchi  wrote:

> On Thu, Jun 1, 2017 at 10:28 PM, Morales, Victor
>  wrote:
> > Hi Emilien,
> >
> > I noticed that the configuration file was created using puppet.  I
> submitted a patch[1] that was targeting to include the changes in Devstack.
> My major concern is with the value of WSGIScriptAlias which should be
> pointing to WSGI script.
>
> Thanks for looking, the script that is used is from
> /usr/bin/neutron-api which is I think correct. If you look at logs,
> you can see that API actually works but some tempest tests fail
> though...
>
> > Regards/Saludos
> > Victor Morales
> >
> > [1] https://review.openstack.org/#/c/439191
> >
> > On 5/31/17, 4:40 AM, "Emilien Macchi"  wrote:
> >
> > Hey folks,
> >
> > I've been playing with deploying Neutron in WSGI with Apache and
> > Tempest tests fail on spawning Nova server when creating Neutron
> > ports:
> > http://logs.openstack.org/89/459489/4/check/gate-puppet-
> openstack-integration-4-scenario001-tempest-centos-7/
> f2ee8bf/console.html#_2017-05-30_13_09_22_715400
> >
> > I haven't found anything useful in neutron-server logs:
> > http://logs.openstack.org/89/459489/4/check/gate-puppet-
> openstack-integration-4-scenario001-tempest-centos-7/
> f2ee8bf/logs/apache/neutron_wsgi_access_ssl.txt.gz
> >
> > Before I file a bug in neutron, can anyone look at the logs with me
> > and see if I missed something in the config:
> > http://logs.openstack.org/89/459489/4/check/gate-puppet-
> openstack-integration-4-scenario001-tempest-centos-7/
> f2ee8bf/logs/apache_config/10-neutron_wsgi.conf.txt.gz
> >
> > Thanks for the help,
> > --
> > Emilien Macchi
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] security group OVO change

2017-06-19 Thread Kevin Benton
Do you mean the callback event for AFTER_CREATE is missing the rules when
it's for default security groups?

On Sun, Jun 18, 2017 at 4:44 AM, Gary Kotton  wrote:

> Hi,
> That patch looks good. We still have an issue in that creating a security
> group does not return the list of the default rules.
> Thanks
> Gary
>
> On 6/17/17, 2:33 AM, "Isaku Yamahata"  wrote:
>
> It also broke networking-odl.
> The patch[1] is needed to unbreak.
> [1] https://review.openstack.org/#/c/448420/
>
> The necessary db info is taken from context.session.new,
> but with OVO those objects expunge themselves in the create method.
> That info needs to be passed as a callback argument instead.
>
> Thanks,
>
> On Fri, Jun 16, 2017 at 01:25:28PM -0700,
> Ihar Hrachyshka  wrote:
>
> > To close the loop here,
> >
> > - this also broke heat py3 job (https://launchpad.net/bugs/1698355)
> > - we polished https://review.openstack.org/474575 to fix both
> > vmware-nsx and heat issues
> > - I also posted a patch for oslo.serialization for the bug that
> > triggered MemoryError in heat gate:
> > https://review.openstack.org/475052
> > - the vmware-nsx adoption patch is at:
> > https://review.openstack.org/#/c/474608/ and @boden is working on
> it,
> > should be ready to go in due course.
> >
> > Thanks and sorry for inconveniences,
> > Ihar
> >
> > On Thu, Jun 15, 2017 at 6:17 AM, Gary Kotton 
> wrote:
> > > Hi,
> > >
> > > The commit https://review.openstack.org/284738 has broken
> decomposed plugins
> > > (those that extend security groups and rules). The reason for this
> is that
> > > there is an extend callback that we use which expects to get a
> database
> > > object and the aforementioned patch passes a new neutron object.
> > >
> > > I have posted [i] to temporarily address the issue. An alternative
> is to
> > > revert the patch until the decomposed plugins can figure out how to
> > > correctly address this.
> > >
> > > Thanks
> > >
> > > Gary
> > >
> > > [i] https://review.openstack.org/474575
> > >
> > >
> > > 
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> Isaku Yamahata 
>
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

