Re: [openstack-dev] [all] [tc] Policy Goal Queens-2 Update

2017-11-30 Thread hie...@vn.fujitsu.com
FYI, I have updated the topic for Heat's work [1]. And finally there are no more
projects in the 'Not Started' list. :-)

[1]. 
https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:policy-and-docs-in-code

Regards,
Hieu

> -Original Message-
> From: Lance Bragstad [mailto:lbrags...@gmail.com]
> Sent: Friday, December 01, 2017 12:01 PM
> To: OpenStack Development Mailing List (not for usage questions) <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [all] [tc] Policy Goal Queens-2 Update
> 
> 
> 
> On 11/30/2017 07:00 PM, hie...@vn.fujitsu.com wrote:
> > Lance,
> >
> >>> For the Swift project, I don't see oslo.policy in requirements.txt for now,
> >>> so I am not sure they need to implement policy in code, and we have the same
> >>> thing with Solum.
> >> So does that mean these can be removed as well? I'm wondering if
> >> there is an official process here, or just a simple sign-off from a 
> >> project maintainer?
> > Swift does not use oslo.policy and uses its own mechanism instead, so I guess
> > we can remove Swift along with the remaining networking-* plugins as well.
> >
> > BTW, ceilometer has already deprecated and removed the ceilometer API in
> > Queens, so we can also remove ceilometer from the list. [1]
> >
> > I have created a PR for all of the above changes in [2].
> Merged. Thanks for looking into this. New results should be available in the
> burndown chart.
> > Thanks,
> > Hieu.
> >
> > [1]. https://github.com/openstack/ceilometer/commit/d881dd52289d453b9f9d94c7c32c0672a70a8064
> > [2]. https://github.com/lbragstad/openstack-doc-migration-burndown/pull/1
> >
> >
> >> -Original Message-
> >> From: Lance Bragstad [mailto:lbrags...@gmail.com]
> >> Sent: Thursday, November 30, 2017 10:41 PM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> <openstack-dev@lists.openstack.org>
> >> Subject: Re: [openstack-dev] [all] [tc] Policy Goal Queens-2 Update
> >>
> >>
> >>
> >> On 11/29/2017 09:13 PM, da...@vn.fujitsu.com wrote:
> >>> Hi all,
> >>>
> >>> I just want to share some related things with anyone who is interested.
> >>>
> >>> For the Neutron projects, I have discussed this with them [1], but it has
> >>> not really started; they want to consider all of the networking projects
> >>> first, and I'm still waiting for feedback to define the right way to
> >>> implement policy-in-code for networking projects.
> >>> For the other Neutron extensions, we got some recommendations [2][3] that
> >>> we do not need to implement policy-in-code in those projects because the
> >>> policies are already registered in Neutron, so I think we can remove
> >>> neutron-fwaas, neutron-dynamic-routing, neutron-lib and even the other
> >>> networking plugins from the "Not Started" list.
> >> Awesome, thanks for the update! I've gone ahead and removed these
> >> from the burndown chart [0]. Let me know if there are any others that
> >> fall into this category and I'll get things updated in the tracking tool.
> >>
> >> [0] https://github.com/lbragstad/openstack-doc-migration-burndown/commit/f34c2f56692230f104354240bf0e4378dc0fea82
> >>> For the Swift project, I don't see oslo.policy in requirements.txt for now,
> >>> so I am not sure they need to implement policy in code, and we have the same
> >>> thing with Solum.
> >> So does that mean these can be removed as well? I'm wondering if
> >> there is an official process here, or just a simple sign-off from a 
> >> project maintainer?
> >>> [1] http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2017-10-31.log.html
> >>> [2] http://eavesdrop.openstack.org/irclogs/%23openstack-lbaas/%23openstack-lbaas.2017-10-06.log.html#t2017-10-06T02:50:10
> >>> [3] https://review.openstack.org/#/c/509389/
> >>>
> >>> Dai
> >>>
> >>>
> >>
> >>> __________________________________________________________________________
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>> http://lists.open

Re: [openstack-dev] [all] [tc] Policy Goal Queens-2 Update

2017-11-30 Thread hie...@vn.fujitsu.com
Lance,

> > For the Swift project, I don't see oslo.policy in requirements.txt for now,
> > so I am not sure they need to implement policy in code, and we have the same
> > thing with Solum.
> So does that mean these can be removed as well? I'm wondering if there is an 
> official
> process here, or just a simple sign-off from a project maintainer?

Swift does not use oslo.policy and uses its own mechanism instead, so I guess
we can remove Swift along with the remaining networking-* plugins as well.

BTW, ceilometer has already deprecated and removed the ceilometer API in Queens,
so we can also remove ceilometer from the list. [1]

I have created a PR for all of the above changes in [2].

Thanks,
Hieu.

[1]. 
https://github.com/openstack/ceilometer/commit/d881dd52289d453b9f9d94c7c32c0672a70a8064
[2]. https://github.com/lbragstad/openstack-doc-migration-burndown/pull/1
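
For anyone tracking the goal who wants a concrete picture of what "policy and
docs in code" means in practice, here is a minimal, hypothetical sketch using
oslo.policy's DocumentedRuleDefault; the rule name, check string and path below
are invented for illustration and are not taken from any particular project:

# Hypothetical policy-in-code registration with oslo.policy.
# The rule name, check string and path are illustrative only.
from oslo_config import cfg
from oslo_policy import policy

rules = [
    policy.DocumentedRuleDefault(
        name='example:get_widget',
        check_str='rule:admin_or_owner',
        description='Show details for a widget.',
        operations=[{'path': '/v1/widgets/{widget_id}', 'method': 'GET'}],
    ),
]

def list_rules():
    # The oslopolicy-sample-generator / oslopolicy-policy-generator tooling
    # consumes an entry point that returns this list.
    return rules

# Register the in-code defaults so the enforcer no longer needs a policy file.
enforcer = policy.Enforcer(cfg.CONF)
enforcer.register_defaults(list_rules())

Once the defaults and their descriptions live in code like this, the sample
policy file and the policy reference documentation can be generated rather than
maintained by hand, which is what the burndown chart is tracking.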


> -Original Message-
> From: Lance Bragstad [mailto:lbrags...@gmail.com]
> Sent: Thursday, November 30, 2017 10:41 PM
> To: OpenStack Development Mailing List (not for usage questions) <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [all] [tc] Policy Goal Queens-2 Update
> 
> 
> 
> On 11/29/2017 09:13 PM, da...@vn.fujitsu.com wrote:
> > Hi all,
> >
> > I just want to share some related things with anyone who is interested.
> >
> > For the Neutron projects, I have discussed this with them [1], but it has not
> > really started; they want to consider all of the networking projects first,
> > and I'm still waiting for feedback to define the right way to implement
> > policy-in-code for networking projects.
> >
> > For the other Neutron extensions, we got some recommendations [2][3] that we
> > do not need to implement policy-in-code in those projects because the
> > policies are already registered in Neutron, so I think we can remove
> > neutron-fwaas, neutron-dynamic-routing, neutron-lib and even the other
> > networking plugins from the "Not Started" list.
> Awesome, thanks for the update! I've gone ahead and removed these from the
> burndown chart [0]. Let me know if there are any others that fall into this 
> category and
> I'll get things updated in the tracking tool.
> 
> [0] https://github.com/lbragstad/openstack-doc-migration-burndown/commit/f34c2f56692230f104354240bf0e4378dc0fea82
> >
> > For the Swift project, I don't see oslo.policy in requirements.txt for now,
> > so I am not sure they need to implement policy in code, and we have the same
> > thing with Solum.
> So does that mean these can be removed as well? I'm wondering if there is an 
> official
> process here, or just a simple sign-off from a project maintainer?
> >
> > [1] http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2017-10-31.log.html
> > [2] http://eavesdrop.openstack.org/irclogs/%23openstack-lbaas/%23openstack-lbaas.2017-10-06.log.html#t2017-10-06T02:50:10
> > [3] https://review.openstack.org/#/c/509389/
> >
> > Dai
> >
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs] [Magnum] Using common tooling for API docs

2016-08-29 Thread hie...@vn.fujitsu.com
Hi,

The Magnum api-ref work can be reviewed at [1]. Please take a look and help get
these changes merged ASAP.

[1]. https://blueprints.launchpad.net/magnum/+spec/magnum-doc-rest-api

Thanks,
Hieu LE.

From: Anne Gentle [mailto:annegen...@justwriteclick.com]
Sent: Sunday, August 21, 2016 8:20 AM
To: Shuu Mutou 
Cc: m...@redhat.com; Haruhiko Katou ; 
openstack-dev@lists.openstack.org; openstack-d...@lists.openstack.org; 
kenichi.omi...@necam.com
Subject: Re: [openstack-dev] [OpenStack-docs] [Magnum] Using common tooling for 
API docs



On Fri, Aug 19, 2016 at 2:27 AM, Shuu Mutou <shu-mu...@rf.jp.nec.com> wrote:
>   AFAIK, the API WG adopted Swagger (OpenAPI) as the common tool for API docs.
>   Anne, has that adoption changed? Otherwise, do we need to maintain many RST
> files as well?
>
>
>
> It does say either/or in the API WG guideline:
> http://specs.openstack.org/openstack/api-wg/guidelines/api-docs.html

Yes. Ken'ichi Omichi said so as well.


> This isn't about a contest between projects for the best approach. This
> is about serving end-users the best information we can.

Yes. Correct information is the best information. Accuracy is more important
than the web experience. When I was a user (and an SIer), document accuracy was
not maintained, so in the end we had to read the source code. Now, as a
developer (mainly of UI plugins), I don't want to maintain overlapping content
in several places (API source code, API reference, client help, WebUI help,
etc.), so I am putting my effort into spec auto-generation.


> I'm reporting what I'm seeing from a broader viewpoint than a single project.
> I don't have a solution other than RST/YAML for common navigation, and I'm
> asking you to provide ideas for that integration point.
>
> My vision is that even if you choose to publish with OpenAPI, you would
> find a way to make this web experience better. We can do better than this
> scattered approach. I'm asking you to find a way to unify and consider the
> web experience of a consumer of OpenStack services. Can you generate HTML
> that can plug into the openstackdocstheme we are providing as a common tool?

I need to know more about the "common tools". Please let me know: what is the
difference between the HTML built by Lars's patch and the HTML built by the
common tools? Or can fairy-slipper do that from an OpenAPI file?

Sure, sounds like there's some info missing that I can clarify.

All HTML built for OpenStack sites is copied via FTP. There's no difference
except for the CSS and JavaScript provided by openstackdocstheme and built by
os-api-ref.
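
As a rough sketch of what plugging into those common tools involves
(illustrative only; exact option names can vary by release), an api-ref Sphinx
conf.py looks something like this:

# api-ref/source/conf.py -- illustrative sketch only; options vary by project.
import openstackdocstheme

# os-api-ref supplies the REST API directives (rest_method and friends).
extensions = ['os_api_ref']

# openstackdocstheme supplies the shared CSS/JavaScript and page chrome.
html_theme = 'openstackdocs'
html_theme_path = [openstackdocstheme.get_html_theme_path()]

project = u'Example Service API Reference'
copyright = u'2016, OpenStack contributors'

Anything published that way picks up the same look, navigation and JavaScript
as the other api-ref sites, which is the integration point I keep referring to.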

Fairy-slipper is no longer being worked on as a common solution to serving all 
OpenStack API information. It was used for migration purposes.

Lars's patch could find a way to use the CSS and JS to create a seamless 
experience for end-users.

Anne



Thanks,
Shu


> -Original Message-
> From: Anne Gentle 
> [mailto:annegen...@justwriteclick.com]
> Sent: Wednesday, August 17, 2016 11:55 AM
> To: Mutou Shuu (武藤 周) <shu-mu...@rf.jp.nec.com>
> Cc: openstack-dev@lists.openstack.org; m...@redhat.com; Katou Haruhiko (加藤 治彦) <har-ka...@ts.jp.nec.com>; openstack-d...@lists.openstack.org; kenichi.omi...@necam.com
> Subject: Re: [OpenStack-docs] [openstack-dev] [Magnum] Using common
> tooling for API docs
>
>
>
> On Tue, Aug 16, 2016 at 1:05 AM, Shuu Mutou <shu-mu...@rf.jp.nec.com> wrote:
>
>
>   Hi Anne,
>
>   AFAIK, the API WG adopted Swagger (OpenAPI) as common tool for API
> docs.
>   Anne, has not the adoption been changed? Otherwise, do we need to
> maintain much RST files also?
>
>
>
> It does say either/or in the API WG guideline:
> http://specs.openstack.org/openstack/api-wg/guidelines/api-docs.html
>
>
>
>   IMO, so that the reference and the source code do not conflict, they should
> be as close to each other as possible, as follows. That decreases maintenance
> costs for the documents and increases document reliability. So I believe our
> approach is more ideal.
>
>
>
>
> This isn't about a contest between projects for the best approach. This
> is about serving end-users the best information we can.
>
>
>   The Best: the references generated from source code.
>
>
>
> I don't want to argue, but anything generated from the source code suffers: if
> the source code changes in a way that reviewers don't catch as a
> backwards-incompatible change, you can break your contract.
>
>
>   Better: the references written in docstring.
>
>   We know some projects abandoned this approach, and they now use RST + YAML.
>   But we hope to decrease the maintenance cost of the documents, so we should
> not create so many RST files, I think.
>
>
>
>
> I think you'll see the evolution o

Re: [openstack-dev] [Magnum] Next auto-scaling feature design?

2016-08-19 Thread hie...@vn.fujitsu.com
Thanks for all the information.

Yeah, I hope we can have a session about auto-scaling at this summit.

From: Ton Ngo [mailto:t...@us.ibm.com]
Sent: Friday, August 19, 2016 4:19 AM
To: OpenStack Development Mailing List (not for usage questions)
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Magnum] Next auto-scaling feature design?


We have had numerous discussions on this topic, including a presentation and a
design session in Tokyo, but we have not really arrived at a consensus yet. Part
of the problem is that auto-scaling at the container level is still being
developed, so it is still a moving target.
However, a few points did emerge from the discussion (not necessarily consensus):

  *   It's preferable to have a single point of decision on auto-scaling for
both the container and infrastructure level. One approach is to make this
decision at the container orchestration level, so the infrastructure level would
just provide the service to handle requests to scale the infrastructure. This
would require coordinating support with upstream projects like Kubernetes. This
approach also means that we don't want a major component in Magnum to drive
auto-scaling.
  *   It's good to have a policy-driven mechanism for auto-scaling to handle
complex scenarios. For this, Senlin is a candidate; upstream is another potential
choice.

We may want to revisit this topic as a design session at the next summit.
Ton Ngo,


From: Hongbin Lu <hongbin...@huawei.com>
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: 08/18/2016 12:26 PM
Subject: Re: [openstack-dev] [Magnum] Next auto-scaling feature design?
____





> -Original Message-
> From: hie...@vn.fujitsu.com [mailto:hie...@vn.fujitsu.com]
> Sent: August-18-16 3:57 AM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [Magnum] Next auto-scaling feature design?
>
> Hi Magnum folks,
>
> I have some interests in our auto scaling features and currently
> testing with some container monitoring solutions such as heapster,
> telegraf and prometheus. I have seen the PoC session corporate with
> Senlin in Austin and have some questions regarding of this design:
> - We have decided to move all container management from Magnum to Zun,
> so is there only one level of scaling (node) instead of both node and
> container?
> - The PoC design show that Magnum (Magnum Scaler) need to depend on
> Heat/Ceilometer for gathering metrics and do the scaling work based on
> auto scaling policies, but is Heat/Ceilometer is the best choice for
> Magnum auto scaling?
>
> Currently, I saw that Magnum only send CPU and Memory metric to
> Ceilometer, and Heat can grab these to decide the right scaling method.
> IMO, this approach have some problems, please take a look and give
> feedbacks:
> - The AutoScaling Policy and AutoScaling Resource of Heat cannot handle
> complex scaling policies. For example:
> If CPU > 80% then scale out
> If Mem < 40% then scale in
> -> What if CPU = 90% and Mem = 30%, the conflict policy will appear.
> There are some WIP patch-set of Heat conditional logic in [1]. But IMO,
> the conditional logic of Heat also cannot resolve the conflict of
> scaling policies. For example:
> If CPU > 80% and Mem >70% then scale out If CPU < 30% or Mem < 50% then
> scale in
> -> What if CPU = 90% and Mem = 30%.
> Thus, I think that we need to implement magnum scaler for validating
> the policy conflicts.
> - Ceilometer may have troubles if we deploy thousands of COE.
>
> I think we need a new design for auto scaling feature, not for Magnum
> only but also Zun (because the scaling level of container maybe forked
> to Zun too). Here are some ideas:
> 1. Add new field enable_monitor to cluster template (ex baymodel) and
> show the monitoring URL when creating cluster (bay) complete. For
> example, we can use Prometheus as monitoring container for each cluster.
> (Heapster is the best choice for k8s, but not good enough for swarm or
> mesos).

[Hongbin Lu] Personally, I think this is a good idea.

> 2. Create Magnum scaler manager (maybe a new service):
> - Monitoring enabled monitor cluster and send metric to ceilometer if
> need.
> - Manage user-defined scaling policy: not only cpu and memory but also
> other metrics like network bw, CCU.
> - Validate user-defined scal

Re: [openstack-dev] [Magnum] Next auto-scaling feature design?

2016-08-18 Thread hie...@vn.fujitsu.com
> > Currently, I saw that Magnum only send CPU and Memory metric to Ceilometer, 
> > and Heat can grab these to decide the right scaling method. IMO, this 
> > approach have some problems, please take a look and give feedbacks:
> > - The AutoScaling Policy and AutoScaling Resource of Heat cannot handle 
> > complex scaling policies. For example: 
> > If CPU > 80% then scale out
> > If Mem < 40% then scale in
> > -> What if CPU = 90% and Mem = 30%, the conflict policy will appear.
> > There are some WIP patch-set of Heat conditional logic in [1]. But IMO, the 
> > conditional logic of Heat also cannot resolve the conflict of scaling 
> > policies. For example:
> > If CPU > 80% and Mem >70% then scale out If CPU < 30% or Mem < 50% 
> > then scale in
> > -> What if CPU = 90% and Mem = 30%.
>
> What would you like Heat to do in this scenario ? Is it that you would like 
> to have a user defined logic option as well as basic conditionals ?

Thank you, Tim, for the feedback.

Yes, I'd like Heat to validate the user-defined policies along with the Heat
template-validate mechanism.

>
> I would expect the same problem to occur in pure Heat scenarios also so a 
> user defined scaling policy would probably be of interest there too and avoid 
> code duplication.
>
> Tim

Currently, there are some blueprints like [1] related to auto-scaling policies,
but I cannot see any interest in this particular problem there; I hope someone
can point me to it.
The Magnum scaler could be a centralized spot for both Magnum and Zun to scale
COE nodes via Heat, or containers via the COE scaling API (k8s already has an
auto-scaling engine).

[1]. https://blueprints.launchpad.net/heat/+spec/as-lib

-Original Message-
From: Tim Bell [mailto:tim.b...@cern.ch] 
Sent: Thursday, August 18, 2016 3:19 PM
To: OpenStack Development Mailing List (not for usage questions)
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Magnum] Next auto-scaling feature design?


> On 18 Aug 2016, at 09:56, hie...@vn.fujitsu.com wrote:
> 
> Hi Magnum folks,
> 
> I have some interests in our auto scaling features and currently testing with 
> some container monitoring solutions such as heapster, telegraf and 
> prometheus. I have seen the PoC session corporate with Senlin in Austin and 
> have some questions regarding of this design:
> - We have decided to move all container management from Magnum to Zun, so is 
> there only one level of scaling (node) instead of both node and container?
> - The PoC design show that Magnum (Magnum Scaler) need to depend on 
> Heat/Ceilometer for gathering metrics and do the scaling work based on auto 
> scaling policies, but is Heat/Ceilometer is the best choice for Magnum auto 
> scaling? 
> 
> Currently, I saw that Magnum only send CPU and Memory metric to Ceilometer, 
> and Heat can grab these to decide the right scaling method. IMO, this 
> approach have some problems, please take a look and give feedbacks:
> - The AutoScaling Policy and AutoScaling Resource of Heat cannot handle 
> complex scaling policies. For example: 
> If CPU > 80% then scale out
> If Mem < 40% then scale in
> -> What if CPU = 90% and Mem = 30%, the conflict policy will appear.
> There are some WIP patch-set of Heat conditional logic in [1]. But IMO, the 
> conditional logic of Heat also cannot resolve the conflict of scaling 
> policies. For example:
> If CPU > 80% and Mem >70% then scale out If CPU < 30% or Mem < 50% 
> then scale in
> -> What if CPU = 90% and Mem = 30%.

What would you like Heat to do in this scenario? Is it that you would like to
have a user-defined logic option as well as basic conditionals?

I would expect the same problem to occur in pure Heat scenarios as well, so a
user-defined scaling policy would probably be of interest there too and would
avoid code duplication.

Tim

> Thus, I think that we need to implement magnum scaler for validating the 
> policy conflicts.
> - Ceilometer may have troubles if we deploy thousands of COE. 
> 
> I think we need a new design for auto scaling feature, not for Magnum only 
> but also Zun (because the scaling level of container maybe forked to Zun 
> too). Here are some ideas:
> 1. Add new field enable_monitor to cluster template (ex baymodel) and show 
> the monitoring URL when creating cluster (bay) complete. For example, we can 
> use Prometheus as monitoring container for each cluster. (Heapster is the 
> best choice for k8s, but not good enough for swarm or mesos).
> 2. Create Magnum scaler manager (maybe a new service):
> - Monitoring enabled monitor cluster and send metric to ceilometer if need.
> - Manage user-defined scaling policy: not only cpu and memory but also other 
> metrics like network bw, CCU.
> - Validate user-defined scaling policy and t

[openstack-dev] [Magnum] Next auto-scaling feature design?

2016-08-18 Thread hie...@vn.fujitsu.com
Hi Magnum folks,

I have some interest in our auto-scaling features and am currently testing some
container monitoring solutions such as Heapster, Telegraf and Prometheus. I have
seen the PoC session done in cooperation with Senlin in Austin and have some
questions regarding this design:
- We have decided to move all container management from Magnum to Zun, so is
there only one level of scaling (node) instead of both node and container?
- The PoC design shows that Magnum (the Magnum Scaler) needs to depend on
Heat/Ceilometer for gathering metrics and doing the scaling work based on
auto-scaling policies, but is Heat/Ceilometer the best choice for Magnum
auto-scaling?

Currently, Magnum only sends CPU and memory metrics to Ceilometer, and Heat can
use these to decide the right scaling action. IMO, this approach has some
problems; please take a look and give feedback:
- The AutoScaling policy and AutoScaling resources of Heat cannot handle complex
scaling policies. For example:
If CPU > 80% then scale out
If Mem < 40% then scale in
-> What if CPU = 90% and Mem = 30%? The policies conflict.
There are some WIP patch sets for Heat conditional logic in [1], but IMO Heat's
conditional logic also cannot resolve conflicting scaling policies. For example:
If CPU > 80% and Mem > 70% then scale out
If CPU < 30% or Mem < 50% then scale in
-> What if CPU = 90% and Mem = 30%?
Thus, I think we need to implement a Magnum scaler for validating policy
conflicts (a small sketch of such a validator follows the ideas below).
- Ceilometer may have trouble if we deploy thousands of COEs.

I think we need a new design for the auto-scaling feature, not only for Magnum
but also for Zun (because container-level scaling may be forked to Zun too).
Here are some ideas:
1. Add a new field enable_monitor to the cluster template (ex-baymodel) and show
the monitoring URL when cluster (bay) creation completes. For example, we can
use Prometheus as the monitoring container for each cluster. (Heapster is the
best choice for k8s, but not good enough for Swarm or Mesos.)
2. Create a Magnum scaler manager (maybe a new service):
- Monitor clusters that have monitoring enabled and send metrics to Ceilometer
if needed.
- Manage user-defined scaling policies: not only CPU and memory but also other
metrics like network bandwidth and CCU.
- Validate user-defined scaling policies and trigger Heat for scaling actions
(it could also trigger nova-scheduler for more scaling options).
- It needs a highly scalable architecture; as a first step we can implement a
simple validator method, but in the future there are other approaches, such as
using fuzzy logic or AI, to make an appropriate decision.

Some use cases for operators:
- I want to create a k8s cluster, and if CCU or network bandwidth is high, scale
out X nodes in other regions.
- I want to create a Swarm cluster, and if CPU or memory usage is too high, scale
out X nodes to keep total CPU and memory usage at about 50%.

What do you think about the above ideas/problems?
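
As mentioned above, here is a rough, hypothetical sketch (in Python) of the kind
of conflict check the "simple validator method" could start with; the policy
format and sample numbers are invented for illustration only:

# Hypothetical conflict check for user-defined scaling policies.
# The policy format and sample metrics are illustrative only.

def matches(policy, metrics):
    # True if every condition in the policy holds for the current metrics.
    ops = {'>': lambda a, b: a > b, '<': lambda a, b: a < b}
    return all(ops[op](metrics[metric], threshold)
               for metric, op, threshold in policy['conditions'])

def decide(policies, metrics):
    # Return 'scale_out', 'scale_in' or None, refusing conflicting advice.
    actions = {p['action'] for p in policies if matches(p, metrics)}
    if len(actions) > 1:
        raise ValueError('Conflicting scaling policies for %s' % metrics)
    return actions.pop() if actions else None

policies = [
    {'action': 'scale_out', 'conditions': [('cpu', '>', 80)]},
    {'action': 'scale_in',  'conditions': [('mem', '<', 40)]},
]

# CPU = 90% and Mem = 30%: both policies match, so the validator reports a
# conflict instead of letting Heat apply contradictory actions.
try:
    print(decide(policies, {'cpu': 90, 'mem': 30}))
except ValueError as exc:
    print(exc)

A real implementation would of course need richer condition logic (and/or,
cooldown periods) and would pull live metrics from the monitoring stack, but
even this much would catch the CPU = 90% / Mem = 30% case before Heat ever
sees it.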

[1]. https://blueprints.launchpad.net/heat/+spec/support-conditions-function

Thanks,
Hieu LE.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Select our project mascot/logo

2016-07-26 Thread hie...@vn.fujitsu.com
Hi all,

I think the Magnum mascot should be something really big that can hold small
things inside.
So, my $0.02 idea is the yin-yang whale:
https://s-media-cache-ak0.pinimg.com/736x/90/1d/66/901d66b496d9b5470f22981ab3c16da4.jpg

Best regards,
Hieu LE.

-Original Message-
From: Hongbin Lu [mailto:hongbin...@huawei.com] 
Sent: Wednesday, July 27, 2016 9:26 AM
To: OpenStack Development Mailing List (not for usage questions)
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Magnum] Select our project mascot/logo

Dims,

You are fast :). I believe OpenStack Foundation will coordinate in this case.

Best regards,
Hongbin

> -Original Message-
> From: Davanum Srinivas [mailto:dava...@gmail.com]
> Sent: July-26-16 9:56 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Magnum] Select our project mascot/logo
> 
> Hongbin,
> 
> Moose is already taken by Oslo team (2 summits ago :)
> 
> -- Dims
> 
> On Tue, Jul 26, 2016 at 9:48 PM, Hongbin Lu 
> wrote:
> > Hi all,
> >
> >
> >
> > Thanks for providing mascot ideas. As discussed at the team meeting, 
> > below is the short list of popular mascots. I believe you will
> receive
> > a link to vote among them later.
> >
> > * Waves - http://www.123rf.com/photo_11649085_set-of-waves.html
> >
> > * Kangaroo - http://www.supercoloring.com/pages/red-kangaroo
> >
> > * Shark - http://www.logoground.com/logo.php?id=10554
> >
> > * Majestic moose -
> > https://images.indiegogo.com/file_attachments/1328366/files/20150326083908-Mooselaughing.jpg?1427384348
> >
> >
> >
> > Best regards,
> >
> > Hongbin
> >
> >
> >
> > From: Watson, Stephen [mailto:stephen.wat...@intel.com]
> > Sent: July-26-16 12:09 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [Magnum] Select our project mascot/logo
> >
> >
> >
> > +1 for kangaroo and stallion
> >
> >
> >
> > And my own suggestion, even though it doesn't fit the container or name
> > themes directly, would be a St. Bernard, because dogs and myths are cool:
> > http://mentalfloss.com/article/20908/why-are-st-bernards-always-depicted-barrels-around-their-necks
> >
> >
> >
> > -Stephen
> >
> >
> >
> > From: Hongbin Lu 
> > Reply-To: "OpenStack Development Mailing List (not for usage
> questions)"
> > 
> > Date: Monday, July 25, 2016 at 5:54 PM
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Subject: [openstack-dev] [Magnum] Select our project mascot/logo
> >
> >
> >
> > Hi team,
> >
> >
> >
> > OpenStack want to promote individual projects by choosing a mascot 
> > to represent the project. The idea is to create a family of logos 
> > for OpenStack projects that are unique, yet immediately identifiable 
> > as
> part of OpenStack.
> > OpenStack will be using these logos to promote each project on the 
> > OpenStack website, at the Summit and in marketing materials.
> >
> >
> >
> > We can select our own mascot, and then OpenStack will have an 
> > illustrator create the logo for us. The mascot can be anything from 
> > the natural world—an animal, fish, plant, or natural feature such as
> a
> > mountain or waterfall. We need to select our top mascot candidates 
> > by the first deadline (July 27, this Wednesday). There’s more info 
> > on
> the website:
> > http://www.openstack.org/project-mascots
> >
> >
> >
> > Action Item: Everyone please let me know what is your favorite mascot.
> > You can either reply to this ML or discuss it in the next team
> meeting.
> >
> >
> >
> > Best regards,
> >
> > Hongbin
> >
> >
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> 
> 
> --
> Davanum Srinivas :: https://twitter.com/dims
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistal] Mistral logo ideas?

2016-06-26 Thread hie...@vn.fujitsu.com
Hi folks,

Maybe something simple like that: http://prntscr.com/blhcyq

From: Ilya Kutukov [ikutu...@mirantis.com]
Sent: Friday, June 24, 2016 13:25
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [mistal] Mistral logo ideas?
Maybe something like this


On Fri, Jun 24, 2016 at 2:18 PM, Ilya Kutukov  wrote:
Here is a top-down projection:
https://www.the-blueprints.com/blueprints-depot/ships/ships-france/nmf-mistral-l9013.png

On Fri, Jun 24, 2016 at 2:17 PM, Ilya Kutukov  wrote:
Look, the Mistral landing markup (white stripes and circles with numbers) looks
like a task queue:
https://patriceayme.files.wordpress.com/2014/05/mistral.jpg

On Fri, Jun 24, 2016 at 12:55 PM, Hardik  
wrote:
+1 :) 

On Friday 24 June 2016 03:08 PM, Nikolay Makhotkin wrote:
I like the idea of the logo being a stylized wind turbine. Perhaps it could be
a turbine with a gust of wind. Then we can show that Mistral harnesses the 
power of the wind :-)

I like this idea! It combines a symbol of Mistral's functionality with the wind :)

-- 
Best Regards,
Nikolay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





_

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev