Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-13 Thread Morgan Fainberg
On December 13, 2014 at 3:26:34 AM, Henry (henry4...@gmail.com) wrote:
Hi Morgan,

A good question about keystone.

In fact, Keystone is naturally suited to multi-region deployment. It exposes 
only a REST service interface, and PKI-based tokens greatly reduce the central 
service workload. So, unlike the other OpenStack services, it would not be set 
to cascade mode.


I agree that Keystone is suitable for multi-region use in some cases, but I am 
still concerned from a security standpoint. The cascade examples all assert a 
*global* tenant_id / project_id in a lot of comments/documentation. The answer 
you gave me doesn’t quite address this issue, nor the issue of a disparate 
deployment having a wildly different role-set or security profile. A PKI token 
cannot (as of today) be used with a Keystone (or OpenStack deployment) that it 
didn’t come from. This is by design: Keystone needs to control the authz for 
its local deployment (the same design as the keystone-to-keystone 
federation). 
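
To make that concrete, here is a minimal sketch of offline PKI-token 
validation, assuming the standard openssl CLI and hypothetical certificate 
paths (this mirrors, roughly, what the auth middleware does): the CMS 
signature only verifies against the issuing deployment's signing cert, so a 
token cannot be consumed by a Keystone it didn't come from.

    import subprocess

    def verify_pki_token(cms_token: bytes, signing_cert: str, ca_cert: str) -> bytes:
        # A PKI token is a CMS blob signed by the issuing Keystone's key;
        # verification fails unless signing_cert/ca_cert come from that
        # same deployment.
        return subprocess.check_output(
            ["openssl", "cms", "-verify",
             "-certfile", signing_cert,  # e.g. .../signing_cert.pem (assumed path)
             "-CAfile", ca_cert,         # e.g. .../ca.pem (assumed path)
             "-inform", "PEM", "-nodetach", "-nocerts", "-noattr"],
            input=cms_token)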

So I have two direct questions:

* Is there something specific you expect to happen with cascading that 
resolves a project_id to something globally unique, or am I misreading this as 
part of the design? 

* Does the cascade centralization just ask for Keystone tokens from each of 
the deployments, or is there something else being done? Essentially, how does 
one work with a Nova from cloud XXX and a Nova from cloud YYY from an 
authorization standpoint?

You don’t need to answer these right away, but they are clarification points 
that need to be thought about as this design moves forward. There are a number 
of security / authorization questions I can expand on, but the above two are 
the really big ones to start with. As you scale up (or utilize deployments 
owned by different providers) it isn’t always possible to replicate the 
Keystone data around.
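
To make the second question concrete, a minimal sketch of the "one token per 
deployment" model, using keystoneauth1 for illustration (endpoints and 
credentials are made up): a central service would need a separately 
authenticated session per cloud, because neither tokens nor role assignments 
carry across Keystones.

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    # Hypothetical credentials for two independent clouds, XXX and YYY.
    clouds = {
        'XXX': v3.Password(auth_url='https://keystone.xxx.example:5000/v3',
                           username='svc', password='secret', project_name='demo',
                           user_domain_id='default', project_domain_id='default'),
        'YYY': v3.Password(auth_url='https://keystone.yyy.example:5000/v3',
                           username='svc', password='secret', project_name='demo',
                           user_domain_id='default', project_domain_id='default'),
    }

    # One authenticated session per deployment: a token scoped in XXX is
    # useless against YYY's Nova, so every cross-cloud operation needs both.
    sessions = {name: session.Session(auth=auth) for name, auth in clouds.items()}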

Cheers,
Morgan

Best regards
Henry

Sent from my iPad

On 2014-12-13, at 3:12 PM, Morgan Fainberg  wrote:



On Dec 12, 2014, at 10:30, Joe Gordon  wrote:



On Fri, Dec 12, 2014 at 6:50 AM, Russell Bryant  wrote:
On 12/11/2014 12:55 PM, Andrew Laski wrote:
> Cells can handle a single API on top of globally distributed DCs.  I
> have spoken with a group that is doing exactly that.  But it requires
> that the API is a trusted part of the OpenStack deployments in those
> distributed DCs.

And the way the rest of the components fit into that scenario is far
from clear to me.  Do you consider this more of a "if you can make it
work, good for you", or something we should aim to be more generally
supported over time?  Personally, I see the globally distributed
OpenStack under a single API case much more complex, and worth
considering out of scope for the short to medium term, at least.

For me, this discussion boils down to ...

1) Do we consider these use cases in scope at all?

2) If we consider it in scope, is it enough of a priority to warrant a
cross-OpenStack push in the near term to work on it?

3) If yes to #2, how would we do it?  Cascading, or something built
around cells?

I haven't worried about #3 much, because I consider #2 or maybe even #1
to be a show stopper here.

Agreed

I agree with Russell as well. I am also curious about how identity will work in 
these cases. As it stands, identity provides authoritative information only for 
the deployment it runs in. There is a lot of concern I have from a security 
standpoint when I start needing to address what the central API can do on the 
other providers. We have had this discussion a number of times in Keystone, 
specifically when designing the keystone-to-keystone identity federation, and 
we came to the conclusion that we needed to ensure that the Keystone local to a 
given cloud is the only source of authoritative authz information. While it 
may, in some cases, accept authn from a source that is trusted, it still 
controls the local set of roles and grants. 

Second, we only guarantee that a tenant_id / project_id is unique within a 
single deployment of Keystone (e.g. shared/replicated backends such as a 
Percona cluster, which cannot be the case when crossing between differing IaaS 
deployers/providers). If there is ever a tenant_id conflict (in theory possible 
with LDAP assignment or an unlucky random UUID generation) between 
installations, you end up potentially granting access that should not exist to 
a given user. 
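
A toy illustration of that collision risk (the IDs are made up): merge two 
clouds' grant tables keyed only by project_id and the two tenants become 
indistinguishable.

    # Per-cloud role grants keyed by (user_id, project_id); 'a5b6c7' is a
    # hypothetical project_id collision between two independent Keystones.
    grants_cloud_a = {('user-1', 'a5b6c7'): {'admin'}}
    grants_cloud_b = {('user-9', 'a5b6c7'): {'member'}}

    def users_for_project(project_id, grants):
        return {user: roles for (user, pid), roles in grants.items()
                if pid == project_id}

    # A naive "global" view that merges grants by project_id conflates the
    # two tenants: cloud A's admin now shows up on cloud B's project.
    merged = {**grants_cloud_a, **grants_cloud_b}
    print(users_for_project('a5b6c7', merged))
    # -> {'user-1': {'admin'}, 'user-9': {'member'}}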

With that in mind, how does Keystone fit into this conversation? What is 
expected of identity? What would keystone need to actually support to make this 
a reality?

I ask because I've only seen information on nova, glance, cinder, and 
ceilometer in the documentation. Based upon the above information I outlined, I 
would be concerned with an assumption that identity would "just work" without 
also being part of this conversation. 

Thanks,
Morgan 

Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-13 Thread Henry
Hi Morgan,

A good question about keystone.

In fact, Keystone is naturally suited to multi-region deployment. It exposes 
only a REST service interface, and PKI-based tokens greatly reduce the central 
service workload. So, unlike the other OpenStack services, it would not be set 
to cascade mode.

Best regards
Henry

Sent from my iPad

On 2014-12-13, at 3:12 PM, Morgan Fainberg  wrote:

> 
> 
> On Dec 12, 2014, at 10:30, Joe Gordon  wrote:
> 
>> 
>> 
>> On Fri, Dec 12, 2014 at 6:50 AM, Russell Bryant  wrote:
>> On 12/11/2014 12:55 PM, Andrew Laski wrote:
>> > Cells can handle a single API on top of globally distributed DCs.  I
>> > have spoken with a group that is doing exactly that.  But it requires
>> > that the API is a trusted part of the OpenStack deployments in those
>> > distributed DCs.
>> 
>> And the way the rest of the components fit into that scenario is far
>> from clear to me.  Do you consider this more of a "if you can make it
>> work, good for you", or something we should aim to be more generally
>> supported over time?  Personally, I see the globally distributed
>> OpenStack under a single API case much more complex, and worth
>> considering out of scope for the short to medium term, at least.
>> 
>> For me, this discussion boils down to ...
>> 
>> 1) Do we consider these use cases in scope at all?
>> 
>> 2) If we consider it in scope, is it enough of a priority to warrant a
>> cross-OpenStack push in the near term to work on it?
>> 
>> 3) If yes to #2, how would we do it?  Cascading, or something built
>> around cells?
>> 
>> I haven't worried about #3 much, because I consider #2 or maybe even #1
>> to be a show stopper here.
>> 
>> Agreed
> 
> I agree with Russell as well. I am also curious about how identity will work 
> in these cases. As it stands, identity provides authoritative information 
> only for the deployment it runs in. There is a lot of concern I have from a 
> security standpoint when I start needing to address what the central API can 
> do on the other providers. We have had this discussion a number of times in 
> Keystone, specifically when designing the keystone-to-keystone identity 
> federation, and we came to the conclusion that we needed to ensure that the 
> Keystone local to a given cloud is the only source of authoritative authz 
> information. While it may, in some cases, accept authn from a source that is 
> trusted, it still controls the local set of roles and grants. 
> 
> Second, we only guarantee that a tenant_id / project_id is unique within a 
> single deployment of Keystone (e.g. shared/replicated backends such as a 
> Percona cluster, which cannot be the case when crossing between differing 
> IaaS deployers/providers). If there is ever a tenant_id conflict (in theory 
> possible with LDAP assignment or an unlucky random UUID generation) between 
> installations, you end up potentially granting access that should not exist 
> to a given user. 
> 
> With that in mind, how does Keystone fit into this conversation? What is 
> expected of identity? What would keystone need to actually support to make 
> this a reality?
> 
> I ask because I've only seen information on nova, glance, cinder, and 
> ceilometer in the documentation. Based upon the above information I outlined, 
> I would be concerned with an assumption that identity would "just work" 
> without also being part of this conversation. 
> 
> Thanks,
> Morgan 


Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-12 Thread Morgan Fainberg


> On Dec 12, 2014, at 10:30, Joe Gordon  wrote:
> 
> 
> 
>> On Fri, Dec 12, 2014 at 6:50 AM, Russell Bryant  wrote:
>> On 12/11/2014 12:55 PM, Andrew Laski wrote:
>> > Cells can handle a single API on top of globally distributed DCs.  I
>> > have spoken with a group that is doing exactly that.  But it requires
>> > that the API is a trusted part of the OpenStack deployments in those
>> > distributed DCs.
>> 
>> And the way the rest of the components fit into that scenario is far
>> from clear to me.  Do you consider this more of a "if you can make it
>> work, good for you", or something we should aim to be more generally
>> supported over time?  Personally, I see the globally distributed
>> OpenStack under a single API case much more complex, and worth
>> considering out of scope for the short to medium term, at least.
>> 
>> For me, this discussion boils down to ...
>> 
>> 1) Do we consider these use cases in scope at all?
>> 
>> 2) If we consider it in scope, is it enough of a priority to warrant a
>> cross-OpenStack push in the near term to work on it?
>> 
>> 3) If yes to #2, how would we do it?  Cascading, or something built
>> around cells?
>> 
>> I haven't worried about #3 much, because I consider #2 or maybe even #1
>> to be a show stopper here.
> 
> Agreed

I agree with Russell as well. I am also curious about how identity will work in 
these cases. As it stands, identity provides authoritative information only for 
the deployment it runs in. There is a lot of concern I have from a security 
standpoint when I start needing to address what the central API can do on the 
other providers. We have had this discussion a number of times in Keystone, 
specifically when designing the keystone-to-keystone identity federation, and 
we came to the conclusion that we needed to ensure that the Keystone local to a 
given cloud is the only source of authoritative authz information. While it 
may, in some cases, accept authn from a source that is trusted, it still 
controls the local set of roles and grants. 

Second, we only guarantee that a tenant_id / project_id is unique within a 
single deployment of Keystone (e.g. shared/replicated backends such as a 
Percona cluster, which cannot be the case when crossing between differing IaaS 
deployers/providers). If there is ever a tenant_id conflict (in theory possible 
with LDAP assignment or an unlucky random UUID generation) between 
installations, you end up potentially granting access that should not exist to 
a given user. 

With that in mind, how does Keystone fit into this conversation? What is 
expected of identity? What would keystone need to actually support to make this 
a reality?

I ask because I've only seen information on nova, glance, cinder, and 
ceilometer in the documentation. Based upon the above information I outlined, I 
would be concerned with an assumption that identity would "just work" without 
also being part of this conversation. 

Thanks,
Morgan


Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-12 Thread joehuang
Hello, Russell,

> Personally, I see the globally distributed
> OpenStack under a single API case much more complex, and worth
> considering out of scope for the short to medium term, at least.

Thanks for your thoughts. Do you mean it could be put on the roadmap, but not 
in scope for the short or medium term (for example, the Kilo and L releases)? 
Or do we need more discussion to include it in the roadmap? If so, I would like 
to know how to do that.

Best Regards

Chaoyi Huang ( joehuang )

From: Russell Bryant [rbry...@redhat.com]
Sent: 12 December 2014 22:50
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit 
recap and move forward

On 12/11/2014 12:55 PM, Andrew Laski wrote:
> Cells can handle a single API on top of globally distributed DCs.  I
> have spoken with a group that is doing exactly that.  But it requires
> that the API is a trusted part of the OpenStack deployments in those
> distributed DCs.

And the way the rest of the components fit into that scenario is far
from clear to me.  Do you consider this more of a "if you can make it
work, good for you", or something we should aim to be more generally
supported over time?  Personally, I see the globally distributed
OpenStack under a single API case much more complex, and worth
considering out of scope for the short to medium term, at least.

For me, this discussion boils down to ...

1) Do we consider these use cases in scope at all?

2) If we consider it in scope, is it enough of a priority to warrant a
cross-OpenStack push in the near term to work on it?

3) If yes to #2, how would we do it?  Cascading, or something built
around cells?

I haven't worried about #3 much, because I consider #2 or maybe even #1
to be a show stopper here.

--
Russell Bryant



Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-12 Thread joehuang
Hello,  Andrew, 

> I do consider this to be out of scope for cells, for at least the medium
> term as you've said.  There is additional complexity in making that a
> supported configuration that is not being addressed in the cells
> effort.  I am just making the statement that this is something cells
> could address if desired, and therefore doesn't need an additional solution

1. Does your solution include Cinder, Neutron, Glance, and Ceilometer, or is 
only Nova involved? Could you describe more clearly how your solution works?
2. A tenant's resources need to be distributed across different data centers. 
How are these resources connected through L2/L3 networking and isolated from 
other tenants, including providing advanced services like LB/FW/VPN and 
service chaining?
3. How is the image distributed to geo-distributed data centers when the user 
uploads an image, or do you mean all VMs will boot from remote image data?
4. How would the metering and monitoring functions work in geo-distributed 
data centers? Say, if we use Ceilometer, how are the sampling data and alarms 
handled?
5. How are multiple vendors' OpenStack distributions supported in one 
multi-site cloud? If only one vendor's OpenStack distribution is supported, 
and RPC is used for inter-DC communication, how are cross-data-center 
integration, troubleshooting, and upgrades done over RPC if the 
driver/agent/backend (storage/network/server) comes from different vendors?

I have many doubts about how cells would address these challenges; these 
questions are only some of them. It would be best if cells could address all 
the challenges; then, of course, no additional solution would be needed.

Best regards

Chaoyi Huang ( joehuang )


From: Andrew Laski [andrew.la...@rackspace.com]
Sent: 13 December 2014 0:06
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit 
recap and move forward

On 12/12/2014 09:50 AM, Russell Bryant wrote:
> On 12/11/2014 12:55 PM, Andrew Laski wrote:
>> Cells can handle a single API on top of globally distributed DCs.  I
>> have spoken with a group that is doing exactly that.  But it requires
>> that the API is a trusted part of the OpenStack deployments in those
>> distributed DCs.
> And the way the rest of the components fit into that scenario is far
> from clear to me.  Do you consider this more of a "if you can make it
> work, good for you", or something we should aim to be more generally
> supported over time?  Personally, I see the globally distributed
> OpenStack under a single API case much more complex, and worth
> considering out of scope for the short to medium term, at least.

I do consider this to be out of scope for cells, for at least the medium
term as you've said.  There is additional complexity in making that a
supported configuration that is not being addressed in the cells
effort.  I am just making the statement that this is something cells
could address if desired, and therefore doesn't need an additional solution.

> For me, this discussion boils down to ...
>
> 1) Do we consider these use cases in scope at all?
>
> 2) If we consider it in scope, is it enough of a priority to warrant a
> cross-OpenStack push in the near term to work on it?
>
> 3) If yes to #2, how would we do it?  Cascading, or something built
> around cells?
>
> I haven't worried about #3 much, because I consider #2 or maybe even #1
> to be a show stopper here.
>




Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-12 Thread Joe Gordon
On Fri, Dec 12, 2014 at 6:50 AM, Russell Bryant  wrote:

> On 12/11/2014 12:55 PM, Andrew Laski wrote:
> > Cells can handle a single API on top of globally distributed DCs.  I
> > have spoken with a group that is doing exactly that.  But it requires
> > that the API is a trusted part of the OpenStack deployments in those
> > distributed DCs.
>
> And the way the rest of the components fit into that scenario is far
> from clear to me.  Do you consider this more of a "if you can make it
> work, good for you", or something we should aim to be more generally
> supported over time?  Personally, I see the globally distributed
> OpenStack under a single API case much more complex, and worth
> considering out of scope for the short to medium term, at least.
>
> For me, this discussion boils down to ...
>
> 1) Do we consider these use cases in scope at all?
>
> 2) If we consider it in scope, is it enough of a priority to warrant a
> cross-OpenStack push in the near term to work on it?
>
> 3) If yes to #2, how would we do it?  Cascading, or something built
> around cells?
>
> I haven't worried about #3 much, because I consider #2 or maybe even #1
> to be a show stopper here.
>

Agreed


>
> --
> Russell Bryant
>


Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-12 Thread Russell Bryant
On 12/12/2014 11:06 AM, Andrew Laski wrote:
> 
> On 12/12/2014 09:50 AM, Russell Bryant wrote:
>> On 12/11/2014 12:55 PM, Andrew Laski wrote:
>>> Cells can handle a single API on top of globally distributed DCs.  I
>>> have spoken with a group that is doing exactly that.  But it requires
>>> that the API is a trusted part of the OpenStack deployments in those
>>> distributed DCs.
>> And the way the rest of the components fit into that scenario is far
>> from clear to me.  Do you consider this more of a "if you can make it
>> work, good for you", or something we should aim to be more generally
>> supported over time?  Personally, I see the globally distributed
>> OpenStack under a single API case much more complex, and worth
>> considering out of scope for the short to medium term, at least.
> 
> I do consider this to be out of scope for cells, for at least the medium
> term as you've said.  There is additional complexity in making that a
> supported configuration that is not being addressed in the cells
> effort.  I am just making the statement that this is something cells
> could address if desired, and therefore doesn't need an additional
> solution.

OK, great.  Thanks for the clarification.  I think we're on the same
page.  :-)

-- 
Russell Bryant



Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-12 Thread Andrew Laski


On 12/12/2014 09:50 AM, Russell Bryant wrote:

On 12/11/2014 12:55 PM, Andrew Laski wrote:

Cells can handle a single API on top of globally distributed DCs.  I
have spoken with a group that is doing exactly that.  But it requires
that the API is a trusted part of the OpenStack deployments in those
distributed DCs.

And the way the rest of the components fit into that scenario is far
from clear to me.  Do you consider this more of a "if you can make it
work, good for you", or something we should aim to be more generally
supported over time?  Personally, I see the globally distributed
OpenStack under a single API case much more complex, and worth
considering out of scope for the short to medium term, at least.


I do consider this to be out of scope for cells, for at least the medium 
term as you've said.  There is additional complexity in making that a 
supported configuration that is not being addressed in the cells 
effort.  I am just making the statement that this is something cells 
could address if desired, and therefore doesn't need an additional solution.



For me, this discussion boils down to ...

1) Do we consider these use cases in scope at all?

2) If we consider it in scope, is it enough of a priority to warrant a
cross-OpenStack push in the near term to work on it?

3) If yes to #2, how would we do it?  Cascading, or something built
around cells?

I haven't worried about #3 much, because I consider #2 or maybe even #1
to be a show stopper here.






Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-12 Thread Russell Bryant
On 12/11/2014 12:55 PM, Andrew Laski wrote:
> Cells can handle a single API on top of globally distributed DCs.  I
> have spoken with a group that is doing exactly that.  But it requires
> that the API is a trusted part of the OpenStack deployments in those
> distributed DCs.

And the way the rest of the components fit into that scenario is far
from clear to me.  Do you consider this more of a "if you can make it
work, good for you", or something we should aim to be more generally
supported over time?  Personally, I see the globally distributed
OpenStack under a single API case much more complex, and worth
considering out of scope for the short to medium term, at least.

For me, this discussion boils down to ...

1) Do we consider these use cases in scope at all?

2) If we consider it in scope, is it enough of a priority to warrant a
cross-OpenStack push in the near term to work on it?

3) If yes to #2, how would we do it?  Cascading, or something built
around cells?

I haven't worried about #3 much, because I consider #2 or maybe even #1
to be a show stopper here.

-- 
Russell Bryant



Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-12 Thread henry hly
On Fri, Dec 12, 2014 at 4:10 PM, Steve Gordon  wrote:
> - Original Message -
>> From: "henry hly" 
>> To: "OpenStack Development Mailing List (not for usage questions)" 
>> 
>>
>> On Fri, Dec 12, 2014 at 11:41 AM, Dan Smith  wrote:
>> >> [joehuang] Could you please clarify the deployment mode of cells when
>> >> used for globally distributed DCs with a single API. Do you mean
>> >> cinder/neutron/glance/ceilometer will be shared by all cells, RPC will
>> >> be used for inter-DC communication, and only one vendor's OpenStack
>> >> distribution will be supported? How would cross-data-center integration
>> >> and troubleshooting be done with RPC if the driver/agent/backend
>> >> (storage/network/server) comes from different vendors?
>> >
>> > Correct, cells only applies to single-vendor distributed
>> > deployments. In
>> > both its current and future forms, it uses private APIs for
>> > communication between the components, and thus isn't suited for a
>> > multi-vendor environment.
>> >
>> > Just MHO, but building functionality into existing or new
>> > components to
>> > allow deployments from multiple vendors to appear as a single API
>> > endpoint isn't something I have much interest in.
>> >
>> > --Dan
>> >
>>
>> Even with the same distribution, cells still face many challenges across
>> multiple DCs connected over a WAN. Considering OAM, it's easier to manage
>> autonomous systems connected by an external northbound interface across
>> remote sites than a single monolithic system connected by internal RPC
>> messages.
>
> The key question here is whether this is primarily the role of OpenStack or 
> an external cloud management platform, and I don't profess to know the 
> answer. What do people use (workaround or otherwise) for these use cases 
> *today*? Another question I have is: one of the stated use cases is managing 
> OpenStack clouds from multiple vendors - is the implication here that some 
> of these have additional divergent API extensions, or is the concern solely 
> the incompatibilities inherent in communicating using the RPC mechanisms? If 
> there are divergent API extensions, how is that handled from a proxying 
> point of view if not all underlying OpenStack clouds necessarily support it 
> (I guess the same applies when using distributions without additional 
> extensions but of different versions - e.g. Icehouse vs Juno, which I 
> believe was also a targeted use case?)?

It's not about divergent northbound API extensions. Services between
OpenStack projects are SOA-based; this is a vertical split. So when building
a large, distributed system (whatever it is) with horizontal splitting,
shouldn't we prefer clear and stable RESTful interfaces between these
building blocks?

>
>> Although cells did some separation and modularization (not to mention it's
>> still internal RPC across the WAN), it leaves out cinder, neutron, and
>> ceilometer. Shall we wait for all these projects to refactor into a
>> cells-like hierarchical structure, or adopt a more loosely coupled way to
>> distribute them into autonomous units at the level of the whole
>> OpenStack (except Keystone, which can handle multiple regions naturally)?
>
> Similarly though, is the intent with Cascading that each new project would 
> have to also implement and provide a proxy for use in these deployments? One 
> of the challenges with maintaining/supporting the existing Cells 
> implementation has been that it's effectively its own thing, and as a result 
> it is often not considered when adding new functionality.

Yes, we need a new proxy, but the nova proxy is just a new type of virt
driver, the neutron proxy a new type of agent, the cinder proxy a new type of
volume store... They just utilize the existing standard driver/agent
mechanisms, with no influence on other code in tree.
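
A skeletal sketch of that idea (class and helper names are hypothetical, and
the signatures are simplified relative to Nova's real virt-driver interface):
the cascading Nova loads the proxy like any other virt driver, and spawn()
simply re-issues the boot as a northbound REST call to the cascaded
OpenStack.

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client as nova_client

    class CascadingProxyDriver(object):
        """Hypothetical 'Nova as hypervisor' proxy driver."""

        def __init__(self, child_auth_url):
            # Authenticate against the cascaded (child) OpenStack's Keystone.
            auth = v3.Password(auth_url=child_auth_url,
                               username='proxy', password='secret',
                               project_name='service',
                               user_domain_id='default',
                               project_domain_id='default')
            self._child = nova_client.Client(
                '2', session=session.Session(auth=auth))

        def spawn(self, context, instance, image_meta):
            # Reassemble the request as a RESTful call instead of launching
            # a guest on a local hypervisor.
            self._child.servers.create(name=instance['display_name'],
                                       image=image_meta['id'],
                                       flavor=instance['flavor_id'])

        def destroy(self, context, instance):
            self._child.servers.delete(instance['uuid'])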

>
>> As we can see, compared with cells, much less work is needed to build a
>> cascading solution. No patch is needed except in Neutron (waiting for some
>> upcoming features that have not landed in Juno); nearly all the work lies
>> in the proxy, which is in fact another kind of driver/agent.
>
> Right, but the proxies still appear to be a not insignificant amount of code 
> - is the intent not that the proxies would eventually reside within the 
> relevant projects? I've been assuming yes but I am wondering if this was an 
> incorrect assumption on my part based on your comment.
>
> Thanks,
>
> Steve
>


Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-12 Thread Steve Gordon
- Original Message -
> From: "henry hly" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> 
> On Fri, Dec 12, 2014 at 11:41 AM, Dan Smith  wrote:
> >> [joehuang] Could you please clarify the deployment mode of cells when
> >> used for globally distributed DCs with a single API. Do you mean
> >> cinder/neutron/glance/ceilometer will be shared by all cells, RPC will be
> >> used for inter-DC communication, and only one vendor's OpenStack
> >> distribution will be supported? How would cross-data-center integration
> >> and troubleshooting be done with RPC if the driver/agent/backend
> >> (storage/network/server) comes from different vendors?
> >
> > Correct, cells only applies to single-vendor distributed
> > deployments. In
> > both its current and future forms, it uses private APIs for
> > communication between the components, and thus isn't suited for a
> > multi-vendor environment.
> >
> > Just MHO, but building functionality into existing or new
> > components to
> > allow deployments from multiple vendors to appear as a single API
> > endpoint isn't something I have much interest in.
> >
> > --Dan
> >
> 
> Even with the same distribution, cells still face many challenges
> across multiple DCs connected over a WAN. Considering OAM, it's easier to
> manage autonomous systems connected by an external northbound interface
> across remote sites than a single monolithic system connected by
> internal RPC messages.

The key question here is whether this is primarily the role of OpenStack or an 
external cloud management platform, and I don't profess to know the answer. 
What do people use (workaround or otherwise) for these use cases *today*? 
Another question I have is: one of the stated use cases is managing OpenStack 
clouds from multiple vendors - is the implication here that some of these have 
additional divergent API extensions, or is the concern solely the 
incompatibilities inherent in communicating using the RPC mechanisms? If there 
are divergent API extensions, how is that handled from a proxying point of view 
if not all underlying OpenStack clouds necessarily support it (I guess the same 
applies when using distributions without additional extensions but of different 
versions - e.g. Icehouse vs Juno, which I believe was also a targeted use 
case?)?

> Although cells did some separation and modularization (not to mention it's
> still internal RPC across the WAN), it leaves out cinder, neutron, and
> ceilometer. Shall we wait for all these projects to refactor into a
> cells-like hierarchical structure, or adopt a more loosely coupled way to
> distribute them into autonomous units at the level of the whole
> OpenStack (except Keystone, which can handle multiple regions
> naturally)?

Similarly though, is the intent with Cascading that each new project would have 
to also implement and provide a proxy for use in these deployments? One of the 
challenges with maintaining/supporting the existing Cells implementation has 
been that it's effectively its own thing, and as a result it is often not 
considered when adding new functionality.

> As we can see, compared with cells, much less work is needed to build a
> cascading solution. No patch is needed except in Neutron (waiting for some
> upcoming features that have not landed in Juno); nearly all the work lies in
> the proxy, which is in fact another kind of driver/agent.

Right, but the proxies still appear to be a not insignificant amount of code - 
is the intent not that the proxies would eventually reside within the relevant 
projects? I've been assuming yes but I am wondering if this was an incorrect 
assumption on my part based on your comment.

Thanks,

Steve



Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-11 Thread joehuang
Hello, Dan,

> Correct, cells only applies to single-vendor distributed deployments. 
> In both its current and future forms, it uses private APIs for 
> communication between the components, and thus isn't suited for a 
> multi-vendor environment.

Thank you for your confirmation. My question is: what are the "private APIs", 
and which components are included in the "communication between the components"?

Best Regards

Chaoyi Huang ( joehuang )

-Original Message-
From: Dan Smith [mailto:d...@danplanet.com] 
Sent: Friday, December 12, 2014 11:41 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit 
recap and move forward

> [joehuang] Could you please clarify the deployment mode of cells when used 
> for globally distributed DCs with a single API. Do you mean 
> cinder/neutron/glance/ceilometer will be shared by all cells, RPC will be 
> used for inter-DC communication, and only one vendor's OpenStack 
> distribution will be supported? How would cross-data-center integration and 
> troubleshooting be done with RPC if the driver/agent/backend 
> (storage/network/server) comes from different vendors?

Correct, cells only applies to single-vendor distributed deployments. In both 
its current and future forms, it uses private APIs for communication between 
the components, and thus isn't suited for a multi-vendor environment.

Just MHO, but building functionality into existing or new components to allow 
deployments from multiple vendors to appear as a single API endpoint isn't 
something I have much interest in.

--Dan




Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit recap and move forward

2014-12-11 Thread joehuang
Hi Joe,

Thank you for leading us into a deep dive on cascading. My answers are listed 
below your questions.

> I don't think this is sufficient. If the underlying hardware between
> multiple vendors is different, setting the same values for a flavor will
> result in different performance characteristics. For example, nova allows
> for setting VCPUs, but nova doesn't provide an easy way to define how
> powerful a VCPU is. Also, flavors are commonly hardware dependent; take
> what Rackspace offers:

> http://www.rackspace.com/cloud/public-pricing#cloud-servers

> Rackspace has "I/O Optimized" flavors

> * High-performance, RAID 10-protected SSD storage
> * Option of booting from Cloud Block Storage (additional charges apply
> for Cloud Block Storage)
> * Redundant 10-Gigabit networking
> * Disk I/O scales with the number of data disks up to ~80,000 4K random
> read IOPS and ~70,000 4K random write IOPS.*

> How would cascading support something like this?

[joehuang] Just a reminder that a cascading OpenStack works like a normal 
OpenStack: if something can be solved by one OpenStack instance, it should also 
be feasible in cascading, through the self-similar mechanism used (just treat 
each cascaded OpenStack as one huge compute node). The only difference between 
a cascading OpenStack and a normal OpenStack lies in the agent/driver processes 
running on the compute node / cinder-volume node / L2/L3 agents.

Let me give an example of how the issues you mentioned can be solved in 
cascading.

Suppose that we have one cascading OpenStack (OpenStack0) and two cascaded 
OpenStacks (OpenStack1, OpenStack2).

For OpenStack1: there are 5 compute nodes in OpenStack1 with “high-performance, 
RAID 10-protected SSD storage”; we can add these 5 nodes to host aggregate 
“SSD” with extra spec (Storage:SSD). There are another 5 nodes booting from 
Cloud Block Storage; we can add these 5 nodes to host aggregate “cloudstorage” 
with extra spec (Storage:cloud). All these 10 nodes belong to AZ1 
(availability zone 1).

For OpenStack2: there are 5 compute nodes in OpenStack2 with “redundant 
10-Gigabit networking”; we can add these 5 nodes to host aggregate “SSD” with 
extra spec (Storage:SSD). There are another 5 nodes with random access to 
volumes with a QoS requirement; we can add these 5 nodes to host aggregate 
“randomio” with extra spec (IO:random). All these 10 nodes belong to AZ2 
(availability zone 2). We can define a volume QoS associated with a volume 
type: vol-type-random-qos.

In the cascading OpenStack, add compute-node1 as the proxy node (proxy-node1) 
for the cascaded OpenStack1, and compute-node2 as the proxy node (proxy-node2) 
for the cascaded OpenStack2. From the information described for the cascaded 
OpenStacks, add proxy-node1 to AZ1 and to host aggregates “SSD” and 
“cloudstorage”; add proxy-node2 to AZ2 and to host aggregates “SSD” and 
“randomio”. The cinder-proxy running on proxy-node2 will retrieve the volume 
type with its QoS information from the cascaded OpenStack2. After that, the 
tenant user or the cloud admin can define flavors with extra specs that will 
be matched against the host-aggregate specs.

In the cascading layer, you need to configure the corresponding scheduler filters.

Now: if you boot a VM in AZ1 with a flavor (Storage:SSD), the request will be 
scheduled to proxy-node1 and reassembled as a RESTful request to the cascaded 
OpenStack1, where the nodes that were added to the SSD host aggregate will be 
scheduled just as in a normal OpenStack.
If you boot a VM in AZ2 with a flavor (Storage:SSD), the request will be 
scheduled to proxy-node2 and reassembled as a RESTful request to the cascaded 
OpenStack2, where the nodes that were added to the SSD host aggregate will be 
scheduled just as in a normal OpenStack.
But if you boot a VM in AZ2 with a flavor (IO:random), the request will be 
scheduled to proxy-node2 and reassembled as a RESTful request to the cascaded 
OpenStack2, where the nodes that were added to the randomio host aggregate will 
be scheduled just as in a normal OpenStack. If you attach a volume that was 
created with the volume type “vol-type-random-qos” in AZ2 to the VM, the QoS 
for the VM's access to the volume will also take effect.
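
A sketch of the aggregate/flavor wiring used in this example, with 
python-novaclient and made-up credentials: create the “SSD” aggregate in AZ1, 
tag it, and define a flavor whose extra spec the 
AggregateInstanceExtraSpecsFilter will match.

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client as nova_client

    auth = v3.Password(auth_url='https://keystone.example:5000/v3',
                       username='admin', password='secret', project_name='admin',
                       user_domain_id='default', project_domain_id='default')
    nova = nova_client.Client('2', session=session.Session(auth=auth))

    # Host aggregate "SSD" in availability zone AZ1, tagged Storage=SSD.
    ssd = nova.aggregates.create('SSD', 'AZ1')
    nova.aggregates.set_metadata(ssd, {'Storage': 'SSD'})
    for host in ('node1', 'node2', 'node3', 'node4', 'node5'):
        nova.aggregates.add_host(ssd, host)

    # Flavor whose extra spec matches the aggregate metadata; with the
    # AggregateInstanceExtraSpecsFilter enabled, VMs booted with this flavor
    # land only on hosts in the "SSD" aggregate.
    flavor = nova.flavors.create('m1.ssd', ram=4096, vcpus=2, disk=40)
    flavor.set_keys({'Storage': 'SSD'})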

I just gave a relatively easy-to-understand example; more complicated use cases 
can also be settled using cascading's amazing self-similar mechanism. I call it 
the FRACTAL (fantastic math) pattern 
(https://www.linkedin.com/pulse/20140729022031-23841540-openstack-cascading-and-fractal?trk=prof-post).

Best regards

Chaoyi Huang ( joehuang )

From: Joe Gordon [mailto:joe.gord...@gmail.com]
Sent: Friday, December 12, 2014 11:29 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit 
recap and move forward



On Thu, Dec 11, 2014 at 6:25 PM, joehuang  wrote:
Hello, Joe

Thank you for your good question. 

Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-11 Thread henry hly
On Fri, Dec 12, 2014 at 11:41 AM, Dan Smith  wrote:
>> [joehuang] Could you please clarify the deployment mode of cells when used
>> for globally distributed DCs with a single API. Do you mean
>> cinder/neutron/glance/ceilometer will be shared by all cells, RPC will be
>> used for inter-DC communication, and only one vendor's OpenStack
>> distribution will be supported? How would cross-data-center integration
>> and troubleshooting be done with RPC if the driver/agent/backend
>> (storage/network/server) comes from different vendors?
>
> Correct, cells only applies to single-vendor distributed deployments. In
> both its current and future forms, it uses private APIs for
> communication between the components, and thus isn't suited for a
> multi-vendor environment.
>
> Just MHO, but building functionality into existing or new components to
> allow deployments from multiple vendors to appear as a single API
> endpoint isn't something I have much interest in.
>
> --Dan
>

Even with the same distribution, cells still face many challenges across
multiple DCs connected over a WAN. Considering OAM, it's easier to manage
autonomous systems connected by an external northbound interface across
remote sites than a single monolithic system connected by internal RPC
messages.

Although cells did some separation and modularization (not to mention it's
still internal RPC across the WAN), it leaves out cinder, neutron, and
ceilometer. Shall we wait for all these projects to refactor into a
cells-like hierarchical structure, or adopt a more loosely coupled way to
distribute them into autonomous units at the level of the whole
OpenStack (except Keystone, which can handle multiple regions
naturally)?

As we can see, compared with cells, much less work is needed to build a
cascading solution. No patch is needed except in Neutron (waiting for some
upcoming features that have not landed in Juno); nearly all the work lies in
the proxy, which is in fact another kind of driver/agent.

Best Regards
Henry




Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-11 Thread Dan Smith
> [joehuang] Could you please clarify the deployment mode of cells when used
> for globally distributed DCs with a single API. Do you mean
> cinder/neutron/glance/ceilometer will be shared by all cells, RPC will be
> used for inter-DC communication, and only one vendor's OpenStack
> distribution will be supported? How would cross-data-center integration
> and troubleshooting be done with RPC if the driver/agent/backend
> (storage/network/server) comes from different vendors?

Correct, cells only applies to single-vendor distributed deployments. In
both its current and future forms, it uses private APIs for
communication between the components, and thus isn't suited for a
multi-vendor environment.

Just MHO, but building functionality into existing or new components to
allow deployments from multiple vendors to appear as a single API
endpoint isn't something I have much interest in.

--Dan





Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit recap and move forward

2014-12-11 Thread Joe Gordon
On Thu, Dec 11, 2014 at 6:25 PM, joehuang  wrote:

>  Hello, Joe
>
>
>
> Thank you for your good question.
>
>
>
> Question:
>
> How would something like flavors work across multiple vendors. The
> OpenStack API doesn't have any hard coded names and sizes for flavors. So a
> flavor such as m1.tiny may actually be very different vendor to vendor.
>
>
>
> Answer:
>
> The flavor is defined by the cloud operator in the cascading OpenStack. The
> Nova-proxy (which is the driver for “Nova as hypervisor”) will sync the
> flavor to the cascaded OpenStack when it is first used there. If the flavor
> is changed before a new VM is booted, the changed flavor will also be
> updated in the cascaded OpenStack just before the new VM boot request.
> Through this synchronization mechanism, all flavors used in multi-vendor
> cascaded OpenStacks are kept the same as those used at the cascading level,
> providing a consistent view of flavors.
>

I don't think this is sufficient. If the underlying hardware between multiple
vendors is different, setting the same values for a flavor will result in
different performance characteristics. For example, nova allows for setting
VCPUs, but nova doesn't provide an easy way to define how powerful a VCPU is.
Also, flavors are commonly hardware dependent; take what Rackspace offers:

http://www.rackspace.com/cloud/public-pricing#cloud-servers

Rackspace has "I/O Optimized" flavors

* High-performance, RAID 10-protected SSD storage
* Option of booting from Cloud Block Storage (additional charges apply for
Cloud Block Storage)
* Redundant 10-Gigabit networking
* Disk I/O scales with the number of data disks up to ~80,000 4K random
read IOPS and ~70,000 4K random write IOPS.*

How would cascading support something like this?


>
>
> Best Regards
>
>
>
> Chaoyi Huang ( joehuang )
>
>
>
> *From:* Joe Gordon [mailto:joe.gord...@gmail.com]
> *Sent:* Friday, December 12, 2014 8:17 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
>
> *Subject:* Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells -
> summit recap and move forward
>
>
>
>
>
>
>
> On Thu, Dec 11, 2014 at 1:02 AM, joehuang  wrote:
>
> Hello, Russell,
>
> Many thanks for your reply. See inline comments.
>
> -----Original Message-----
> From: Russell Bryant [mailto:rbry...@redhat.com]
> Sent: Thursday, December 11, 2014 5:22 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit
> recap and move forward
>
> >> On Fri, Dec 5, 2014 at 8:23 AM, joehuang  wrote:
> >>> Dear all & TC & PTL,
> >>>
> >>> In the 40-minute cross-project summit session “Approaches for
> >>> scaling out”[1], almost 100 people attended the meeting, and the
> >>> conclusion is that cells cannot cover the use cases and
> >>> requirements which the OpenStack cascading solution[2] aims to
> >>> address; the background, including use cases and requirements, is also
> >>> described in the mail.
>
>I must admit that this was not the reaction I came away from the discussion
>with. There was a lot of confusion, and as we started looking closer, many
>(or perhaps most) people speaking up in the room did not agree that the
>requirements being stated are things we want to try to satisfy.
>
> [joehuang] Could you pls. confirm your opinion: 1) cells cannot cover the
> use cases and requirements which the OpenStack cascading solution aims to
> address; 2) we need further discussion on whether to satisfy the use cases
> and requirements.
>
> On 12/05/2014 06:47 PM, joehuang wrote:
> >>> Hello, Davanum,
> >>>
> >>> Thanks for your reply.
> >>>
> >>> Cells can't meet the demand for the use cases and requirements
> described in the mail.
>
> >You're right that cells doesn't solve all of the requirements you're
> discussing.
> >Cells addresses scale in a region.  My impression from the summit session
> > and other discussions is that the scale issues addressed by cells are
> considered
> > a priority, while the "global API" bits are not.
>
[joehuang] Agreed, cells is the first-class priority.
>
> >>> 1. Use cases
> >>> a). Vodafone use case[4](OpenStack summit speech video from 9'02"
> >>> to 12'30" ), establishing globally addressable tenants which result
> >>> in efficient services deployment.
>
> > Keystone has been working on federated identity.
> >That part makes 

Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-11 Thread joehuang
Hello, Joshua,

Sorry, my fault. You are right. I owe you two dollars.

Best regards

Chaoyi Huang ( joehuang )

-Original Message-
From: Joshua Harlow [mailto:harlo...@outlook.com] 
Sent: Friday, December 12, 2014 9:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit 
recap and move forward

So I think u mean 'proprietary'?

http://www.merriam-webster.com/dictionary/proprietary

-Josh

joehuang wrote:
> Hi, Jay,
>
> Good question, see inline comments, pls.
>
> Best Regards
> Chaoyi Huang ( Joe Huang )
>
> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Friday, December 12, 2014 1:58 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – 
> summit recap and move forward
>
> On 12/11/2014 04:02 AM, joehuang wrote:
>>> [joehuang] The major challenge for the VDF use case is cross-OpenStack 
>>> networking for tenants. A tenant's VMs/volumes may be allocated in 
>>> different data centers geographically, but the virtual network
>>> (L2/L3/FW/VPN/LB) should be built for each tenant automatically and 
>>> isolated between tenants. Keystone federation can help with authorization 
>>> automation, but the cross-OpenStack network automation challenge is 
>>> still there. Using prosperity orchestration layer can solve the 
>>> automation issue, but VDF don't like prosperity API in the 
>>> north-bound, because no ecosystem is available. And other issues, 
>>> for example how to distribute images, also cannot be solved by 
>>> Keystone federation.
>
>> What is "prosperity orchestration layer" and "prosperity API"?
>
> [joehuang] Suppose that there are two OpenStack instances in the cloud, and 
> vendor A developed an orchestration layer called CMPa (cloud management 
> platform a), while vendor B's orchestration layer is CMPb. CMPa may define 
> its boot-VM interface as CreateVM(Num, NameList, VMTemplate); CMPb may 
> define its boot-VM interface as bootVM(Name, projectID, flavorID, 
> volumeSize, location, networkID). As customers ask for more and more 
> functions from the cloud, the API set of CMPa will become quite different 
> from that of CMPb, and different from the OpenStack API. Now all apps which 
> consume the OpenStack API, like Heat, will not be able to run on top of the 
> prosperity software CMPa/CMPb. The whole ecosystem of OpenStack API apps 
> will be lost in the customer's cloud.
>
>>> [joehuang] This is the ETSI requirement and use cases specification 
>>> for NFV. ETSI is the home of the Industry Specification Group for NFV.
>>> In Figure 14 (virtualization of EPC) of this document, you can see 
>>> that the operator's  cloud including many data centers to provide 
>>> connection service to end user by inter-connected VNFs. The 
>>> requirements listed in
>>> (https://wiki.openstack.org/wiki/TelcoWorkingGroup) is mainly about 
>>> the requirements from specific VNF(like IMS, SBC, MME, HSS, S/P GW
>>> etc) to run over cloud, eg. migrate the traditional telco. APP from 
>>> prosperity hardware to cloud. Not all NFV requirements have been 
>>> covered yet. Forgive me there are so many telco terms here.
>
>> What is "prosperity hardware"?
>
> [joehuang] For example, Huawei's IMS can only run over Huawei's ATCA 
> hardware; even if you bought Nokia ATCA, the IMS from Huawei would not be 
> able to work over Nokia ATCA. The telco APP is sold together with the 
> hardware. (More comments on ETSI: ETSI is also the standards 
> organization for GSM, 3G, 4G.)
>
> Thanks,
> -jay
>


Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit recap and move forward

2014-12-11 Thread joehuang
Hello, Joe

Thank you for your good question.

Question:
How would something like flavors work across multiple vendors. The OpenStack 
API doesn't have any hard coded names and sizes for flavors. So a flavor such 
as m1.tiny may actually be very different vendor to vendor.

Answer:
The flavor is defined by the cloud operator in the cascading OpenStack. The 
Nova-proxy (which is the driver for "Nova as hypervisor") will sync the flavor 
to the cascaded OpenStack when it is first used there. If the flavor is changed 
before a new VM is booted, the changed flavor will also be updated in the 
cascaded OpenStack just before the new VM boot request. Through this 
synchronization mechanism, all flavors used in multi-vendor cascaded 
OpenStacks are kept the same as those used at the cascading level, providing a 
consistent view of flavors.
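
A minimal sketch of that synchronization step (helper and argument names are 
hypothetical), assuming python-novaclient handles for both levels: before a 
boot request is forwarded, the proxy makes sure the cascaded cloud's copy of 
the flavor matches the cascading one.

    from novaclient import exceptions as nova_exc

    def sync_flavor(cascading_nova, cascaded_nova, flavor_name):
        """Ensure the cascaded OpenStack has an up-to-date copy of a flavor."""
        src = cascading_nova.flavors.find(name=flavor_name)
        try:
            dst = cascaded_nova.flavors.find(name=flavor_name)
            if (dst.ram, dst.vcpus, dst.disk) == (src.ram, src.vcpus, src.disk):
                return dst  # already in sync
            cascaded_nova.flavors.delete(dst)  # flavors are immutable: replace
        except nova_exc.NotFound:
            pass  # first use of this flavor in the cascaded cloud
        return cascaded_nova.flavors.create(src.name, ram=src.ram,
                                            vcpus=src.vcpus, disk=src.disk)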

Best Regards

Chaoyi Huang ( joehuang )

From: Joe Gordon [mailto:joe.gord...@gmail.com]
Sent: Friday, December 12, 2014 8:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit 
recap and move forward



On Thu, Dec 11, 2014 at 1:02 AM, joehuang  wrote:
Hello, Russell,

Many thanks for your reply. See inline comments.

-Original Message-
From: Russell Bryant [mailto:rbry...@redhat.com]
Sent: Thursday, December 11, 2014 5:22 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit 
recap and move forward

>> On Fri, Dec 5, 2014 at 8:23 AM, joehuang  wrote:
>>> Dear all & TC & PTL,
>>>
>>> In the 40-minute cross-project summit session "Approaches for
>>> scaling out"[1], almost 100 people attended the meeting, and the
>>> conclusion is that cells cannot cover the use cases and
>>> requirements which the OpenStack cascading solution[2] aims to
>>> address; the background, including use cases and requirements, is also
>>> described in the mail.

>I must admit that this was not the reaction I came away from the discussion 
>with. There was a lot of confusion, and as we started looking closer, many 
>(or perhaps most) people speaking up in the room did not agree that the 
>requirements being stated are things we want to try to satisfy.

[joehuang] Could you pls. confirm your opinion: 1) cells cannot cover the use 
cases and requirements which the OpenStack cascading solution aims to address; 
2) we need further discussion on whether to satisfy the use cases and requirements.

On 12/05/2014 06:47 PM, joehuang wrote:
>>> Hello, Davanum,
>>>
>>> Thanks for your reply.
>>>
>>> Cells can't meet the demand for the use cases and requirements described in 
>>> the mail.

>You're right that cells doesn't solve all of the requirements you're 
>discussing.
>Cells addresses scale in a region.  My impression from the summit session
> and other discussions is that the scale issues addressed by cells are 
> considered
> a priority, while the "global API" bits are not.

[joehuang] Agreed, cells is the first-class priority.

>>> 1. Use cases
>>> a). Vodafone use case[4](OpenStack summit speech video from 9'02"
>>> to 12'30" ), establishing globally addressable tenants which result
>>> in efficient services deployment.

> Keystone has been working on federated identity.
>That part makes sense, and is already well under way.

[joehuang] The major challenge for the VDF use case is cross-OpenStack 
networking for tenants. A tenant's VMs/volumes may be allocated in different 
data centers geographically, but the virtual network (L2/L3/FW/VPN/LB) should 
be built for each tenant automatically and isolated between tenants. Keystone 
federation can help with authorization automation, but the cross-OpenStack 
network automation challenge is still there.
Using prosperity orchestration layer can solve the automation issue, but VDF 
don't like prosperity API in the north-bound, because no ecosystem is 
available. And other issues, for example how to distribute images, also cannot 
be solved by Keystone federation.
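
To make the networking gap concrete, a sketch with python-neutronclient 
(region names, endpoints, and the session are made up; endpoint_override is 
assumed to be available in the client): even just creating a per-tenant L2 
network in every region means N separate calls against N Neutrons, and nothing 
stitches the segments together across DCs.

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from neutronclient.v2_0 import client as neutron_client

    admin_session = session.Session(auth=v3.Password(
        auth_url='https://keystone.example:5000/v3',
        username='admin', password='secret', project_name='admin',
        user_domain_id='default', project_domain_id='default'))

    # One Neutron endpoint per data center (hypothetical URLs).
    regions = {
        'dc-east': neutron_client.Client(
            session=admin_session,
            endpoint_override='https://neutron.east.example:9696'),
        'dc-west': neutron_client.Client(
            session=admin_session,
            endpoint_override='https://neutron.west.example:9696'),
    }

    # Each call yields only an isolated local segment; cross-DC L2/L3
    # stitching and tenant isolation remain the open problem.
    for name, neutron in regions.items():
        neutron.create_network({'network': {'name': 'tenant-net'}})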

>>> b). Telefonica use case[5], create virtual DC( data center) cross
>>> multiple physical DCs with seamless experience.

>If we're talking about multiple DCs that are effectively local to each other
>with high bandwidth and low latency, that's one conversation.
>My impression is that you want to provide a single OpenStack API on top of
>globally distributed DCs.  I honestly don't see that as a problem we should
>be trying to solve.

Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-11 Thread joehuang
Hello, Andrew,

Thanks for your confirmation. See inline comments, pls.

-Original Message-
From: Andrew Laski [mailto:andrew.la...@rackspace.com] 
Sent: Friday, December 12, 2014 3:56 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit 
recap and move forward


On 12/11/2014 04:02 AM, joehuang wrote:
> Hello, Russell,
>
> Many thanks for your reply. See inline comments.
>
> -Original Message-
> From: Russell Bryant [mailto:rbry...@redhat.com]
> Sent: Thursday, December 11, 2014 5:22 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – 
> summit recap and move forward
>
>>> On Fri, Dec 5, 2014 at 8:23 AM, joehuang  wrote:
>>>> Dear all & TC & PTL,
>>>>
>>>> In the 40-minute cross-project summit session “Approaches for 
>>>> scaling out”[1], almost 100 people attended the meeting, and the 
>>>> conclusion was that cells cannot cover the use cases and 
>>>> requirements which the OpenStack cascading solution[2] aims to 
>>>> address; the background, including use cases and requirements, is 
>>>> also described in this mail.
>> I must admit that this was not the reaction I came away from the discussion 
>> with. There was a lot of confusion, and as we started looking closer, many 
>> (or perhaps most) people speaking up in the room did not agree that 
>> the requirements being stated are things we want to try to satisfy.
>> [joehuang] Could you please confirm your opinion: 1) cells cannot cover the 
>> use cases and requirements which the OpenStack cascading solution aims to 
>> address; 2) further discussion is needed on whether to satisfy the use cases 
>> and requirements.

>Correct, cells does not cover all of the use cases that cascading aims to 
>address.  But it was expressed that the use cases that are not covered may not 
>be cases that we want addressed.

[joehuang] OK, further discussion is needed on whether to address these cases or not.

> On 12/05/2014 06:47 PM, joehuang wrote:
>>>> Hello, Davanum,
>>>>
>>>> Thanks for your reply.
>>>>
>>>> Cells can't meet the demand for the use cases and requirements described 
>>>> in the mail.
>> You're right that cells doesn't solve all of the requirements you're 
>> discussing.
>> Cells addresses scale in a region.  My impression from the summit 
>> session and other discussions is that the scale issues addressed by 
>> cells are considered a priority, while the "global API" bits are not.
> [joehuang] Agreed, cells is the first-class priority.
>
>>>> 1. Use cases
>>>> a). Vodafone use case[4](OpenStack summit speech video from 9'02"
>>>> to 12'30" ), establishing globally addressable tenants which result 
>>>> in efficient services deployment.
>> Keystone has been working on federated identity.
>> That part makes sense, and is already well under way.
> [joehuang] The major challenge for the VDF use case is cross-OpenStack networking 
> for tenants. The tenant's VMs/volumes may be allocated in different data 
> centers geographically, but a virtual network (L2/L3/FW/VPN/LB) should be built 
> for each tenant automatically and isolated between tenants. Keystone 
> federation can help automate authorization, but the cross-OpenStack network 
> automation challenge is still there.
> Using a prosperity orchestration layer can solve the automation issue, but VDF 
> doesn't like a prosperity API in the north-bound, because no ecosystem is 
> available. And other issues, for example how to distribute images, also 
> cannot be solved by Keystone federation.
>
>>>> b). Telefonica use case[5], creating a virtual DC (data center) across 
>>>> multiple physical DCs with a seamless experience.
>> If we're talking about multiple DCs that are effectively local to 
>> each other with high bandwidth and low latency, that's one conversation.
>> My impression is that you want to provide a single OpenStack API on 
>> top of globally distributed DCs.  I honestly don't see that as a 
>> problem we should be trying to tackle.  I'd rather continue to focus 
>> on making OpenStack work
>> *really* well split into regions.
>> I think some people are trying to use cells in a geographically 
>> distributed way, as well.  I'm not sure that's a well understood or 
>> supported thing, though.
>> Perhaps the folks working on the new version of cells can comment further.
>> [joehuang] 1) The split-region way cannot provide cross-OpenStack networking
>> automation for tenants. 2) Exactly, the motivation for cascading is a "single
>> OpenStack API on top of globally distributed DCs". Of course, cascading can
>> also be used for DCs close to each other with high bandwidth and low latency.
>> 3) Comments from the cells folks are welcome.

Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-11 Thread Joshua Harlow

So I think u mean 'proprietary'?

http://www.merriam-webster.com/dictionary/proprietary

-Josh

joehuang wrote:

Hi, Jay,

Good question, see inline comments, pls.

Best Regards
Chaoyi Huang ( Joe Huang )

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Friday, December 12, 2014 1:58 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit 
recap and move forward

On 12/11/2014 04:02 AM, joehuang wrote:

[joehuang] The major challenge for the VDF use case is cross-OpenStack
networking for tenants. The tenant's VMs/volumes may be allocated in
different data centers geographically, but a virtual network
(L2/L3/FW/VPN/LB) should be built for each tenant automatically and
isolated between tenants. Keystone federation can help automate
authorization, but the cross-OpenStack network automation challenge is
still there. Using a prosperity orchestration layer can solve the
automation issue, but VDF doesn't like a prosperity API in the
north-bound, because no ecosystem is available. And other issues, for
example how to distribute images, also cannot be solved by Keystone
federation.



What is "prosperity orchestration layer" and "prosperity API"?


[joehuang] suppose that there are two OpenStack instances in the cloud: 
vendor A developed an orchestration layer called CMPa (cloud management 
platform A), and vendor B developed an orchestration layer CMPb. CMPa may define 
its boot-VM interface as CreateVM(Num, NameList, VMTemplate), while CMPb may 
define its boot-VM interface as bootVM(Name, projectID, flavorID, volumeSize, 
location, networkID). As the customer asks for more and more functions from the 
cloud, the API set of CMPa will become quite different from that of CMPb, and 
different from the OpenStack API. Then all apps which consume the OpenStack API, 
like Heat, will not be able to run on top of the prosperity software CMPa/CMPb, 
and the whole ecosystem of OpenStack API apps will be lost in the customer's cloud.


[joehuang] This is the ETSI requirements and use cases specification
for NFV. ETSI is the home of the Industry Specification Group for NFV.
In Figure 14 (virtualization of EPC) of this document, you can see
that the operator's cloud includes many data centers providing
connection service to end users through inter-connected VNFs. The
requirements listed at https://wiki.openstack.org/wiki/TelcoWorkingGroup
are mainly about the requirements for specific VNFs (like IMS, SBC, MME,
HSS, S/P GW, etc.) to run over the cloud, e.g. migrating the traditional
telco APP from prosperity hardware to the cloud. Not all NFV
requirements have been covered yet. Forgive me, there are so many telco
terms here.



What is "prosperity hardware"?


[joehuang] For example, Huawei's IMS can only run over Huawei's ATCA hardware; 
even if you bought Nokia ATCA, the IMS from Huawei would not be able to work over 
Nokia ATCA. The telco APP is sold together with the hardware. (More comments on 
ETSI: ETSI is also the standards organization for GSM, 3G, and 4G.)

Thanks,
-jay



Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-11 Thread joehuang
Hi, Jay,

Good question, see inline comments, pls.

Best Regards
Chaoyi Huang ( Joe Huang )

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Friday, December 12, 2014 1:58 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit 
recap and move forward

On 12/11/2014 04:02 AM, joehuang wrote:
>> [joehuang] The major challenge for the VDF use case is cross-OpenStack 
>> networking for tenants. The tenant's VMs/volumes may be allocated in 
>> different data centers geographically, but a virtual network
>> (L2/L3/FW/VPN/LB) should be built for each tenant automatically and 
>> isolated between tenants. Keystone federation can help automate 
>> authorization, but the cross-OpenStack network automation challenge is 
>> still there. Using a prosperity orchestration layer can solve the 
>> automation issue, but VDF doesn't like a prosperity API in the 
>> north-bound, because no ecosystem is available. And other issues, for 
>> example how to distribute images, also cannot be solved by Keystone 
>> federation.

>What is "prosperity orchestration layer" and "prosperity API"?

[joehuang] suppose that there are two OpenStack instances in the cloud: 
vendor A developed an orchestration layer called CMPa (cloud management 
platform A), and vendor B developed an orchestration layer CMPb. CMPa may define 
its boot-VM interface as CreateVM(Num, NameList, VMTemplate), while CMPb may 
define its boot-VM interface as bootVM(Name, projectID, flavorID, volumeSize, 
location, networkID). As the customer asks for more and more functions from the 
cloud, the API set of CMPa will become quite different from that of CMPb, and 
different from the OpenStack API. Then all apps which consume the OpenStack API, 
like Heat, will not be able to run on top of the prosperity software CMPa/CMPb, 
and the whole ecosystem of OpenStack API apps will be lost in the customer's cloud.
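
As a toy illustration of that divergence (hypothetical CMPa/CMPb signatures 
following the example above; servers.create is the standard novaclient call 
that the existing ecosystem is written against):

class CMPa(object):
    # Vendor A's private north-bound contract.
    def CreateVM(self, num, name_list, vm_template):
        pass

class CMPb(object):
    # Vendor B's private north-bound contract: the same operation,
    # but an incompatible API.
    def bootVM(self, name, project_id, flavor_id,
               volume_size, location, network_id):
        pass

# The standard OpenStack call that Heat, the CLIs and other tools already speak:
#     nova.servers.create(name, image, flavor, nics=[{"net-id": net_id}])
# An app written against the OpenStack API runs on any OpenStack cloud, but
# cannot run on top of CMPa or CMPb without a vendor-specific port.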

>> [joehuang] This is the ETSI requirements and use cases specification 
>> for NFV. ETSI is the home of the Industry Specification Group for NFV. 
>> In Figure 14 (virtualization of EPC) of this document, you can see 
>> that the operator's cloud includes many data centers providing 
>> connection service to end users through inter-connected VNFs. The 
>> requirements listed at
>> https://wiki.openstack.org/wiki/TelcoWorkingGroup are mainly about 
>> the requirements for specific VNFs (like IMS, SBC, MME, HSS, S/P GW,
>> etc.) to run over the cloud, e.g. migrating the traditional telco APP 
>> from prosperity hardware to the cloud. Not all NFV requirements have 
>> been covered yet. Forgive me, there are so many telco terms here.

>What is "prosperity hardware"?

[joehuang] For example, Huawei's IMS can only run over Huawei's ATCA hardware; 
even if you bought Nokia ATCA, the IMS from Huawei would not be able to work over 
Nokia ATCA. The telco APP is sold together with the hardware. (More comments on 
ETSI: ETSI is also the standards organization for GSM, 3G, and 4G.)
 
Thanks,
-jay



Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit recap and move forward

2014-12-11 Thread Joe Gordon
On Thu, Dec 11, 2014 at 1:02 AM, joehuang  wrote:

> Hello, Russell,
>
> Many thanks for your reply. See inline comments.
>
> -Original Message-
> From: Russell Bryant [mailto:rbry...@redhat.com]
> Sent: Thursday, December 11, 2014 5:22 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit
> recap and move forward
>
> >> On Fri, Dec 5, 2014 at 8:23 AM, joehuang  wrote:
> >>> Dear all & TC & PTL,
> >>>
> >>> In the 40-minute cross-project summit session "Approaches for
> >>> scaling out"[1], almost 100 people attended the meeting, and the
> >>> conclusion was that cells cannot cover the use cases and
> >>> requirements which the OpenStack cascading solution[2] aims to
> >>> address; the background, including use cases and requirements, is also
> >>> described in this mail.
>
> >I must admit that this was not the reaction I came away from the
> >discussion with. There was a lot of confusion, and as we started looking
> >closer, many (or perhaps most) people speaking up in the room did not agree
> >that the requirements being stated are things we want to try to satisfy.
>
> [joehuang] Could you please confirm your opinion: 1) cells cannot cover the
> use cases and requirements which the OpenStack cascading solution aims to
> address; 2) further discussion is needed on whether to satisfy the use cases
> and requirements.
>
> On 12/05/2014 06:47 PM, joehuang wrote:
> >>> Hello, Davanum,
> >>>
> >>> Thanks for your reply.
> >>>
> >>> Cells can't meet the demand for the use cases and requirements
> described in the mail.
>
> >You're right that cells doesn't solve all of the requirements you're
> discussing.
> >Cells addresses scale in a region.  My impression from the summit session
> > and other discussions is that the scale issues addressed by cells are
> considered
> > a priority, while the "global API" bits are not.
>
> [joehuang] Agreed, cells is the first-class priority.
>
> >>> 1. Use cases
> >>> a). Vodafone use case[4](OpenStack summit speech video from 9'02"
> >>> to 12'30" ), establishing globally addressable tenants which result
> >>> in efficient services deployment.
>
> > Keystone has been working on federated identity.
> >That part makes sense, and is already well under way.
>
> [joehuang] The major challenge for the VDF use case is cross-OpenStack
> networking for tenants. The tenant's VMs/volumes may be allocated in
> different data centers geographically, but a virtual network
> (L2/L3/FW/VPN/LB) should be built for each tenant automatically and
> isolated between tenants. Keystone federation can help automate
> authorization, but the cross-OpenStack network automation challenge is still
> there.
> Using a prosperity orchestration layer can solve the automation issue, but
> VDF doesn't like a prosperity API in the north-bound, because no ecosystem is
> available. And other issues, for example how to distribute images, also
> cannot be solved by Keystone federation.
>
> >>> b). Telefonica use case[5], creating a virtual DC (data center) across
> >>> multiple physical DCs with a seamless experience.
>
> >If we're talking about multiple DCs that are effectively local to each
> >other with high bandwidth and low latency, that's one conversation.
> >My impression is that you want to provide a single OpenStack API on top of
> >globally distributed DCs.  I honestly don't see that as a problem we should
> >be trying to tackle.  I'd rather continue to focus on making OpenStack work
> >*really* well split into regions.
> >I think some people are trying to use cells in a geographically distributed
> >way, as well.  I'm not sure that's a well understood or supported thing,
> >though.  Perhaps the folks working on the new version of cells can comment
> >further.
>
> [joehuang] 1) The split-region way cannot provide cross-OpenStack networking
> automation for tenants. 2) Exactly, the motivation for cascading is a "single
> OpenStack API on top of globally distributed DCs". Of course, cascading can
> also be used for DCs close to each other with high bandwidth and low
> latency. 3) Comments from the cells folks are welcome.
>
> >>> c). ETSI NFV use cases[6], especially use cases #1, #2, #3, #5, #6,
> >>> #8. For NFV cloud, by nature the cloud will be distributed but
> >>> inter-connected across many data centers.

Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-11 Thread Andrew Laski


On 12/11/2014 04:02 AM, joehuang wrote:

Hello, Russell,

Many thanks for your reply. See inline comments.

-Original Message-
From: Russell Bryant [mailto:rbry...@redhat.com]
Sent: Thursday, December 11, 2014 5:22 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit 
recap and move forward


On Fri, Dec 5, 2014 at 8:23 AM, joehuang  wrote:

Dear all & TC & PTL,

In the 40-minute cross-project summit session “Approaches for
scaling out”[1], almost 100 people attended the meeting, and the
conclusion was that cells cannot cover the use cases and
requirements which the OpenStack cascading solution[2] aims to
address; the background, including use cases and requirements, is also
described in this mail.

I must admit that this was not the reaction I came away from the discussion 
with. There was a lot of confusion, and as we started looking closer, many (or 
perhaps most) people speaking up in the room did not agree that the 
requirements being stated are things we want to try to satisfy.

[joehuang] Could you please confirm your opinion: 1) cells cannot cover the use 
cases and requirements which the OpenStack cascading solution aims to address; 
2) further discussion is needed on whether to satisfy the use cases and requirements.


Correct, cells does not cover all of the use cases that cascading aims 
to address.  But it was expressed that the use cases that are not 
covered may not be cases that we want addressed.



On 12/05/2014 06:47 PM, joehuang wrote:

Hello, Davanum,

Thanks for your reply.

Cells can't meet the demand for the use cases and requirements described in the 
mail.

You're right that cells doesn't solve all of the requirements you're discussing.
Cells addresses scale in a region.  My impression from the summit session
and other discussions is that the scale issues addressed by cells are considered
a priority, while the "global API" bits are not.

[joehuang] Agreed, cells is the first-class priority.


1. Use cases
a). Vodafone use case[4](OpenStack summit speech video from 9'02"
to 12'30" ), establishing globally addressable tenants which result
in efficient services deployment.

Keystone has been working on federated identity.
That part makes sense, and is already well under way.

[joehuang] The major challenge for the VDF use case is cross-OpenStack networking 
for tenants. The tenant's VMs/volumes may be allocated in different data centers 
geographically, but a virtual network (L2/L3/FW/VPN/LB) should be built for each 
tenant automatically and isolated between tenants. Keystone federation can help 
automate authorization, but the cross-OpenStack network automation challenge 
is still there.
Using a prosperity orchestration layer can solve the automation issue, but VDF 
doesn't like a prosperity API in the north-bound, because no ecosystem is 
available. And other issues, for example how to distribute images, also cannot 
be solved by Keystone federation.


b). Telefonica use case[5], creating a virtual DC (data center) across
multiple physical DCs with a seamless experience.

If we're talking about multiple DCs that are effectively local to each other
with high bandwidth and low latency, that's one conversation.
My impression is that you want to provide a single OpenStack API on top of
globally distributed DCs.  I honestly don't see that as a problem we should
be trying to tackle.  I'd rather continue to focus on making OpenStack work
*really* well split into regions.
I think some people are trying to use cells in a geographically distributed way,
as well.  I'm not sure that's a well understood or supported thing, though.
Perhaps the folks working on the new version of cells can comment further.

[joehuang] 1) The split-region way cannot provide cross-OpenStack networking 
automation for tenants. 2) Exactly, the motivation for cascading is a "single 
OpenStack API on top of globally distributed DCs". Of course, cascading can also 
be used for DCs close to each other with high bandwidth and low latency. 
3) Comments from the cells folks are welcome.


Cells can handle a single API on top of globally distributed DCs.  I 
have spoken with a group that is doing exactly that.  But it requires 
that the API is a trusted part of the OpenStack deployments in those 
distributed DCs.





c). ETSI NFV use cases[6], especially use cases #1, #2, #3, #5, #6,
#8. For NFV cloud, by nature the cloud will be distributed but
inter-connected across many data centers.

I'm afraid I don't understand this one.  In many conversations about NFV, I 
haven't heard this before.

[joehuang] This is the ETSI requirements and use cases specification for NFV. 
ETSI is the home of the Industry Specification Group for NFV. In Figure 14 
(virtualization of EPC) of this document, you can see that the operator's 
cloud includes many data centers providing connection service to end users 
through inter-connected VNFs.

Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-11 Thread Jay Pipes

On 12/11/2014 04:02 AM, joehuang wrote:

[joehuang] The major challenge for the VDF use case is cross-OpenStack
networking for tenants. The tenant's VMs/volumes may be allocated in
different data centers geographically, but a virtual network
(L2/L3/FW/VPN/LB) should be built for each tenant automatically and
isolated between tenants. Keystone federation can help automate
authorization, but the cross-OpenStack network automation challenge is
still there. Using a prosperity orchestration layer can solve the
automation issue, but VDF doesn't like a prosperity API in the
north-bound, because no ecosystem is available. And other issues, for
example how to distribute images, also cannot be solved by Keystone
federation.


What is "prosperity orchestration layer" and "prosperity API"?


[joehuang] This is the ETSI requirements and use cases specification
for NFV. ETSI is the home of the Industry Specification Group for
NFV. In Figure 14 (virtualization of EPC) of this document, you can
see that the operator's cloud includes many data centers providing
connection service to end users through inter-connected VNFs. The
requirements listed at
https://wiki.openstack.org/wiki/TelcoWorkingGroup are mainly about
the requirements for specific VNFs (like IMS, SBC, MME, HSS, S/P GW,
etc.) to run over the cloud, e.g. migrating the traditional telco APP
from prosperity hardware to the cloud. Not all NFV requirements have
been covered yet. Forgive me, there are so many telco terms here.


What is "prosperity hardware"?

Thanks,
-jay



Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-11 Thread joehuang
Hello, Russell,

Many thanks for your reply. See inline comments.

-Original Message-
From: Russell Bryant [mailto:rbry...@redhat.com] 
Sent: Thursday, December 11, 2014 5:22 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit 
recap and move forward

>> On Fri, Dec 5, 2014 at 8:23 AM, joehuang  wrote:
>>> Dear all & TC & PTL,
>>>
>>> In the 40-minute cross-project summit session “Approaches for 
>>> scaling out”[1], almost 100 people attended the meeting, and the 
>>> conclusion was that cells cannot cover the use cases and 
>>> requirements which the OpenStack cascading solution[2] aims to 
>>> address; the background, including use cases and requirements, is also 
>>> described in this mail.

>I must admit that this was not the reaction I came away from the discussion 
>with. There was a lot of confusion, and as we started looking closer, many (or 
>perhaps most) people speaking up in the room did not agree that the 
>requirements being stated are things we want to try to satisfy.

[joehuang] Could you please confirm your opinion: 1) cells cannot cover the use 
cases and requirements which the OpenStack cascading solution aims to address; 
2) further discussion is needed on whether to satisfy the use cases and requirements.

On 12/05/2014 06:47 PM, joehuang wrote:
>>> Hello, Davanum,
>>> 
>>> Thanks for your reply.
>>> 
>>> Cells can't meet the demand for the use cases and requirements described in 
>>> the mail. 

>You're right that cells doesn't solve all of the requirements you're 
>discussing.  
>Cells addresses scale in a region.  My impression from the summit session 
> and other discussions is that the scale issues addressed by cells are 
> considered 
> a priority, while the "global API" bits are not.

[joehuang] Agreed, cells is the first-class priority.

>>> 1. Use cases
>>> a). Vodafone use case[4](OpenStack summit speech video from 9'02"
>>> to 12'30" ), establishing globally addressable tenants which result 
>>> in efficient services deployment.

> Keystone has been working on federated identity.  
>That part makes sense, and is already well under way.

[joehuang] The major challenge for the VDF use case is cross-OpenStack networking 
for tenants. The tenant's VMs/volumes may be allocated in different data centers 
geographically, but a virtual network (L2/L3/FW/VPN/LB) should be built for each 
tenant automatically and isolated between tenants. Keystone federation can help 
automate authorization, but the cross-OpenStack network automation challenge 
is still there.
Using a prosperity orchestration layer can solve the automation issue, but VDF 
doesn't like a prosperity API in the north-bound, because no ecosystem is 
available. And other issues, for example how to distribute images, also cannot 
be solved by Keystone federation.
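
As a minimal sketch of what "no cross-OpenStack network automation" means today 
(python-neutronclient calls; the endpoints, credentials and names are 
placeholders): the same tenant network has to be created separately against 
each site's Neutron, and nothing builds the inter-site L2/L3 link.

from neutronclient.v2_0 import client

SITES = {
    "dc-east": "http://keystone.dc-east.example.com:5000/v2.0",
    "dc-west": "http://keystone.dc-west.example.com:5000/v2.0",
}

for site, auth_url in SITES.items():
    neutron = client.Client(username="tenant-admin", password="secret",
                            tenant_name="demo", auth_url=auth_url)
    # Each call creates an isolated network inside one OpenStack instance
    # only; the inter-site connection must still be built manually or by a
    # proprietary orchestrator.
    net = neutron.create_network({"network": {"name": "tenant-net"}})
    neutron.create_subnet({"subnet": {"network_id": net["network"]["id"],
                                      "ip_version": 4,
                                      "cidr": "10.0.0.0/24"}})
    print("created tenant-net in %s" % site)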

>>> b). Telefonica use case[5], creating a virtual DC (data center) across 
>>> multiple physical DCs with a seamless experience.

>If we're talking about multiple DCs that are effectively local to each other 
>with high bandwidth and low latency, that's one conversation.  
>My impression is that you want to provide a single OpenStack API on top of 
>globally distributed DCs.  I honestly don't see that as a problem we should 
>be trying to tackle.  I'd rather continue to focus on making OpenStack work
>*really* well split into regions.
> I think some people are trying to use cells in a geographically distributed 
> way, 
> as well.  I'm not sure that's a well understood or supported thing, though.  
> Perhaps the folks working on the new version of cells can comment further.

[joehuang] 1) The split-region way cannot provide cross-OpenStack networking 
automation for tenants. 2) Exactly, the motivation for cascading is a "single 
OpenStack API on top of globally distributed DCs". Of course, cascading can 
also be used for DCs close to each other with high bandwidth and low latency. 
3) Comments from the cells folks are welcome.

>>> c). ETSI NFV use cases[6], especially use cases #1, #2, #3, #5, #6, 
>>> #8. For NFV cloud, by nature the cloud will be distributed but 
>>> inter-connected across many data centers.

>I'm afraid I don't understand this one.  In many conversations about NFV, I 
>haven't heard this before.

[joehuang] This is the ETSI requirements and use cases specification for NFV. 
ETSI is the home of the Industry Specification Group for NFV. In Figure 14 
(virtualization of EPC) of this document, you can see that the operator's 
cloud includes many data centers providing connection service to end users 
through inter-connected VNFs.

Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-10 Thread Russell Bryant
>> On Fri, Dec 5, 2014 at 8:23 AM, joehuang  wrote:
>>> Dear all & TC & PTL,
>>>
>>> In the 40-minute cross-project summit session “Approaches for
>>> scaling out”[1], almost 100 people attended the meeting, and the
>>> conclusion was that cells cannot cover the use cases and
>>> requirements which the OpenStack cascading solution[2] aims to
>>> address; the background, including use cases and requirements, is
>>> also described in this mail.

I must admit that this was not the reaction I came away from the
discussion with.  There was a lot of confusion, and as we started
looking closer, many (or perhaps most) people speaking up in the room
did not agree that the requirements being stated are things we want to
try to satisfy.

On 12/05/2014 06:47 PM, joehuang wrote:
> Hello, Davanum,
> 
> Thanks for your reply.
> 
> Cells can't meet the demand for the use cases and requirements described in 
> the mail. 

You're right that cells doesn't solve all of the requirements you're
discussing.  Cells addresses scale in a region.  My impression from the
summit session and other discussions is that the scale issues addressed
by cells are considered a priority, while the "global API" bits are not.

>> 1. Use cases
>> a). Vodafone use case[4](OpenStack summit speech video from 9'02"
>> to 12'30" ), establishing globally addressable tenants which result
>> in efficient services deployment.

Keystone has been working on federated identity.  That part makes sense,
and is already well under way.

>> b). Telefonica use case[5], creating a virtual DC (data center) across
>> multiple physical DCs with a seamless experience.

If we're talking about multiple DCs that are effectively local to each
other with high bandwidth and low latency, that's one conversation.  My
impression is that you want to provide a single OpenStack API on top of
globally distributed DCs.  I honestly don't see that as a problem we
should be trying to tackle.  I'd rather continue to focus on making
OpenStack work *really* well split into regions.

I think some people are trying to use cells in a geographically
distributed way, as well.  I'm not sure that's a well understood or
supported thing, though.  Perhaps the folks working on the new version
of cells can comment further.

>> c). ETSI NFV use cases[6], especially use cases #1, #2, #3, #5, #6,
>> #8. For NFV cloud, by nature the cloud will be distributed but
>> inter-connected across many data centers.

I'm afraid I don't understand this one.  In many conversations about
NFV, I haven't heard this before.

>>
>> 2. Requirements
>> a). The operator has a multi-site cloud; each site can use one or
>> multiple vendors' OpenStack distributions.

Is this a technical problem, or is it a business problem of vendors not
wanting to support a mixed environment that you're trying to work around
with a technical solution?

>> b). Each site has its own requirements and upgrade schedule while
>> maintaining the standard OpenStack API.
>> c). The multi-site cloud must provide unified resource management
>> with a global open API exposed, for example creating a virtual DC across
>> multiple physical DCs with a seamless experience.

>> Although a prosperity orchestration layer could be developed for
>> the multi-site cloud, it would be a prosperity API on the north-bound
>> interface. The cloud operators want an ecosystem-friendly global
>> open API for the multi-site cloud for global access.

I guess the question is: do we see a "global API" as something we want
to accomplish?  What you're talking about is huge, and I'm not even sure
how you would expect it to work in some cases (like networking).

In any case, to be as clear as possible, I'm not convinced this is
something we should be working on.  I'm going to need to see much more
overwhelming support for the idea before helping to figure out any
further steps.

-- 
Russell Bryant



Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-05 Thread joehuang
Hello, Davanum,

Thanks for your reply.

Cells can't meet the demand for the use cases and requirements described in the 
mail. 

> 1. Use cases
> a). Vodafone use case[4] (OpenStack summit speech video from 9'02" to
> 12'30"), establishing globally addressable tenants which result in efficient
> services deployment.
> b). Telefonica use case[5], creating a virtual DC (data center) across multiple 
> physical DCs with a seamless experience.
> c). ETSI NFV use cases[6], especially use cases #1, #2, #3, #5, #6, #8. For 
> NFV cloud, by nature the cloud will be distributed but inter-connected 
> across many data centers.
>
> 2. Requirements
> a). The operator has a multi-site cloud; each site can use one or multiple 
> vendors' OpenStack distributions.
> b). Each site has its own requirements and upgrade schedule while 
> maintaining the standard OpenStack API.
> c). The multi-site cloud must provide unified resource management with a global 
> open API exposed, for example creating a virtual DC across multiple physical 
> DCs with a seamless experience.
> Although a prosperity orchestration layer could be developed for the 
> multi-site cloud, it would be a prosperity API on the north-bound interface. The 
> cloud operators want an ecosystem-friendly global open API for the 
> multi-site cloud for global access.


Best Regards

Chaoyi Huang ( joehuang )


From: Davanum Srinivas [dava...@gmail.com]
Sent: 05 December 2014 21:56
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit 
recap and move forward

Joe,

Related to this topic, At the summit, there was a session on Cells v2
and following up on that there have been BP(s) filed in Nova
championed by Andrew -
https://review.openstack.org/#/q/owner:%22Andrew+Laski%22+status:open,n,z

thanks,
dims

On Fri, Dec 5, 2014 at 8:23 AM, joehuang  wrote:
> Dear all & TC & PTL,
>
> In the 40-minute cross-project summit session “Approaches for scaling 
> out”[1], almost 100 people attended the meeting, and the conclusion was that 
> cells cannot cover the use cases and requirements which the OpenStack 
> cascading solution[2] aims to address; the background, including use cases and 
> requirements, is also described in this mail.
>
> After the summit, we ported the PoC[3] source code from an IceHouse base 
> to a Juno base.
>
> Now, let's move forward:
>
> The major task is to introduce a new driver/agent into the existing core 
> projects, for the core idea of cascading is to add Nova as the hypervisor 
> backend of Nova, Cinder as the block storage backend of Cinder, Neutron as the 
> backend of Neutron, Glance as one image location of Glance, and Ceilometer as 
> the store of Ceilometer.
> a). A cross-program decision is needed on whether to run cascading in 
> incubated-project mode or to register BPs separately in each involved project. 
> CI for cascading is quite different from the traditional test environment; at 
> least 3 OpenStack instances are required for cross-OpenStack networking test 
> cases.
> b). A volunteer is needed as the cross-project coordinator.
> c). Volunteers are needed for implementation and CI.
>
> Background of OpenStack cascading vs cells:
>
> 1. Use cases
> a). Vodafone use case[4] (OpenStack summit speech video from 9'02" to
> 12'30"), establishing globally addressable tenants which result in efficient
> services deployment.
> b). Telefonica use case[5], creating a virtual DC (data center) across multiple 
> physical DCs with a seamless experience.
> c). ETSI NFV use cases[6], especially use cases #1, #2, #3, #5, #6, #8. For 
> NFV cloud, by nature the cloud will be distributed but inter-connected 
> across many data centers.
>
> 2. Requirements
> a). The operator has a multi-site cloud; each site can use one or multiple 
> vendors' OpenStack distributions.
> b). Each site has its own requirements and upgrade schedule while 
> maintaining the standard OpenStack API.
> c). The multi-site cloud must provide unified resource management with a global 
> open API exposed, for example creating a virtual DC across multiple physical 
> DCs with a seamless experience.
> Although a prosperity orchestration layer could be developed for the 
> multi-site cloud, it would be a prosperity API on the north-bound interface. The 
> cloud operators want an ecosystem-friendly global open API for the 
> multi-site cloud for global access.
>
> 3. What problems does cascading solve that cells doesn't cover:
> The OpenStack cascading solution is "OpenStack orchestrates OpenStacks". The 
> core architecture idea of OpenStack cascading is to add Nova as the hypervisor 
> backend of Nova, Cinder as the block storage backend of Cinder, Neutron as the 
> backend of Neutron, Glance as one image location of Glance, and Ceilometer as 
> the store of Ceilometer.
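
A rough sketch of this "OpenStack orchestrates OpenStacks" idea on the Nova 
side (illustrative only, not the actual tricircle PoC code; the child 
endpoint, credentials and instance field access are placeholders, and the 
spawn() signature follows the Juno-era nova.virt.driver.ComputeDriver 
interface):

from novaclient import client as nova_client
from nova.virt import driver

class CascadingComputeDriver(driver.ComputeDriver):
    """A virt driver whose "hypervisor" is a whole child OpenStack."""

    def __init__(self, virtapi):
        super(CascadingComputeDriver, self).__init__(virtapi)
        # The south-bound interface is the standard Nova API, so any
        # vendor's distribution or version can sit underneath.
        self.child = nova_client.Client("2", "admin", "secret", "service",
                                        "http://child.example.com:5000/v2.0")

    def spawn(self, context, instance, image_meta, injected_files,
              admin_password, network_info=None, block_device_info=None):
        # Delegate the boot to the child site's Nova instead of a local
        # libvirt/KVM hypervisor.
        self.child.servers.create(name=instance["display_name"],
                                  image=image_meta["id"],
                                  flavor=instance["instance_type_id"])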

Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-05 Thread Davanum Srinivas
Joe,

Related to this topic, At the summit, there was a session on Cells v2
and following up on that there have been BP(s) filed in Nova
championed by Andrew -
https://review.openstack.org/#/q/owner:%22Andrew+Laski%22+status:open,n,z

thanks,
dims

On Fri, Dec 5, 2014 at 8:23 AM, joehuang  wrote:
> Dear all & TC & PTL,
>
> In the 40-minute cross-project summit session “Approaches for scaling 
> out”[1], almost 100 people attended the meeting, and the conclusion was that 
> cells cannot cover the use cases and requirements which the OpenStack 
> cascading solution[2] aims to address; the background, including use cases and 
> requirements, is also described in this mail.
>
> After the summit, we ported the PoC[3] source code from an IceHouse base 
> to a Juno base.
>
> Now, let's move forward:
>
> The major task is to introduce a new driver/agent into the existing core 
> projects, for the core idea of cascading is to add Nova as the hypervisor 
> backend of Nova, Cinder as the block storage backend of Cinder, Neutron as the 
> backend of Neutron, Glance as one image location of Glance, and Ceilometer as 
> the store of Ceilometer.
> a). A cross-program decision is needed on whether to run cascading in 
> incubated-project mode or to register BPs separately in each involved project. 
> CI for cascading is quite different from the traditional test environment; at 
> least 3 OpenStack instances are required for cross-OpenStack networking test 
> cases.
> b). A volunteer is needed as the cross-project coordinator.
> c). Volunteers are needed for implementation and CI.
>
> Background of OpenStack cascading vs cells:
>
> 1. Use cases
> a). Vodafone use case[4] (OpenStack summit speech video from 9'02" to
> 12'30"), establishing globally addressable tenants which result in efficient
> services deployment.
> b). Telefonica use case[5], creating a virtual DC (data center) across multiple 
> physical DCs with a seamless experience.
> c). ETSI NFV use cases[6], especially use cases #1, #2, #3, #5, #6, #8. For 
> NFV cloud, by nature the cloud will be distributed but inter-connected 
> across many data centers.
>
> 2. Requirements
> a). The operator has a multi-site cloud; each site can use one or multiple 
> vendors' OpenStack distributions.
> b). Each site has its own requirements and upgrade schedule while 
> maintaining the standard OpenStack API.
> c). The multi-site cloud must provide unified resource management with a global 
> open API exposed, for example creating a virtual DC across multiple physical 
> DCs with a seamless experience.
> Although a prosperity orchestration layer could be developed for the 
> multi-site cloud, it would be a prosperity API on the north-bound interface. The 
> cloud operators want an ecosystem-friendly global open API for the 
> multi-site cloud for global access.
>
> 3. What problems does cascading solve that cells doesn't cover:
> The OpenStack cascading solution is "OpenStack orchestrates OpenStacks". The 
> core architecture idea of OpenStack cascading is to add Nova as the hypervisor 
> backend of Nova, Cinder as the block storage backend of Cinder, Neutron as 
> the backend of Neutron, Glance as one image location of Glance, and Ceilometer 
> as the store of Ceilometer. Thus OpenStack is able to orchestrate OpenStacks 
> (from different vendors' distributions, or different versions) which may be 
> located in different sites (or data centers) through the OpenStack API, 
> while the cloud still exposes the OpenStack API as the north-bound API at the 
> cloud level.
>
> 4. Why cells can’t do that:
> Cells provides scale-out capability to Nova, but from the point of view of 
> OpenStack as a whole, it still works like one OpenStack instance.
> a). If cells is deployed with shared Cinder, Neutron, Glance, and Ceilometer, 
> this approach provides the multi-site cloud with one unified API endpoint and 
> unified resource management, but consolidation of multi-vendor/multi-version 
> OpenStack instances across one or more data centers cannot be fulfilled.
> b). Each site installs one child cell and accompanying standalone Cinder, 
> Neutron (or Nova-network), Glance, and Ceilometer. This approach makes 
> multi-vendor/multi-version OpenStack distribution co-existence across multiple 
> sites feasible, but the requirement for a unified API endpoint and unified 
> resource management cannot be fulfilled. Cross-Neutron networking automation 
> is also missing, and would otherwise have to be done manually or through a 
> proprietary orchestration layer.
>
> For more information about cascading and cells, please refer to the 
> discussion thread before Paris Summit [7].
>
> [1]Approaches for scaling out: 
> https://etherpad.openstack.org/p/kilo-crossproject-scale-out-openstack
> [2]OpenStack cascading solution: 
> https://wiki.openstack.org/wiki/OpenStack_cascading_solution
> [3]Cascading PoC: https://github.com/stackforge/tricircle
> [4]Vodafone use case (9'02" to 12'30"): 
> https://www.youtube.com/watch?v=-KOJYvhmxQI
> [5]Telefonica use case: 
> http://www.telefonica.com/en/descargas/mwc/prese