Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-30 Thread A, Keshava
Hi,
Can VM migration happen across PODs (Zones)?
If so, how is the reachability of the VM addressed dynamically without any
packet loss?

Thanks & Regards,
keshava


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-30 Thread joehuang
Hello, Keshava,

Live migration is allowed inside one pod (one cascaded OpenStack instance);
cross-pod live migration is not supported yet.

But cold migration can be done between pods, even across data centers.

Cross-pod live migration will be studied in the future.

Best Regards

Chaoyi Huang ( joehuang )




Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-30 Thread Hly
hi,

Network reachability is not an issue for live migration; it is the same as
for cold migration. The challenge is near-realtime ordering control of the
interaction between parent proxies, child virt drivers, agents, and the
libvirt library.

Wu


Sent from my iPad



Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-30 Thread Hly


Sent from my iPad

On 2014-10-30, at 8:05 PM, Hly henry4...@gmail.com wrote:

 hi,
 
 Network reachability is not an issue for live migration; it is the same as
 for cold migration. The challenge is near-realtime ordering control of the
 interaction between parent proxies, child virt drivers, agents, and the
 libvirt library.
 
 Wu
 

Also, it destroys the principle of "only REST between PODs", so we may study
it in some special PoC cases.


 

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-30 Thread A, Keshava
OK,
You may need to think of bringing BGP routing between PODs to support live
migration.


Thanks & Regards,
keshava



Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-29 Thread keshava
This is a very interesting problem to solve.
I am curious to know how reachability is provided across different
Datacenters.
How do we know which VM is part of which Datacenter?
A VM may be in a different Zone but under the same DC, or in a different DC
entirely.

How is this problem solved?


thanks & regards,
keshava



--
View this message in context: 
http://openstack.10931.n7.nabble.com/all-tc-Multi-clouds-integration-by-OpenStack-cascading-tp54115p56323.html
Sent from the Developer mailing list archive at Nabble.com.



Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-29 Thread Wuhongning
Hi keshava,

Thanks for your interest in Cascading. Here is a very simple explanation:

Basically, the Datacenter is not in the two-level tree of cascading. We use
the term POD to represent a cascaded child OpenStack (the same meaning as your
term Zone?). There may be a single POD or multiple PODs in one Datacenter,
just like below:

(A, B, C)  ...  (D, E)  ...  (F)  ...   (G)

Each letter represents a POD (child OpenStack), while each pair of parentheses
represents a Datacenter.

Each POD has a corresponding virtual host node in the parent OpenStack, so
when the scheduler of any project (nova/neutron/cinder...) locates a host
node, the resource POD is determined, and its geo-located Datacenter follows
as a side effect. Cascading doesn't schedule by Datacenter directly; the DC is
just an attribute of the POD (for example, we can configure a host aggregate
to identify a DC with multiple PODs). The upper scale of a POD is fixed, maybe
several hundred nodes, so a super-large DC with tens of thousands of servers
could be built from modularized PODs, avoiding the difficulty of tuning and
maintaining such a huge monolithic OpenStack.
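
To make this mapping concrete, here is a rough sketch (illustrative only, not
the PoC code; credentials and names are placeholders) of how a POD's
nova-proxy host could be published as an AZ in the parent OpenStack through
the standard host-aggregate API:

    from novaclient import client as nova_client

    # Placeholders - credentials of the parent (cascading) OpenStack.
    nova = nova_client.Client("2", "admin", "secret", "admin",
                              "http://parent-keystone:5000/v2.0")

    # One aggregate per POD; its availability zone is what tenants see.
    # A DC spanning several PODs would simply own several such aggregates.
    agg = nova.aggregates.create("pod-a-aggregate", "AZ-pod-a")

    # The nova-proxy host stands in for the whole child OpenStack, so
    # adding it makes "AZ-pod-a" schedulable in the parent.
    nova.aggregates.add_host(agg.id, "nova-proxy-pod-a")

    # A boot request pinned to the AZ is thereby pinned to the POD.
    nova.servers.create(name="vm1", image="IMAGE-UUID", flavor="FLAVOR-ID",
                        availability_zone="AZ-pod-a")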

Next, do you mean networking reachability? Sorry, given the limitations of a
mail post I can only give a very simple idea: in the parent OpenStack, L2pop
and DVR are used, so the L2/L3 agent-proxy in each virtual host node can get
all the VM reachability information of the other PODs and then set it into the
local POD via the Neutron REST API. However, cascading depends on some
features that don't exist yet in current Neutron, like L2GW, pluggable
external networks, W-E FWaaS in DVR, centralized FIP in DVR... so we have to
carry some small patches up front. In the future, if these features are
merged, this patch code can be removed.
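
As a rough illustration of that push (the client calls are standard
python-neutronclient, but the "shadow port" idea and the field values here are
my assumption, not the actual patch code):

    from neutronclient.v2_0 import client as neutron_client

    # Placeholders - credentials/endpoint of the local (cascaded) POD.
    local_pod = neutron_client.Client(
        username="admin", password="secret", tenant_name="admin",
        auth_url="http://pod-a-keystone:5000/v2.0")

    # Suppose the proxy learned this VM's address from another POD via L2pop.
    remote_vm_mac = "fa:16:3e:00:00:01"
    remote_vm_ip = "10.0.0.8"

    # Create a "shadow" port carrying the remote VM's MAC and fixed IP on
    # the corresponding local network, so the local L2pop advertises a
    # tunnel endpoint for it. The device_owner value is illustrative only.
    local_pod.create_port({"port": {
        "network_id": "LOCAL-NET-UUID",
        "mac_address": remote_vm_mac,
        "fixed_ips": [{"ip_address": remote_vm_ip}],
        "device_owner": "compute:remote",
    }})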

Indeed, Neutron is the most challenging part of cascading: even without
counting those proxies in the parent OpenStack virtual host nodes, Neutron
patches account for 85% or more of the LOC in the whole project.

Regards,
Wu



Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-29 Thread joehuang
Hello, Keshava,

Wu described the simplified picture of cascading in his reply (see above). If
you attend the Paris summit, you can join the cross-project design summit
session proposed for scaling out; cascading will also be discussed in that
session, and we can have a f2f talk in more detail.

Best Regards

Chaoyi Huang ( joehuang )



Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-23 Thread henry hly
Hi Phil,

Thanks for your feedback, and for your patience in reading this long history :)
See comments inline.

On Wed, Oct 22, 2014 at 5:59 PM, Day, Phil philip@hp.com wrote:
 [snip - quote of my mail of 08 October; see Phil's original reply further
 below]

 I've read through the thread and the various links, and to me this still 
 sounds an awful lot like having multiple regions in Keystone.

 First of all I think we're in danger of getting badly mixed up in terminology
 here around AZs, which is an awfully overloaded term - esp when we make
 comparisons to AWS AZs.  Whether we like the current OpenStack usage of these
 terms or not, let's at least stick to how they are currently defined and used
 in OpenStack:

 AZs - A scheduling concept in Nova and Cinder.  Simply provides some
 isolation semantic about a compute host or storage server.  Nothing to do
 with explicit physical or geographical location, although some degree of that
 (separate racks, power, etc) is usually implied.

 Regions - A keystone concept for a collection of OpenStack endpoints.  They
 may be distinct (a completely isolated set of OpenStack services) or overlap
 (some shared services).  OpenStack clients support explicit user selection of
 a region.

 Cells - A scalability / fault-isolation concept within Nova.  Because Cells
 aspires to provide all Nova features transparently across cells, this kind of
 acts like multiple regions where only the Nova service is distinct
 (Networking has to be common, Glance has to be common or at least federated
 in a transparent way, etc.).  The difference from regions is that the user
 doesn't have to make an explicit region choice - they get a single Nova URL
 for all cells.  From what I remember, Cells originally started out also using
 the existing APIs as the way to connect the cells together, but had to move
 away from that because of the performance overhead of going through multiple
 layers.



Agreed, it's very clear now. However, isolation is not only about hardware
and facility faults; a REST API is preferred in terms of system-level
isolation, despite the theoretical protocol serialization overhead.


 Now with Cascading it seems that we're pretty much building on the Regions 
 concept, wrapping it behind a single set of endpoints for user convenience, 
 overloading the term AZ

Sorry, I'm not very certain of the meaning of overloading. It's just a
configuration choice by the admin in the wrapper OpenStack. As you mentioned,
there is no explicit definition of what an AZ should be, so Cascading chooses
to map it to a child OpenStack. Surely we could use another concept or invent
a new one instead of AZ, but AZ is the most appropriate because it shares the
same isolation semantics as those children.

 to re-expose those sets of services to allow the user to choose between them
 (doesn't this kind of negate the advantage of not having to specify the
 region in the client - is that really such a big deal for users?), and doing
 something to provide a sort of federated Neutron service - because as we all
 know the hard part in all of this is how you handle the Networking.

 It kind of feels to me that if we just concentrated on the part of this that
 is working out how to distribute/federate Neutron, then we'd have a solution
 that could be mapped as easily to cells and/or regions - and I wonder then
 why we really need yet another aggregation concept?


I agree that the gap between cascading AZs and standalone endpoints is not so
huge for Nova and Cinder. However, wrapping is strongly needed by customer
feedback for Neutron, especially for those who operate multiple internally
connected DCs. They don't like to force tenants to create multiple routing
domains connected with explicit VPNaaS. Instead they prefer a simple L3 router
connecting subnets and
ports from

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-23 Thread joehuang
Hi,

Because I was not able to find a meeting room for a deep dive into OpenStack
cascading before the design summit, you are welcome to have a f2f conversation
about cascading before the design summit. I plan to stay in Paris from Oct.30
to Nov.8; if you have any doubt or question, please feel free to contact me.
All conversation is for clarification / idea-exchange purposes, not for any
secret agreement. It is necessary before the design summit: a design summit
session is only 40 minutes, and if all 40 minutes are spent on basic questions
and clarification, then no valuable conclusion can be drawn in the meeting. So
I want to work in client-server mode: anyone who is interested in talking
about cascading with me, just tell me when you will come to the hotel where I
stay in Paris, and we can chat to reduce misunderstanding, get a clearer
picture, and focus on what needs to be discussed and agreed during the design
summit session.

It kind of feels to me that if we just concentrated on the part of this that
is working out how to distribute/federate Neutron, then we'd have a solution
that could be mapped as easily to cells and/or regions - and I wonder then why
we really need yet another aggregation concept?

My answer is that it seems feasible but cannot meet the multi-site cloud
demand (that's the driving force for cascading):
1) Large cloud operators ask multiple vendors to build the distributed but
unified multi-site cloud together, and each vendor has his own OpenStack-based
solution. If shared Nova/Cinder with federated Neutron were used, the
cross-data-center integration through RPC messages over multi-vendor
infrastructure would be very difficult, with no clear responsibility boundary;
that leads to difficulty in troubleshooting, upgrades, etc.
2) A RESTful API / CLI is required at each site to keep the cloud always
workable and manageable. With shared Nova/Cinder and federated Neutron, some
data centers would not be able to expose a RESTful API/CLI for management
purposes.
3) The unified cloud needs to expose an open and standard API. With shared
Nova/Cinder and federated Neutron, this point can be achieved.

Best Regards

Chaoyi Huang ( joehuang )


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-23 Thread Day, Phil
Hi,

 -Original Message-
 From: joehuang [mailto:joehu...@huawei.com]
 Sent: 23 October 2014 09:59
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack
 cascading
 
 Hi,
 
 Because I was not able to find a meeting room for a deep dive into OpenStack
 cascading before the design summit, you are welcome to have a f2f
 conversation about cascading before the design summit. [snip]
 
Sure, I'll certainly try and find some time to meet and talk.


 It kind of feels to me that if we just concentrated on the part of this that
 is working out how to distribute/federate Neutron, then we'd have a solution
 that could be mapped as easily to cells and/or regions - and I wonder then
 why we really need yet another aggregation concept?

 My answer is that it seems feasible but cannot meet the multi-site cloud
 demand (that's the driving force for cascading):
 1) Large cloud operators ask multiple vendors to build the distributed but
 unified multi-site cloud together, and each vendor has his own
 OpenStack-based solution. If shared Nova/Cinder with federated Neutron were
 used, the cross-data-center integration through RPC messages over
 multi-vendor infrastructure would be very difficult, with no clear
 responsibility boundary; that leads to difficulty in troubleshooting,
 upgrades, etc.
So if the scope of what you're doing is to provide a single API across
multiple clouds that are being built and operated independently, then I'm not
sure how you can impose enough consistency to guarantee any operations.  What
if one of those clouds has Nova AZs configured, and you're using (from what I
understand) AZs to try and route to a specific cloud?  How do you get image
and flavor consistency across the clouds?

I picked up on the Network aspect because that seems to be something you've
covered in some depth here
https://docs.google.com/presentation/d/1wIqWgbZBS_EotaERV18xYYA99CXeAa4tv6v_3VlD2ik/edit?pli=1#slide=id.g390a1cf23_2_149
so I'd assumed it was an intrinsic part of your proposal.  Now I'm even less
clear on the scope of what you're trying to achieve ;-(

If this is a federation layer for, in effect, arbitrary OpenStack clouds, then
it kind of feels like it can't be anything other than an aggregator of queries
(list the VMs in all of the clouds you know about, and show the results in one
output).  If you have to make API calls into many clouds (when only one of
them may have any results) then that feels like it would be a performance
issue.  If you're going to cache the results somehow, then in effect you need
the Cells approach for propagating up results, which means the sub-clouds have
to be co-operating.
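
A minimal sketch of the aggregator pattern I mean (illustrative; assume one
novaclient handle per child cloud):

    def list_all_servers(child_novas):
        # Fan the query out to every child cloud and merge the results;
        # when only one cloud has matches, the rest are wasted round trips.
        servers = []
        for child in child_novas:
            servers.extend(child.servers.list())
        return servers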

Maybe I missed it somewhere, but is there a clear write-up of the restrictions
/ expectations on sub-clouds to work in this model?

Kind Regards
Phil


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-23 Thread joehuang
Hi, Phil,

I am sorry there was not enough information in the document for you to
understand cascading. If we can talk f2f, I can explain much more in detail.
But in short, I will give a simplified picture of how a virtual machine is
booted:

The general process to boot a VM is like this:
Nova API -> Nova Scheduler -> Nova Compute (manager) -> Nova Compute (libvirt
driver) -> Nova Compute (KVM)

After OpenStack cascading is introduced, the process is a little different and
can be divided into two parts:
1. inside the cascading OpenStack: Nova API -> Nova Scheduler -> Nova Proxy ->
2. inside the cascaded OpenStack: Nova API -> Nova Scheduler -> Nova Compute
(manager) -> Nova Compute (libvirt driver) -> Nova Compute (KVM)

After scheduling to a Nova proxy, the instance object is persisted in the DB
at the cascading layer, and VM queries to the cloud are answered by the
cascading Nova API from the cascading-layer DB, with no need to touch the
cascaded Nova. (It's not a bad thing to persist the data in the cascading
layer: quota control, system healing and consistency correction, fast user
experience, etc...)

All VM generation in the cascaded OpenStack is no different from the general
VM boot process, and it runs asynchronously from the cascading layer's point
of view.

How does the scheduler in the cascading layer select the proper Nova proxy?
If the hosts of a cascaded Nova were added to AZ1 (AZ: availability zone for
short), then the Nova proxy (a host in the cascading layer) is also added to
AZ1 in the cascading layer, and this Nova proxy is configured to send all
requests to the endpoint of the corresponding cascaded Nova. The scheduler is
configured to use the availability-zone filter only; every VM boot request
carries an AZ parameter, and that's the key for scheduling in the cascading
layer. Host aggregates can be handled in the same way.
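
As a simplified restatement (not nova's actual filter code) of why the
availability-zone filter alone is enough here: each proxy host belongs to
exactly one POD's AZ, so matching the request's AZ pins the boot to exactly
one cascaded OpenStack.

    def az_filter(proxy_hosts, requested_az):
        # Each proxy host carries the AZ of the POD it represents, so
        # the surviving hosts identify the target cascaded OpenStack.
        return [h for h in proxy_hosts
                if h.availability_zone == requested_az]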

After the Nova proxy receives the RPC message from the Nova scheduler, it does
not work like the libvirt driver and boot a VM on the local host. Instead, it
picks up all the request parameters and calls the Python client to send a
RESTful nova boot request to the cascaded Nova.
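
A hedged sketch of that forwarding step (illustrative only; the class and
method names are mine, not the tricircle PoC code):

    from novaclient import client as nova_client

    class NovaProxyDriver(object):
        # Stands in for a compute driver at the cascading layer:
        # everything scheduled to this proxy host is replayed against
        # the cascaded Nova's REST endpoint instead of libvirt.
        def __init__(self, child_auth_url, user, password, tenant):
            self.child = nova_client.Client("2", user, password, tenant,
                                            child_auth_url)

        def spawn(self, instance, image_id, flavor_id, nics):
            # Re-issue the boot as a plain "nova boot" against the child
            # POD; from here on the child runs its normal boot pipeline
            # asynchronously.
            return self.child.servers.create(
                name=instance["display_name"],
                image=image_id, flavor=flavor_id, nics=nics)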

How is the flavor synchronized to the cascaded Nova? The flavor is
synchronized to the cascaded Nova only if it does not exist there, or if it
was recently updated but not yet synchronized. Because the VM boot request has
already been answered after scheduling, everything the Nova proxy does is an
asynchronous operation, just like a VM booting on a host: it takes seconds to
minutes on a general host, but in cascading some API calls are made by the
Nova proxy to the cascaded Nova, or to the cascaded Cinder & Neutron.
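
A hedged sketch of that lazy flavor synchronization (names are illustrative;
the real PoC logic may differ):

    from novaclient import exceptions as nova_exc

    def ensure_flavor(child_nova, flavor):
        # Push the flavor down only when the cascaded Nova lacks it or
        # holds a stale copy, so the proxy's extra API calls stay cheap.
        try:
            existing = child_nova.flavors.get(flavor.id)
            if (existing.ram, existing.vcpus, existing.disk) == \
                    (flavor.ram, flavor.vcpus, flavor.disk):
                return existing                   # already in sync
            child_nova.flavors.delete(flavor.id)  # stale copy: recreate
        except nova_exc.NotFound:
            pass                                  # not yet synchronized
        return child_nova.flavors.create(flavor.name, flavor.ram,
                                         flavor.vcpus, flavor.disk,
                                         flavorid=flavor.id)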

I wrote a few blogs to explain some of this in detail, but I am too busy and
have not been able to write up everything we have done in the PoC. [1]

[1] blog about cascading:  http://www.linkedin.com/today/author/23841540

Best Regards

Chaoyi Huang



Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-23 Thread Adam Young

On 10/09/2014 03:36 PM, Duncan Thomas wrote:

On 9 October 2014 07:49, henry hly henry4...@gmail.com wrote:

Hi Joshua,

...in fact hierarchical scale
depends on square of single child scale. If a single child can deal
with 00's to 000's, cascading on it would then deal with 00,000's.

That is faulty logic - maybe the cascading solution needs to deal with
global quota and other aggregations that will rapidly break down your


There should not be global quota in a cascading deployment.  If I own a
cloud, I should manage my own quota.


Keystone needs to be able to merge the authorization data across 
multiple OpenStack instances.  I have a spec proposal for this:


https://review.openstack.org/#/c/123782/

There are many issues to be resolved due to the organic growth nature 
of OpenStack deployments.  We see a recurring pattern where people need 
to span across multiple deployments, and not just for Bursting.


Quota then becomes essential:  it is the way of limiting what a user can 
do in one deployment, separate from what they could do in a different 
one.  The quotas really reflect the contract between the user and the 
deployment.




scaling factor, or maybe there are few such problems and the cascade
part can scale way better than the underlying part. They are two
totally different scaling cases, so any suggestion that they are
anything other than an unknown multiplier is bogus.



Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-22 Thread Day, Phil
 -Original Message-
 From: henry hly [mailto:henry4...@gmail.com]
 Sent: 08 October 2014 09:16
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack
 cascading
 
 Hi,
 
 Good questions: why not just keep multiple endpoints, and leave the
 orchestration effort on the client side?

 From the feedback of some large data center operators, they want the cloud
 exposed to tenants as a single region with multiple AZs, while each AZ may be
 distributed in different/same locations, very similar to the AZ concept of
 AWS.  And the OpenStack API is indispensable for the cloud to be eco-system
 friendly.

 The cascading is mainly doing one thing: map each standalone child OpenStack
 to an AZ in the parent OpenStack, hide the separate child endpoints, and thus
 converge them into a single standard OS-API endpoint.

 One of the obvious benefits of doing so is the networking: we can create a
 single Router/LB, with subnet/port members from different children, just like
 in a single OpenStack instance.  Without the parent OpenStack working as the
 aggregation layer, it is not so easy to do so.  An explicit VPN endpoint may
 be required in each child.

I've read through the thread and the various links, and to me this still
sounds an awful lot like having multiple regions in Keystone.

First of all I think we're in danger of getting badly mixed up in terminology
here around AZs, which is an awfully overloaded term - esp when we make
comparisons to AWS AZs.  Whether we like the current OpenStack usage of these
terms or not, let's at least stick to how they are currently defined and used
in OpenStack:

AZs - A scheduling concept in Nova and Cinder.  Simply provides some
isolation semantic about a compute host or storage server.  Nothing to do
with explicit physical or geographical location, although some degree of that
(separate racks, power, etc) is usually implied.

Regions - A keystone concept for a collection of OpenStack endpoints.  They
may be distinct (a completely isolated set of OpenStack services) or overlap
(some shared services).  OpenStack clients support explicit user selection of
a region.

Cells - A scalability / fault-isolation concept within Nova.  Because Cells
aspires to provide all Nova features transparently across cells, this kind of
acts like multiple regions where only the Nova service is distinct
(Networking has to be common, Glance has to be common or at least federated
in a transparent way, etc.).  The difference from regions is that the user
doesn't have to make an explicit region choice - they get a single Nova URL
for all cells.  From what I remember, Cells originally started out also using
the existing APIs as the way to connect the cells together, but had to move
away from that because of the performance overhead of going through multiple
layers.



Now with Cascading it seems that we're pretty much building on the Regions
concept, wrapping it behind a single set of endpoints for user convenience,
overloading the term AZ to re-expose those sets of services and allow the user
to choose between them (doesn't this kind of negate the advantage of not
having to specify the region in the client - is that really such a big deal
for users?), and doing something to provide a sort of federated Neutron
service - because, as we all know, the hard part in all of this is how you
handle the Networking.

It kind of feels to me that if we just concentrated on the part of this that
is working out how to distribute/federate Neutron, then we'd have a solution
that could be mapped as easily to cells and/or regions - and I wonder then why
we really need yet another aggregation concept?

Phil


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-09 Thread henry hly
 those same
 existing inconsistencies also make cascading inconsistent (by the very nature
 of the cascading model just being a combination of connected components, aka
 your fractal), since it's typically very hard to create consistent & stable
 systems out of components that are themselves not consistent and stable...


 Best Regards

 Chaoyi Huang ( joehuang )
 
 From: Joshua Harlow [harlo...@outlook.com]
 Sent: 07 October 2014 12:21
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
 OpenStack cascading

 On Oct 3, 2014, at 2:44 PM, Monty Taylor mord...@inaugust.com wrote:

 On 09/30/2014 12:07 PM, Tim Bell wrote:
 -Original Message-
 From: John Garbutt [mailto:j...@johngarbutt.com]
 Sent: 30 September 2014 15:35
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
 OpenStack
 cascading

 On 30 September 2014 14:04, joehuang joehu...@huawei.com wrote:
 Hello, Dear TC and all,

 Large cloud operators prefer to deploy multiple OpenStack instances(as
 different zones), rather than a single monolithic OpenStack instance 
 because of
 these reasons:

 1) Multiple data centers distributed geographically;
 2) Multi-vendor business policy;
 3) Server nodes scale up modularized from 00's up to million;
 4) Fault and maintenance isolation between zones (only REST
 interface);

 At the same time, they also want to integrate these OpenStack instances 
 into
 one cloud. Instead of proprietary orchestration layer, they want to use 
 standard
 OpenStack framework for Northbound API compatibility with HEAT/Horizon or
 other 3rd ecosystem apps.

 We call this pattern as OpenStack Cascading, with proposal described by
 [1][2]. PoC live demo video can be found[3][4].

 Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in 
 the
 OpenStack cascading.

 Kindly ask for cross program design summit session to discuss OpenStack
 cascading and the contribution to Kilo.

 Kindly invite those who are interested in the OpenStack cascading to work
 together and contribute it to OpenStack.

 (I applied for “other projects” track [5], but it would be better to
 have a discussion as a formal cross program session, because many core
 programs are involved )


 [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
 [2] PoC source code: https://github.com/stackforge/tricircle
 [3] Live demo video at YouTube:
 https://www.youtube.com/watch?v=OSU6PYRz5qY
 [4] Live demo video at Youku (low quality, for those who can't access
 YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
 [5]
 http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395
 .html

 There are etherpads for suggesting cross project sessions here:
 https://wiki.openstack.org/wiki/Summit/Planning
 https://etherpad.openstack.org/p/kilo-crossproject-summit-topics

 I am interested in comparing this to Nova's cells concept:
 http://docs.openstack.org/trunk/config-reference/content/section_compute-
 cells.html

 Cells basically scales out a single datacenter region by aggregating 
 multiple child
 Nova installations with an API cell.

 Each child cell can be tested in isolation, via its own API, before 
 joining it up to
 an API cell, that adds it into the region. Each cell logically has its 
 own database
 and message queue, which helps get more independent failure domains. You 
 can
 use cell level scheduling to restrict people or types of instances to 
 particular
 subsets of the cloud, if required.

 It doesn't attempt to aggregate between regions, they are kept 
 independent.
 Except, the usual assumption that you have a common identity between all
 regions.

 It also keeps a single Cinder, Glance, Neutron deployment per region.

 It would be great to get some help hardening, testing, and building out 
 more of
 the cells vision. I suspect we may form a new Nova subteam to try and 
 drive
 this work forward in kilo, if we can build up enough people wanting to 
 work on
 improving cells.


 At CERN, we've deployed cells at scale but are finding a number of 
 architectural issues that need resolution in the short term to attain 
 feature parity. A vision of "we all run cells but some of us have only 
 one" is not there yet. Typical examples are flavors, security groups and 
 server groups, all of which are not yet implemented to the necessary 
 levels for cell parent/child.

 We would be very keen on agreeing the strategy in Paris so that we can 
 ensure the gap is closed, test it in the gate and that future features 
 cannot 'wishlist' cell support.

 I agree with this. I know that there are folks who don't like cells -
 but I think that ship has sailed. It's there - which means we need to
 make it first class.

 Just out of curiosity, would you prioritize cells over split out unified 
 quotas, or a split out scheduler

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-09 Thread Duncan Thomas
On 8 October 2014 10:32, joehuang joehu...@huawei.com wrote:
 "maybe we should just slap a REST api on it". The challenge of a Node-pool 
 REST API is what these APIs would look like: a totally new API, or the 
 current OS-API? From cloud operators' feedback, OS-API is preferred. If we 
 developed a totally new API for Node-pool, it would take a long time to grow 
 an API ecosystem or 3rd-party apps for it.

Oh, I don't think nodepool solves many of the problems being looked at
here - it is almost a side discussion - I just think that nodepool
would be way more useful with an API rather than requiring people to
install it themselves.



Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-09 Thread Duncan Thomas
On 9 October 2014 07:49, henry hly henry4...@gmail.com wrote:
 Hi Joshua,

 ...in fact hierarchical scale
 depends on square of single child scale. If a single child can deal
 with 00's to 000's, cascading on it would then deal with 00,000's.

That is faulty logic - maybe the cascading solution needs to deal with
global quota and other aggregations that will rapidly break down your
scaling factor, or maybe there are few such problems and the cascading
part can scale way better than the underlying part. They are two
totally different scaling cases, so any suggestion that they are
anything other than an unknown multiplier is bogus.



Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-09 Thread joehuang
Hello, Duncan,

You are right. It's not a simple multiplier game.

The performance or scalability bottleneck is mainly affected by two aspects: 
request concurrency to the cloud, and the volume of objects (including VMs, 
volumes, ports, etc.).

We can discuss this in a scenario: there are 1 million VMs in the cloud, and 
one cascading OpenStack manages 100 cascaded OpenStacks, so the request 
concurrency and data volume are distributed evenly among the cascaded 
OpenStacks. Let's suppose the request concurrency is 1000 TPS.

For the cascaded layer: TPS is 10, and the VM instance object table contains 
10K VMs (or more, since some deleted records remain there). It's much easier 
to install and tune performance at such a scale.

For the cascading layer: TPS is 1000, and the VM instance object table 
contains 1 million VMs (or more, since some deleted records remain there). In 
the cascading layer, only 100 proxy nodes need to be managed by the cascading 
OpenStack. If instead we scaled one OpenStack to manage a cloud of 1 million 
VMs, then supposing one compute node can run 20 VMs, there would be 50k 
compute nodes.
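
A back-of-the-envelope check of these numbers (using exactly the assumptions 
stated above):

    total_vms = 1000000            # VMs in the whole cloud
    total_tps = 1000               # request concurrency to the cloud
    pods = 100                     # cascaded OpenStacks
    vms_per_compute_node = 20

    print(total_tps / pods)                   # 10.0 TPS per cascaded layer
    print(total_vms // pods)                  # 10000 VM rows per cascaded DB
    print(total_vms // vms_per_compute_node)  # 50000 compute nodes if monolithic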

The challenge of scaling the cascading OpenStack is smaller than that of one 
normal OpenStack instance managing a cloud with 1 million VMs. The following 
performance advantages are obvious:

1. Reduced scheduling burden: Nova and Cinder schedule only by availability 
zone.
2. The Nova/Cinder host status and resource tracking tasks are much more 
lightweight.
3. Fewer temporary status updates to the DB. There are lots of internal state 
update messages during VM/volume creation. The number of exchanged messages / 
DB accesses for one VM/volume creation is greatly reduced by batch periodic 
polling of the stable VM/volume status from the cascaded OpenStack (see the 
sketch after this list).
4. Fewer nodes involved in L2 population and L3 DVR router updates. Because 
each L2/L3 proxy delegates to one cascaded OpenStack, and the VMs of one 
tenant/network will usually be confined to one, two, or three cascaded 
OpenStacks, the L2 population and L3 DVR population traffic is greatly 
reduced at the cascading level.
5. Ceilometer data is collected by distributed Ceilometers. As I mentioned, 
for a cloud at the 1-million-VM level, roughly 20GB/minute of data is 
estimated to be generated (based on the current sampling approach).
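
As a rough illustration of the batch polling in point 3 (a sketch under 
assumed helpers, not the PoC code: list_servers_from_child() stands in for 
one bulk REST query such as GET /servers/detail against a cascaded 
OpenStack), the proxy refreshes many instances with one query and only writes 
back the rows whose status actually changed:

    import time

    cascading_view = {}  # logical uuid -> last known stable status

    def list_servers_from_child():
        # Placeholder for one bulk REST call to the cascaded OpenStack.
        return [("uuid-1", "ACTIVE"), ("uuid-2", "BUILD")]

    def poll_once():
        changed = []
        for uuid, status in list_servers_from_child():
            if cascading_view.get(uuid) != status:
                cascading_view[uuid] = status
                changed.append(uuid)
        return changed  # only these rows need DB writes / notifications

    def poll_forever(interval_s=30):
        while True:
            poll_once()
            time.sleep(interval_s)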

But the performance enhancement from cascading is not a simple multiplier on 
a cascaded OpenStack's scalability, although it provides an easier way to 
scale a cloud.

That's because the concurrency and latency of DB queries do not easily grow 
linearly with the resources put in, even if we split DBs and tables. For big 
tables, an RDBMS cannot achieve very good CRUD performance. In addition, 
concurrency is heavily limited by the ceiling of the message bus.

Therefore, as we have discussed, the scalability of one OpenStack instance is 
always necessary, and it is the foundation for OpenStack cascading.

Best Regards
Chaoyi Huang ( Joe Huang )


-Original Message-
From: Duncan Thomas [mailto:duncan.tho...@gmail.com] 
Sent: Friday, October 10, 2014 3:37 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading

On 9 October 2014 07:49, henry hly henry4...@gmail.com wrote:
 Hi Joshua,

 ...in fact hierarchical scale
 depends on square of single child scale. If a single child can deal 
 with 00's to 000's, cascading on it would then deal with 00,000's.

That is faulty logic - maybe the cascading solution needs to deal with global 
quota and other aggregations that will rapidly break down your scaling factor, 
or maybe there are few such problems and the cascading part can scale way 
better than the underlying part. They are two totally different scaling cases, 
so any suggestion that they are anything other than an unknown multiplier is 
bogus.



Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-08 Thread henry hly
Hi,

Good questions: why not just keep multiple endpoints, and leave the
orchestration effort on the client side?

From the feedback of some large data center operators, they want the cloud
exposed to tenants as a single region with multiple AZs, where each AZ may
be located in the same or a different location - very similar to the AZ
concept of AWS. And the OpenStack API is indispensable for the cloud to be
ecosystem-friendly.

Cascading mainly does one thing: map each standalone child OpenStack to an
AZ in the parent OpenStack and hide the separate child endpoints, thus
converging them into a single standard OS-API endpoint.
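
A minimal sketch of that mapping (the AZ names and endpoint URLs here are
invented; in the real design the mapping lives in the per-POD proxies):

    # Each AZ exposed by the parent OpenStack corresponds to exactly one
    # child OpenStack endpoint.
    CHILD_ENDPOINTS = {
        "az-dc1-pod1": "https://child1.dc1.example.com:8774/v2",
        "az-dc1-pod2": "https://child2.dc1.example.com:8774/v2",
        "az-dc2-pod1": "https://child1.dc2.example.com:8774/v2",
    }

    def route(availability_zone):
        # The proxy forwards the user's standard OS-API request to the
        # child that backs the chosen AZ.
        try:
            return CHILD_ENDPOINTS[availability_zone]
        except KeyError:
            raise ValueError("unknown AZ: %s" % availability_zone)

    print(route("az-dc1-pod2"))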

One of the obvious benefits of doing so is networking: we can create a
single router/LB with subnet/port members from different children, just as
in a single OpenStack instance. Without the parent OpenStack working as the
aggregation layer this is not so easy; an explicit VPN endpoint may be
required in each child.

Best Regards
Wu Hongning

On Tue, Oct 7, 2014 at 11:30 PM, Monty Taylor mord...@inaugust.com wrote:
 On 10/07/2014 06:41 AM, Duncan Thomas wrote:
 My data consistency concerns would be around:

 1) Defining global state. You can of course hand wave away a lot of
 your issues by saying they are all local to the sub-unit, but then
 what benefit are you providing .v. just providing a list of endpoints
 and teaching the clients to talk to multiple endpoints, which is far
 easier to make reliable than a new service generally is. State that
 'ought' to be global: quota, usage, floating ips, cinder backups, and
 probably a bunch more

 BTW - since infra regularly talks to multiple clouds, I've been working
 on splitting supporting code for that into a couple of libraries. Next
 pass is to go add support for it to the clients, and it's not really a
 lot of work ... so let's assume that the vs. here is going to be
 accomplished soonish for the purposes of assessing the above question.

 Second BTW - you're certainly right about the first two in the global
 list - we keep track of quota and usage ourselves inside of nodepool.
 Actually - since nodepool already does a bunch of these things - maybe
 we should just slap a REST api on it...

 2) Data locality expectations. You have to be careful about what
 expectations .v. reality you're providing here. If the user experience
 is substantially different using your proxy .v. direct API, then I
 don't think you are providing a useful service - again, just teach the
 clients to be multi-cloud aware. This includes what can be connected
 to what (cinder volumes, snaps, backups, networks, etc), replication
 behaviours and speeds (swift) and probably a bunch more that I haven't
 thought of yet.



 On 7 October 2014 14:24, joehuang joehu...@huawei.com wrote:
 Hello, Joshua,

 Thank you for your concerns about OpenStack cascading. I am afraid that I am 
 not the proper person to comment on cells, but I would like to say a little 
 about cascading, since you mentioned "with its own set of consistency warts 
 I'm sure".

 1. For small scale, or a cloud within one data center, one OpenStack 
 instance (including cells) without the cascading feature can work just as it 
 works today. OpenStack cascading just introduces Nova-proxy, Cinder-proxy, 
 L2/L3 proxy... like other vendor-specific agents/drivers (for example, the 
 vcenter driver, hyper-v driver, linux-agent, ovs-agent), and does not change 
 the current architecture of Nova/Cinder/Neutron..., and does not affect 
 already developed features or deployment capability. Cloud operators can 
 ignore the existence of OpenStack cascading if they don't want to use it, 
 just as they can ignore some kinds of hypervisor / SDN controller.

 2. Could you give concrete inconsistency issues you are worried about in 
 OpenStack cascading? Although we did not completely implement inconsistency 
 checking in the PoC source code, the logical VM/Volume/Port/Network... 
 objects are stored in the cascading OpenStack, the physical objects are 
 stored in the cascaded OpenStacks, and a uuid mapping between each logical 
 object and its physical object has been built, so it's possible and easy to 
 solve the inconsistency issues. Even for flavors and host aggregates, we 
 have a method to solve the inconsistency issue.

 Best Regards

 Chaoyi Huang ( joehuang )
 
 From: Joshua Harlow [harlo...@outlook.com]
 Sent: 07 October 2014 12:21
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
 OpenStack cascading

 On Oct 3, 2014, at 2:44 PM, Monty Taylor mord...@inaugust.com wrote:

 On 09/30/2014 12:07 PM, Tim Bell wrote:
 -Original Message-
 From: John Garbutt [mailto:j...@johngarbutt.com]
 Sent: 30 September 2014 15:35
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
 OpenStack
 cascading

 On 30 September

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-08 Thread joehuang
Hello, Duncan and Monty,

The discussion is more and more concrete, very good.

"maybe we should just slap a REST api on it". The challenge of a Node-pool 
REST API is what these APIs would look like: a totally new API, or the current 
OS-API? From cloud operators' feedback, OS-API is preferred. If we developed a 
totally new API for Node-pool, it would take a long time to grow an API 
ecosystem or 3rd-party apps for it.

Best Regards
Chaoyi Huang ( joehuang )

-Original Message-
From: Duncan Thomas [mailto:duncan.tho...@gmail.com] 
Sent: Tuesday, October 07, 2014 11:44 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading

On 7 October 2014 16:30, Monty Taylor mord...@inaugust.com wrote:

 Second BTW - you're certainly right about the first two in the global 
 list - we keep track of quota and usage ourselves inside of nodepool.
 Actually - since nodepool already does a bunch of these things - maybe 
 we should just slap a REST api on it...

Whether or not it does much for this use-case, Nodepool-aaS would definitely be 
useful I think, particularly if it was properly multi-tenant. It isn't /hard/ 
to set up, but it's effort and yet another cog to understand and debug.



Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-08 Thread joehuang
Hello, Duncan, 

The design goal is to keep the user experience under cascading the same as 
using one OpenStack instance.


Locality objects: VM (with availability zone attribute), Volume (with 
availability zone attribute), VLAN network, Port (follows the VM's vif)

 
Global objects: Quota, Usage, all Keystone objects, Router, LB, SNAT, FIP, 
FW, VPN, Image (a public image will be available globally; a project-wide 
image will be global to wherever the project spreads)


Dependent on deployment policy: VxLAN network (could be global or local), 
Backup (depends on whether backups go to a local or a global Swift), Volume 
snapshot (most deployments will store snapshots locally; if a snapshot is 
uploaded to Glance, it becomes a global image - see the image part above). A 
classification sketch follows below.
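
Put as a table in code, the classification might look like this (illustrative 
only; "deployment" marks the cases above whose scope is a deployment-policy 
decision):

    RESOURCE_SCOPE = {
        # locality objects: created in one cascaded OpenStack
        "vm": "local",
        "volume": "local",
        "vlan_network": "local",
        "port": "local",            # follows the VM's vif
        # global objects: owned by the cascading layer
        "quota": "global",
        "usage": "global",
        "router": "global",
        "loadbalancer": "global",
        "floating_ip": "global",
        # scope decided per deployment
        "vxlan_network": "deployment",
        "backup": "deployment",
        "volume_snapshot": "deployment",
    }

    def global_state(resource_type):
        return RESOURCE_SCOPE.get(resource_type) == "global"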


Of course, we have only done the PoC; maybe there are still some unknown 
challenges. Whenever a new issue comes up, we are able to find a way to solve 
it through the recursive self-similar mechanism (please refer to 
https://www.linkedin.com/pulse/article/20140729022031-23841540-openstack-cascading-and-fractal?trk=prof-post
 )


Best Regards
Chaoyi Huang ( Joe Huang )

-Original Message-
From: Duncan Thomas [mailto:duncan.tho...@gmail.com] 
Sent: Tuesday, October 07, 2014 9:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading

My data consistency concerns would be around:

1) Defining global state. You can of course hand wave away a lot of your issues 
by saying they are all local to the sub-unit, but then what benefit are you 
providing .v. just providing a list of endpoints and teaching the clients to 
talk to multiple endpoints, which is far easier to make reliable than a new 
service generally is. State that 'ought' to be global: quota, usage, floating 
ips, cinder backups, and probably a bunch more

2) Data locality expectations. You have to be careful about what expectations 
.v. reality you're providing here. If the user experience is substantially 
different using your proxy .v. direct API, then I don't think you are providing 
a useful service - again, just teach the clients to be multi-cloud aware. This 
includes what can be connected to what (cinder volumes, snaps, backups, 
networks, etc), replication behaviours and speeds (swift) and probably a bunch 
more that I haven't thought of yet.



On 7 October 2014 14:24, joehuang joehu...@huawei.com wrote:
 Hello, Joshua,

 Thank you for your concerns about OpenStack cascading. I am afraid that I am 
 not the proper person to comment on cells, but I would like to say a little 
 about cascading, since you mentioned "with its own set of consistency warts 
 I'm sure".

 1. For small scale, or a cloud within one data center, one OpenStack instance 
 (including cells) without the cascading feature can work just as it works 
 today. OpenStack cascading just introduces Nova-proxy, Cinder-proxy, L2/L3 
 proxy... like other vendor-specific agents/drivers (for example, the vcenter 
 driver, hyper-v driver, linux-agent, ovs-agent), and does not change the 
 current architecture of Nova/Cinder/Neutron..., and does not affect already 
 developed features or deployment capability. Cloud operators can ignore the 
 existence of OpenStack cascading if they don't want to use it, just as they 
 can ignore some kinds of hypervisor / SDN controller.

 2. Could you give concrete inconsistency issues you are worried about in 
 OpenStack cascading? Although we did not completely implement inconsistency 
 checking in the PoC source code, the logical VM/Volume/Port/Network... 
 objects are stored in the cascading OpenStack, the physical objects are 
 stored in the cascaded OpenStacks, and a uuid mapping between each logical 
 object and its physical object has been built, so it's possible and easy to 
 solve the inconsistency issues. Even for flavors and host aggregates, we have 
 a method to solve the inconsistency issue.

 Best Regards

 Chaoyi Huang ( joehuang )
 
 From: Joshua Harlow [harlo...@outlook.com]
 Sent: 07 October 2014 12:21
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
 OpenStack cascading

 On Oct 3, 2014, at 2:44 PM, Monty Taylor mord...@inaugust.com wrote:

 On 09/30/2014 12:07 PM, Tim Bell wrote:
 -Original Message-
 From: John Garbutt [mailto:j...@johngarbutt.com]
 Sent: 30 September 2014 15:35
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
 OpenStack cascading

 On 30 September 2014 14:04, joehuang joehu...@huawei.com wrote:
 Hello, Dear TC and all,

 Large cloud operators prefer to deploy multiple OpenStack 
 instances(as
 different zones), rather than a single monolithic OpenStack 
 instance because of these reasons:

 1) Multiple data centers distributed

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-08 Thread Joshua Harlow
On Oct 7, 2014, at 6:24 AM, joehuang joehu...@huawei.com wrote:

 Hello, Joshua, 
 
 Thank you for your concerns about OpenStack cascading. I am afraid that I am 
 not the proper person to comment on cells, but I would like to say a little 
 about cascading, since you mentioned "with its own set of consistency warts 
 I'm sure".

 1. For small scale, or a cloud within one data center, one OpenStack instance 
 (including cells) without the cascading feature can work just as it works 
 today. OpenStack cascading just introduces Nova-proxy, Cinder-proxy, L2/L3 
 proxy... like other vendor-specific agents/drivers (for example, the vcenter 
 driver, hyper-v driver, linux-agent, ovs-agent), and does not change the 
 current architecture of Nova/Cinder/Neutron..., and does not affect already 
 developed features or deployment capability. Cloud operators can ignore the 
 existence of OpenStack cascading if they don't want to use it, just as they 
 can ignore some kinds of hypervisor / SDN controller.

Sure, I understand the niceness that you can just connect clouds into other 
clouds and so on (the prettiness of the fractal that results from this). 
That's a neat approach and it's cool that openstack can do this (so +1 for 
that). The bigger question I have, though, is around 'should we' do this. This 
introduces a bunch of proxies that, from what I can tell, are just making it 
so that nova, cinder, neutron can scale by plugging more little cascading 
components together. This kind of connecting them together is very much what I 
guess could be called an 'external' scaling mechanism, one that plugs into the 
external APIs of one service from the internals of another (and repeat). The 
question I have is why an 'external' solution in the first place - why not 
just work on scaling the projects internally first, and when that ends up not 
being good enough, switch to an 'external' scaling solution. Let's take an 
analogy: your queries to mysql are acting slow - do you first add in X more 
mysql servers, or do you instead try to tune your existing mysql server and 
queries before scaling out? I just want to make sure we are not prematurely 
adding in X more layers when we can gain scalability in a more solvable & 
manageable manner first...

 
 2. Could you give concrete inconsistency issues you are worried about in 
 OpenStack cascading? Although we did not completely implement inconsistency 
 checking in the PoC source code, the logical VM/Volume/Port/Network... 
 objects are stored in the cascading OpenStack, the physical objects are 
 stored in the cascaded OpenStacks, and a uuid mapping between each logical 
 object and its physical object has been built, so it's possible and easy to 
 solve the inconsistency issues. Even for flavors and host aggregates, we have 
 a method to solve the inconsistency issue.

When you add more levels/layers, by the very nature of adding in those levels 
the number of potential failure points increases (there is probably a theorem 
or proof somewhere in the literature about this). If you want to see 
inconsistencies that already exist, just watch the gate issues and bugs and 
so on for a while; you will eventually see why it may not be the right time to 
add in more potential failure points instead of fixing the existing failure 
points we already have. I (and I think others) would rather see effort focused 
on those existing failure points vs. adding a set of new ones (make what 
exists reliable and scalable *first*, then move on to scaling things out via 
something like cascading, cells, other...). Overall, those same existing 
inconsistencies also make cascading inconsistent (by the very nature of the 
cascading model just being a combination of connected components, aka your 
fractal), since it's typically very hard to create consistent & stable systems 
out of components that are themselves not consistent and stable...

 
 Best Regards
 
 Chaoyi Huang ( joehuang )
 
 From: Joshua Harlow [harlo...@outlook.com]
 Sent: 07 October 2014 12:21
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
 OpenStack cascading
 
 On Oct 3, 2014, at 2:44 PM, Monty Taylor mord...@inaugust.com wrote:
 
 On 09/30/2014 12:07 PM, Tim Bell wrote:
 -Original Message-
 From: John Garbutt [mailto:j...@johngarbutt.com]
 Sent: 30 September 2014 15:35
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
 OpenStack
 cascading
 
 On 30 September 2014 14:04, joehuang joehu...@huawei.com wrote:
 Hello, Dear TC and all,
 
 Large cloud operators prefer to deploy multiple OpenStack instances(as
 different zones), rather than a single monolithic OpenStack instance 
 because of
 these reasons:
 
 1) Multiple data centers distributed geographically;
 2) Multi-vendor business policy;
 3) Server nodes scale up

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-07 Thread joehuang
Hello, Joshua, 

Thank you for your concerns about OpenStack cascading. I am afraid that I am 
not the proper person to comment on cells, but I would like to say a little 
about cascading, since you mentioned "with its own set of consistency warts 
I'm sure".

1. For small scale, or a cloud within one data center, one OpenStack instance 
(including cells) without the cascading feature can work just as it works 
today. OpenStack cascading just introduces Nova-proxy, Cinder-proxy, L2/L3 
proxy... like other vendor-specific agents/drivers (for example, the vcenter 
driver, hyper-v driver, linux-agent, ovs-agent), and does not change the 
current architecture of Nova/Cinder/Neutron..., and does not affect already 
developed features or deployment capability. Cloud operators can ignore the 
existence of OpenStack cascading if they don't want to use it, just as they 
can ignore some kinds of hypervisor / SDN controller.

2. Could you give concrete inconsistency issues you are worried about in 
OpenStack cascading? Although we did not completely implement inconsistency 
checking in the PoC source code, the logical VM/Volume/Port/Network... objects 
are stored in the cascading OpenStack, the physical objects are stored in the 
cascaded OpenStacks, and a uuid mapping between each logical object and its 
physical object has been built, so it's possible and easy to solve the 
inconsistency issues. Even for flavors and host aggregates, we have a method 
to solve the inconsistency issue.
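
To make the uuid-mapping point concrete, here is a toy sketch (not the actual 
Tricircle PoC code; the uuids, POD ids, and lookup callables are placeholders) 
of how such a mapping enables a consistency check:

    # logical uuid in the cascading OpenStack
    #   -> (cascaded OpenStack id, physical uuid there)
    UUID_MAP = {
        "log-vm-001": ("pod-3", "phy-vm-9f2"),
        "log-vm-002": ("pod-7", "phy-vm-c41"),
    }

    def consistent(logical_uuid, get_logical_status, get_physical_status):
        # The two getters stand in for a cascading-DB lookup and a REST
        # query against the cascaded OpenStack, respectively.
        pod, physical_uuid = UUID_MAP[logical_uuid]
        return (get_logical_status(logical_uuid)
                == get_physical_status(pod, physical_uuid))

    # With stubbed lookups:
    print(consistent("log-vm-001",
                     lambda u: "ACTIVE",
                     lambda pod, u: "ACTIVE"))  # True -> no reconciliation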

Best Regards

Chaoyi Huang ( joehuang )

From: Joshua Harlow [harlo...@outlook.com]
Sent: 07 October 2014 12:21
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
OpenStack cascading

On Oct 3, 2014, at 2:44 PM, Monty Taylor mord...@inaugust.com wrote:

 On 09/30/2014 12:07 PM, Tim Bell wrote:
 -Original Message-
 From: John Garbutt [mailto:j...@johngarbutt.com]
 Sent: 30 September 2014 15:35
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
 OpenStack
 cascading

 On 30 September 2014 14:04, joehuang joehu...@huawei.com wrote:
 Hello, Dear TC and all,

 Large cloud operators prefer to deploy multiple OpenStack instances(as
 different zones), rather than a single monolithic OpenStack instance 
 because of
 these reasons:

 1) Multiple data centers distributed geographically;
 2) Multi-vendor business policy;
 3) Server nodes scale up modularized from 00's up to million;
 4) Fault and maintenance isolation between zones (only REST
 interface);

 At the same time, they also want to integrate these OpenStack instances 
 into
 one cloud. Instead of proprietary orchestration layer, they want to use 
 standard
 OpenStack framework for Northbound API compatibility with HEAT/Horizon or
 other 3rd ecosystem apps.

 We call this pattern as OpenStack Cascading, with proposal described by
 [1][2]. PoC live demo video can be found[3][4].

 Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the
 OpenStack cascading.

 Kindly ask for cross program design summit session to discuss OpenStack
 cascading and the contribution to Kilo.

 Kindly invite those who are interested in the OpenStack cascading to work
 together and contribute it to OpenStack.

 (I applied for “other projects” track [5], but it would be better to
 have a discussion as a formal cross program session, because many core
 programs are involved )


 [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
 [2] PoC source code: https://github.com/stackforge/tricircle
 [3] Live demo video at YouTube:
 https://www.youtube.com/watch?v=OSU6PYRz5qY
 [4] Live demo video at Youku (low quality, for those who can't access
 YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
 [5]
 http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395
 .html

 There are etherpads for suggesting cross project sessions here:
 https://wiki.openstack.org/wiki/Summit/Planning
 https://etherpad.openstack.org/p/kilo-crossproject-summit-topics

 I am interested in comparing this to Nova's cells concept:
 http://docs.openstack.org/trunk/config-reference/content/section_compute-
 cells.html

 Cells basically scales out a single datacenter region by aggregating 
 multiple child
 Nova installations with an API cell.

 Each child cell can be tested in isolation, via its own API, before joining 
 it up to
 an API cell, that adds it into the region. Each cell logically has its own 
 database
 and message queue, which helps get more independent failure domains. You can
 use cell level scheduling to restrict people or types of instances to 
 particular
 subsets of the cloud, if required.

 It doesn't attempt to aggregate between regions, they are kept independent.
 Except, the usual assumption that you have a common identity between all
 regions.

 It also

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-07 Thread Duncan Thomas
My data consistency concerns would be around:

1) Defining global state. You can of course hand wave away a lot of
your issues by saying they are all local to the sub-unit, but then
what benefit are you providing .v. just providing a list of endpoints
and teaching the clients to talk to multiple endpoints, which is far
easier to make reliable than a new service generally is. State that
'ought' to be global: quota, usage, floating ips, cinder backups, and
probably a bunch more
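
To see why global quota in particular is hard above N sub-units: the layer
on top has to aggregate usage across all of them before admitting anything,
roughly like this (an invented sketch, with made-up numbers):

    CHILD_USAGE = {"child-a": 40, "child-b": 35, "child-c": 20}

    def admit(requested_cores, global_quota=100):
        # A global quota check must read usage from every child, and
        # between this read and the actual allocation another request
        # can slip in - global state needs real coordination.
        total = sum(CHILD_USAGE.values())
        return total + requested_cores <= global_quota

    print(admit(4))   # True:  95 + 4  <= 100
    print(admit(10))  # False: 95 + 10 >  100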

2) Data locality expectations. You have to be careful about what
expectations .v. reality you're providing here. If the user experience
is substantially different using your proxy .v. direct API, then I
don't think you are providing a useful service - again, just teach the
clients to be multi-cloud aware. This includes what can be connected
to what (cinder volumes, snaps, backups, networks, etc), replication
behaviours and speeds (swift) and probably a bunch more that I haven't
thought of yet.



On 7 October 2014 14:24, joehuang joehu...@huawei.com wrote:
 Hello, Joshua,

 Thank you for your concerns about OpenStack cascading. I am afraid that I am 
 not the proper person to comment on cells, but I would like to say a little 
 about cascading, since you mentioned "with its own set of consistency warts 
 I'm sure".

 1. For small scale, or a cloud within one data center, one OpenStack instance 
 (including cells) without the cascading feature can work just as it works 
 today. OpenStack cascading just introduces Nova-proxy, Cinder-proxy, L2/L3 
 proxy... like other vendor-specific agents/drivers (for example, the vcenter 
 driver, hyper-v driver, linux-agent, ovs-agent), and does not change the 
 current architecture of Nova/Cinder/Neutron..., and does not affect already 
 developed features or deployment capability. Cloud operators can ignore the 
 existence of OpenStack cascading if they don't want to use it, just as they 
 can ignore some kinds of hypervisor / SDN controller.

 2. Could you give concrete inconsistency issues you are worried about in 
 OpenStack cascading? Although we did not completely implement inconsistency 
 checking in the PoC source code, the logical VM/Volume/Port/Network... 
 objects are stored in the cascading OpenStack, the physical objects are 
 stored in the cascaded OpenStacks, and a uuid mapping between each logical 
 object and its physical object has been built, so it's possible and easy to 
 solve the inconsistency issues. Even for flavors and host aggregates, we have 
 a method to solve the inconsistency issue.

 Best Regards

 Chaoyi Huang ( joehuang )
 
 From: Joshua Harlow [harlo...@outlook.com]
 Sent: 07 October 2014 12:21
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
 OpenStack cascading

 On Oct 3, 2014, at 2:44 PM, Monty Taylor mord...@inaugust.com wrote:

 On 09/30/2014 12:07 PM, Tim Bell wrote:
 -Original Message-
 From: John Garbutt [mailto:j...@johngarbutt.com]
 Sent: 30 September 2014 15:35
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
 OpenStack
 cascading

 On 30 September 2014 14:04, joehuang joehu...@huawei.com wrote:
 Hello, Dear TC and all,

 Large cloud operators prefer to deploy multiple OpenStack instances(as
 different zones), rather than a single monolithic OpenStack instance 
 because of
 these reasons:

 1) Multiple data centers distributed geographically;
 2) Multi-vendor business policy;
 3) Server nodes scale up modularized from 00's up to million;
 4) Fault and maintenance isolation between zones (only REST
 interface);

 At the same time, they also want to integrate these OpenStack instances 
 into
 one cloud. Instead of proprietary orchestration layer, they want to use 
 standard
 OpenStack framework for Northbound API compatibility with HEAT/Horizon or
 other 3rd ecosystem apps.

 We call this pattern as OpenStack Cascading, with proposal described by
 [1][2]. PoC live demo video can be found[3][4].

 Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in 
 the
 OpenStack cascading.

 Kindly ask for cross program design summit session to discuss OpenStack
 cascading and the contribution to Kilo.

 Kindly invite those who are interested in the OpenStack cascading to work
 together and contribute it to OpenStack.

 (I applied for “other projects” track [5], but it would be better to
 have a discussion as a formal cross program session, because many core
 programs are involved )


 [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
 [2] PoC source code: https://github.com/stackforge/tricircle
 [3] Live demo video at YouTube:
 https://www.youtube.com/watch?v=OSU6PYRz5qY
 [4] Live demo video at Youku (low quality, for those who can't access
 YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
 [5]
 http

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-07 Thread Monty Taylor
On 10/07/2014 06:41 AM, Duncan Thomas wrote:
 My data consistency concerns would be around:
 
 1) Defining global state. You can of course hand wave away a lot of
 your issues by saying they are all local to the sub-unit, but then
 what benefit are you providing .v. just providing a list of endpoints
 and teaching the clients to talk to multiple endpoints, which is far
 easier to make reliable than a new service generally is. State that
 'ought' to be global: quota, usage, floating ips, cinder backups, and
 probably a bunch more

BTW - since infra regularly talks to multiple clouds, I've been working
on splitting supporting code for that into a couple of libraries. Next
pass is to go add support for it to the clients, and it's not really a
lot of work ... so let's assume that the vs. here is going to be
accomplished soonish for the purposes of assessing the above question.

Second BTW - you're certainly right about the first two in the global
list - we keep track of quota and usage ourselves inside of nodepool.
Actually - since nodepool already does a bunch of these things - maybe
we should just slap a REST api on it...
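
For flavour, an invented stdlib-only sketch of what "slap a REST api on it"
could look like for just the quota/usage piece (nothing like this exists in
nodepool today; the state and port are made up):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    STATE = {"quota": {"instances": 100}, "usage": {"instances": 42}}

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            key = self.path.strip("/")  # e.g. GET /usage
            if key not in STATE:
                self.send_response(404)
                self.end_headers()
                return
            body = json.dumps(STATE[key]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()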

 2) Data locality expectations. You have to be careful about what
 expectations .v. reality you're providing here. If the user experience
 is substantially different using your proxy .v. direct API, then I
 don't think you are providing a useful service - again, just teach the
 clients to be multi-cloud aware. This includes what can be connected
 to what (cinder volumes, snaps, backups, networks, etc), replication
 behaviours and speeds (swift) and probably a bunch more that I haven't
 thought of yet.
 
 
 
 On 7 October 2014 14:24, joehuang joehu...@huawei.com wrote:
 Hello, Joshua,

 Thank you for your concerns about OpenStack cascading. I am afraid that I am 
 not the proper person to comment on cells, but I would like to say a little 
 about cascading, since you mentioned "with its own set of consistency warts 
 I'm sure".

 1. For small scale, or a cloud within one data center, one OpenStack instance 
 (including cells) without the cascading feature can work just as it works 
 today. OpenStack cascading just introduces Nova-proxy, Cinder-proxy, L2/L3 
 proxy... like other vendor-specific agents/drivers (for example, the vcenter 
 driver, hyper-v driver, linux-agent, ovs-agent), and does not change the 
 current architecture of Nova/Cinder/Neutron..., and does not affect already 
 developed features or deployment capability. Cloud operators can ignore the 
 existence of OpenStack cascading if they don't want to use it, just as they 
 can ignore some kinds of hypervisor / SDN controller.

 2. Could you give concrete inconsistency issues you are worried about in 
 OpenStack cascading? Although we did not completely implement inconsistency 
 checking in the PoC source code, the logical VM/Volume/Port/Network... 
 objects are stored in the cascading OpenStack, the physical objects are 
 stored in the cascaded OpenStacks, and a uuid mapping between each logical 
 object and its physical object has been built, so it's possible and easy to 
 solve the inconsistency issues. Even for flavors and host aggregates, we have 
 a method to solve the inconsistency issue.

 Best Regards

 Chaoyi Huang ( joehuang )
 
 From: Joshua Harlow [harlo...@outlook.com]
 Sent: 07 October 2014 12:21
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
 OpenStack cascading

 On Oct 3, 2014, at 2:44 PM, Monty Taylor mord...@inaugust.com wrote:

 On 09/30/2014 12:07 PM, Tim Bell wrote:
 -Original Message-
 From: John Garbutt [mailto:j...@johngarbutt.com]
 Sent: 30 September 2014 15:35
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
 OpenStack
 cascading

 On 30 September 2014 14:04, joehuang joehu...@huawei.com wrote:
 Hello, Dear TC and all,

 Large cloud operators prefer to deploy multiple OpenStack instances(as
 different zones), rather than a single monolithic OpenStack instance 
 because of
 these reasons:

 1) Multiple data centers distributed geographically;
 2) Multi-vendor business policy;
 3) Server nodes scale up modularized from 00's up to million;
 4) Fault and maintenance isolation between zones (only REST
 interface);

 At the same time, they also want to integrate these OpenStack instances 
 into
 one cloud. Instead of proprietary orchestration layer, they want to use 
 standard
 OpenStack framework for Northbound API compatibility with HEAT/Horizon or
 other 3rd ecosystem apps.

 We call this pattern as OpenStack Cascading, with proposal described by
 [1][2]. PoC live demo video can be found[3][4].

 Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in 
 the
 OpenStack cascading.

 Kindly ask for cross program design summit session to discuss OpenStack
 cascading

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-07 Thread Duncan Thomas
On 7 October 2014 16:30, Monty Taylor mord...@inaugust.com wrote:

 Second BTW - you're certainly right about the first two in the global
 list - we keep track of quota and usage ourselves inside of nodepool.
 Actually - since nodepool already does a bunch of these things - maybe
 we should just slap a REST api on it...

Whether or not it does much for this use-case, Nodepool-aaS would
definitely be useful I think, particularly if it was properly
multi-tenant. It isn't /hard/ to set up, but it's effort and yet
another cog to understand and debug.



Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-06 Thread Joshua Harlow
On Oct 3, 2014, at 2:44 PM, Monty Taylor mord...@inaugust.com wrote:

 On 09/30/2014 12:07 PM, Tim Bell wrote:
 -Original Message-
 From: John Garbutt [mailto:j...@johngarbutt.com]
 Sent: 30 September 2014 15:35
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
 OpenStack
 cascading
 
 On 30 September 2014 14:04, joehuang joehu...@huawei.com wrote:
 Hello, Dear TC and all,
 
 Large cloud operators prefer to deploy multiple OpenStack instances(as
 different zones), rather than a single monolithic OpenStack instance 
 because of
 these reasons:
 
 1) Multiple data centers distributed geographically;
 2) Multi-vendor business policy;
 3) Server nodes scale up modularized from 00's up to million;
 4) Fault and maintenance isolation between zones (only REST
 interface);
 
 At the same time, they also want to integrate these OpenStack instances 
 into
 one cloud. Instead of proprietary orchestration layer, they want to use 
 standard
 OpenStack framework for Northbound API compatibility with HEAT/Horizon or
 other 3rd ecosystem apps.
 
 We call this pattern as OpenStack Cascading, with proposal described by
 [1][2]. PoC live demo video can be found[3][4].
 
 Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the
 OpenStack cascading.
 
 Kindly ask for cross program design summit session to discuss OpenStack
 cascading and the contribution to Kilo.
 
 Kindly invite those who are interested in the OpenStack cascading to work
 together and contribute it to OpenStack.
 
 (I applied for “other projects” track [5], but it would be better to
 have a discussion as a formal cross program session, because many core
 programs are involved )
 
 
 [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
 [2] PoC source code: https://github.com/stackforge/tricircle
 [3] Live demo video at YouTube:
 https://www.youtube.com/watch?v=OSU6PYRz5qY
 [4] Live demo video at Youku (low quality, for those who can't access
 YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
 [5]
 http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395
 .html
 
 There are etherpads for suggesting cross project sessions here:
 https://wiki.openstack.org/wiki/Summit/Planning
 https://etherpad.openstack.org/p/kilo-crossproject-summit-topics
 
 I am interested in comparing this to Nova's cells concept:
 http://docs.openstack.org/trunk/config-reference/content/section_compute-
 cells.html
 
 Cells basically scales out a single datacenter region by aggregating 
 multiple child
 Nova installations with an API cell.
 
 Each child cell can be tested in isolation, via its own API, before joining 
 it up to
 an API cell, that adds it into the region. Each cell logically has its own 
 database
 and message queue, which helps get more independent failure domains. You can
 use cell level scheduling to restrict people or types of instances to 
 particular
 subsets of the cloud, if required.
 
 It doesn't attempt to aggregate between regions, they are kept independent.
 Except, the usual assumption that you have a common identity between all
 regions.
 
 It also keeps a single Cinder, Glance, Neutron deployment per region.
 
 It would be great to get some help hardening, testing, and building out 
 more of
 the cells vision. I suspect we may form a new Nova subteam to try and 
 drive
 this work forward in kilo, if we can build up enough people wanting to work 
 on
 improving cells.
 
 
 At CERN, we've deployed cells at scale but are finding a number of 
 architectural issues that need resolution in the short term to attain 
 feature parity. A vision of "we all run cells but some of us have only one" 
 is not there yet. Typical examples are flavors, security groups and server 
 groups, all of which are not yet implemented to the necessary levels for 
 cell parent/child.
 
 We would be very keen on agreeing the strategy in Paris so that we can 
 ensure the gap is closed, test it in the gate and that future features 
 cannot 'wishlist' cell support.
 
 I agree with this. I know that there are folks who don't like cells -
 but I think that ship has sailed. It's there - which means we need to
 make it first class.

Just out of curiosity, would you prioritize cells over split out unified 
quotas, or a split out scheduler, or split out virt drivers, or a split out 
..., or upgrades that work reliably or db quota consistency ([2]) or the other 
X things that need to be done to keep the 'openstack' ship afloat (neutron 
integration/migrations... the list can go on and on)?

To me that's the part that has always bugged me about cells, it seems like we 
have bigger 'fish to fry' to get the whole system working in a good manner 
instead of adding yet another fish into the already overwhelmed fishery (this 
is an analogy, not reality, ha). It somehow didn't/doesn't feel right that we 
have so many other pieces of the puzzle that need

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-04 Thread henry hly
Hi Monty and Cellers,

I understand that there is an installed base for Cells; these clouds are
still running, and some issues need to be addressed for daily operation.
For sure, the improvements to Cells need to be done first class, per the
community's commitment.

The introduction of OpenStack cascading is not to divide the community; it
is to address other interests that Cells is not designed for: heterogeneous
cluster integration based on the established REST API, and totally
distributed scalability (not only Nova, but also
Cinder/Neutron/Ceilometer...). Total distribution is essential for some
large cloud operators who have many data centers distributed
geographically, and heterogeneous cluster integration is a basic business
policy (different versions, different vendors, and even non-OpenStack
clusters like vCenter).

So cascading is not an alternative game to cells; both solutions can
co-exist and complement each other. Also, I don't think cellers need to
shift their work to OpenStack cascading; they can still focus on cells,
and there would not be any conflicts between the code of cells and
cascading.

Best Regards,
Wu Hongning


On Sat, Oct 4, 2014 at 5:44 AM, Monty Taylor mord...@inaugust.com wrote:

 On 09/30/2014 12:07 PM, Tim Bell wrote:
  -Original Message-
  From: John Garbutt [mailto:j...@johngarbutt.com]
  Sent: 30 September 2014 15:35
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
  OpenStack
  cascading
 
  On 30 September 2014 14:04, joehuang joehu...@huawei.com wrote:
  Hello, Dear TC and all,
 
  Large cloud operators prefer to deploy multiple OpenStack instances(as
  different zones), rather than a single monolithic OpenStack instance 
  because of
  these reasons:
 
  1) Multiple data centers distributed geographically;
  2) Multi-vendor business policy;
  3) Server nodes scale up modularized from 00's up to million;
  4) Fault and maintenance isolation between zones (only REST
  interface);
 
  At the same time, they also want to integrate these OpenStack instances 
  into
  one cloud. Instead of proprietary orchestration layer, they want to use 
  standard
  OpenStack framework for Northbound API compatibility with HEAT/Horizon or
  other 3rd ecosystem apps.
 
  We call this pattern as OpenStack Cascading, with proposal described by
  [1][2]. PoC live demo video can be found[3][4].
 
  Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in 
  the
  OpenStack cascading.
 
  Kindly ask for cross program design summit session to discuss OpenStack
  cascading and the contribution to Kilo.
 
  Kindly invite those who are interested in the OpenStack cascading to work
  together and contribute it to OpenStack.
 
  (I applied for “other projects” track [5], but it would be better to
  have a discussion as a formal cross program session, because many core
  programs are involved )
 
 
  [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
  [2] PoC source code: https://github.com/stackforge/tricircle
  [3] Live demo video at YouTube:
  https://www.youtube.com/watch?v=OSU6PYRz5qY
  [4] Live demo video at Youku (low quality, for those who can't access
  YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
  [5]
  http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395
  .html
 
  There are etherpads for suggesting cross project sessions here:
  https://wiki.openstack.org/wiki/Summit/Planning
  https://etherpad.openstack.org/p/kilo-crossproject-summit-topics
 
  I am interested in comparing this to Nova's cells concept:
  http://docs.openstack.org/trunk/config-reference/content/section_compute-
  cells.html
 
  Cells basically scales out a single datacenter region by aggregating 
  multiple child
  Nova installations with an API cell.
 
  Each child cell can be tested in isolation, via its own API, before 
  joining it up to
  an API cell, that adds it into the region. Each cell logically has its own 
  database
  and message queue, which helps get more independent failure domains. You 
  can
  use cell level scheduling to restrict people or types of instances to 
  particular
  subsets of the cloud, if required.
 
  It doesn't attempt to aggregate between regions, they are kept independent.
  Except, the usual assumption that you have a common identity between all
  regions.
 
  It also keeps a single Cinder, Glance, Neutron deployment per region.
 
  It would be great to get some help hardening, testing, and building out 
  more of
  the cells vision. I suspect we may form a new Nova subteam to try and 
  drive
  this work forward in kilo, if we can build up enough people wanting to 
  work on
  improving cells.
 
 
  At CERN, we've deployed cells at scale but are finding a number of 
  architectural issues that need resolution in the short term to attain 
  feature parity. A vision of we all run cells but some of us have only one

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-03 Thread Monty Taylor
On 09/30/2014 12:07 PM, Tim Bell wrote:
 -Original Message-
 From: John Garbutt [mailto:j...@johngarbutt.com]
 Sent: 30 September 2014 15:35
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack
 cascading

 On 30 September 2014 14:04, joehuang joehu...@huawei.com wrote:
 Hello, Dear TC and all,

 Large cloud operators prefer to deploy multiple OpenStack instances(as
 different zones), rather than a single monolithic OpenStack instance because 
 of
 these reasons:

 1) Multiple data centers distributed geographically;
 2) Multi-vendor business policy;
 3) Server nodes scale up modularized from 00's up to million;
 4) Fault and maintenance isolation between zones (only REST
 interface);

 At the same time, they also want to integrate these OpenStack instances into
 one cloud. Instead of proprietary orchestration layer, they want to use 
 standard
 OpenStack framework for Northbound API compatibility with HEAT/Horizon or
 other 3rd ecosystem apps.

 We call this pattern as OpenStack Cascading, with proposal described by
 [1][2]. PoC live demo video can be found[3][4].

 Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the
 OpenStack cascading.

 Kindly ask for cross program design summit session to discuss OpenStack
 cascading and the contribution to Kilo.

 Kindly invite those who are interested in the OpenStack cascading to work
 together and contribute it to OpenStack.

 (I applied for “other projects” track [5], but it would be better to
 have a discussion as a formal cross program session, because many core
 programs are involved )


 [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
 [2] PoC source code: https://github.com/stackforge/tricircle
 [3] Live demo video at YouTube:
 https://www.youtube.com/watch?v=OSU6PYRz5qY
 [4] Live demo video at Youku (low quality, for those who can't access
 YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
 [5]
 http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395
 .html

 There are etherpads for suggesting cross project sessions here:
 https://wiki.openstack.org/wiki/Summit/Planning
 https://etherpad.openstack.org/p/kilo-crossproject-summit-topics

 I am interested in comparing this to Nova's cells concept:
 http://docs.openstack.org/trunk/config-reference/content/section_compute-
 cells.html

 Cells basically scales out a single datacenter region by aggregating 
 multiple child
 Nova installations with an API cell.

 Each child cell can be tested in isolation, via its own API, before joining 
 it up to
 an API cell, that adds it into the region. Each cell logically has its own 
 database
 and message queue, which helps get more independent failure domains. You can
 use cell level scheduling to restrict people or types of instances to 
 particular
 subsets of the cloud, if required.

 It doesn't attempt to aggregate between regions, they are kept independent.
 Except, the usual assumption that you have a common identity between all
 regions.

 It also keeps a single Cinder, Glance, Neutron deployment per region.

 It would be great to get some help hardening, testing, and building out more 
 of
 the cells vision. I suspect we may form a new Nova subteam to try and 
 drive
 this work forward in kilo, if we can build up enough people wanting to work 
 on
 improving cells.

 
 At CERN, we've deployed cells at scale but are finding a number of 
 architectural issues that need resolution in the short term to attain feature 
 parity. A vision of "we all run cells but some of us have only one" is not 
 there yet. Typical examples are flavors, security groups and server groups, 
 all of which are not yet implemented to the necessary levels for cell 
 parent/child.
 
 We would be very keen on agreeing the strategy in Paris so that we can ensure 
 the gap is closed, test it in the gate and that future features cannot 
 'wishlist' cell support.

I agree with this. I know that there are folks who don't like cells -
but I think that ship has sailed. It's there - which means we need to
make it first class.

 Resources can be made available if we can agree on the direction, but current 
 reviews are not progressing (such as 
 https://bugs.launchpad.net/nova/+bug/1211011)
 
 Tim
 
 Thanks,
 John



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-02 Thread Duncan Thomas
So I have substantial concerns about hierarchy-based designs and data
mass - the interconnects between leaves in the hierarchy are often
going to be fairly thin, particularly if they are geographically
distributed, so the semantics of what is allowed to access which data
resource (Glance, Swift, Cinder, Manila) need some very careful
thought, and the way those restrictions are portrayed to the user to
avoid confusion needs even more thought.

On 30 September 2014 14:04, joehuang joehu...@huawei.com wrote:
 Hello, Dear TC and all,

 Large cloud operators prefer to deploy multiple OpenStack instances (as
 different zones), rather than a single monolithic OpenStack instance, for
 these reasons:

 1) Multiple data centers distributed geographically;
 2) Multi-vendor business policy;
 3) Server nodes scale out in modular units, from hundreds up to a million;
 4) Fault and maintenance isolation between zones (only a REST interface);

 At the same time, they also want to integrate these OpenStack instances into
 one cloud. Instead of a proprietary orchestration layer, they want to use the
 standard OpenStack framework for Northbound API compatibility with
 HEAT/Horizon or other 3rd-party ecosystem apps.

 We call this pattern OpenStack Cascading, with the proposal described in
 [1][2]. A PoC live demo video can be found at [3][4].

 Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in
 OpenStack cascading.

 We kindly ask for a cross-program design summit session to discuss OpenStack
 cascading and the contribution to Kilo.

 We kindly invite those who are interested in OpenStack cascading to work
 together and contribute it to OpenStack.

 (I applied for the “other projects” track [5], but it would be better to have
 a discussion as a formal cross-program session, because many core programs
 are involved.)


 [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
 [2] PoC source code: https://github.com/stackforge/tricircle
 [3] Live demo video at YouTube: https://www.youtube.com/watch?v=OSU6PYRz5qY
 [4] Live demo video at Youku (low quality, for those who can't access 
 YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
 [5] 
 http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html

 Best Regards
 Chaoyi Huang ( Joe Huang )
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-02 Thread joehuang
Hello, Duncan, 

Your substantial concerns are very welcome and important.

I agree with you that the interconnect between leaves should be fairly thin:

During the PoC, all of Nova/Cinder/Ceilometer/Neutron/Glance (Glance can
optionally be located in the leaf) in a leaf work independently from other
leaves. The only interconnect between two leaves is the tenant's L2/L3
network across OpenStacks, and that is handled by the L2/L3 proxies located
at the cascading level; the instructions are only issued one way, by the
corresponding L2/L3 proxy.
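
For illustration, here is a minimal sketch (assuming python-neutronclient,
with made-up credentials and the use of binding:profile as the carrier of the
tunnel endpoint; the real proxy code lives in the Tricircle PoC) of how an L2
proxy at the cascading level could program a cascaded POD purely through the
standard Neutron REST API:

    # Hypothetical sketch: an L2 proxy in the cascading OpenStack pushes
    # reachability info for a VM that lives in a sibling POD into one
    # cascaded POD, using only the Neutron REST API.
    from neutronclient.v2_0 import client as neutron_client

    def sync_remote_port(pod_auth_url, net_id, mac, ip, remote_tunnel_ip):
        # One client per cascaded POD; credentials are placeholders.
        neutron = neutron_client.Client(
            username='cascading_proxy', password='secret',
            tenant_name='admin', auth_url=pod_auth_url)
        # Create a port representing the remote VM, so the local POD's L2
        # population tables learn how to reach it. Carrying the tunnel
        # endpoint in binding:profile is an assumption of this sketch,
        # not the PoC's actual convention.
        return neutron.create_port({'port': {
            'network_id': net_id,
            'mac_address': mac,
            'fixed_ips': [{'ip_address': ip}],
            'binding:profile': {'tunnel_endpoint': remote_tunnel_ip},
        }})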

Also, from the Ceilometer perspective, it must work as a distributed service.
We roughly estimated how much meter data volume would be generated for a
cloud at the one-million-node level: with current Ceilometer (not including
Gnocchi) and a sampling period of 1 minute, it's about 20 GB/minute (a quite
rough estimate). Using a single Ceilometer instance is almost impossible for
such a large-scale distributed cloud. Therefore, Ceilometer cascading must be
designed very carefully.
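
As a rough back-of-envelope check of that figure (the per-sample size and
meter count below are illustrative assumptions, not PoC measurements):

    # Rough sanity check of the ~20 GB/minute estimate.
    vms = 1000 * 1000        # a cloud at the one-million-VM level
    meters_per_vm = 10       # cpu, disk, network, ... (assumed)
    sample_bytes = 2 * 1024  # ~2 KB per sample incl. metadata (assumed)
    per_minute = vms * meters_per_vm * sample_bytes  # 1-minute period
    print(per_minute / 1024.0 ** 3)  # ~19 GiB of samples per minute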

In our PoC design principle, the cascaded OpenStack works passively, and has
no knowledge of whether it is running under a cascading scenario or whether
there are sibling OpenStacks, in order to reduce the interconnect between
cascaded OpenStacks as much as possible. And one level of cascading is enough
for the foreseeable future.

The PoC team plans to stay in Paris from Oct. 29 to Nov. 8; are you
interested in a f2f workshop for a deep dive into OpenStack cascading?

Best Regards

Chaoyi Huang ( joehuang )


From: Duncan Thomas [duncan.tho...@gmail.com]
Sent: 02 October 2014 18:59
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
OpenStack cascading

So I have substantial concerns about hierarchy-based designs and data
mass - the interconnects between leaves in the hierarchy are often
going to be fairly thin, particularly if they are geographically
distributed, so the semantics of what is allowed to access which data
resource (Glance, Swift, Cinder, Manila) need some very careful
thought, and the way those restrictions are portrayed to the user to
avoid confusion needs even more thought.

On 30 September 2014 14:04, joehuang joehu...@huawei.com wrote:
 Hello, Dear TC and all,

 Large cloud operators prefer to deploy multiple OpenStack instances (as
 different zones), rather than a single monolithic OpenStack instance, for
 these reasons:

 1) Multiple data centers distributed geographically;
 2) Multi-vendor business policy;
 3) Server nodes scale out in modular units, from hundreds up to a million;
 4) Fault and maintenance isolation between zones (only a REST interface);

 At the same time, they also want to integrate these OpenStack instances into
 one cloud. Instead of a proprietary orchestration layer, they want to use the
 standard OpenStack framework for Northbound API compatibility with
 HEAT/Horizon or other 3rd-party ecosystem apps.

 We call this pattern OpenStack Cascading, with the proposal described in
 [1][2]. A PoC live demo video can be found at [3][4].

 Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in
 OpenStack cascading.

 We kindly ask for a cross-program design summit session to discuss OpenStack
 cascading and the contribution to Kilo.

 We kindly invite those who are interested in OpenStack cascading to work
 together and contribute it to OpenStack.

 (I applied for the “other projects” track [5], but it would be better to have
 a discussion as a formal cross-program session, because many core programs
 are involved.)


 [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
 [2] PoC source code: https://github.com/stackforge/tricircle
 [3] Live demo video at YouTube: https://www.youtube.com/watch?v=OSU6PYRz5qY
 [4] Live demo video at Youku (low quality, for those who can't access 
 YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
 [5] 
 http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html

 Best Regards
 Chaoyi Huang ( Joe Huang )
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-02 Thread Duncan Thomas
On 2 October 2014 14:30, joehuang joehu...@huawei.com wrote:

 In our PoC design principle, the cascaded OpenStack works passively, and has
 no knowledge of whether it is running under a cascading scenario or whether
 there are sibling OpenStacks, in order to reduce the interconnect between
 cascaded OpenStacks as much as possible.
 And one level of cascading is enough for the foreseeable future.

The transparency is what worries me: at the moment I can attach
any volume to any VM (* depending on Cinder AZ policy), which is going
to be broken in a cascaded scenario if the volume and VM are in
different leaves.

 The PoC team plans to stay in Paris from Oct. 29 to Nov. 8; are you
 interested in a f2f workshop for a deep dive into OpenStack cascading?

Definitely interested, yes please.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-02 Thread Tiwari, Arvind
Hi Huang,

Thanks for looking into my proposal.

Yes, Alliance will be utilizing/retaining all Northbound service APIs; in
addition it will expose APIs for inter-Alliance (inter-cloud) communication.
Alliance will run as the topmost layer on each individual OpenStack cloud of
a multi-site distributed cloud setup. Additionally, Alliance will provide
loosely coupled integration among multiple clouds or cloudified data centers.

In a multi-region setup, a “regional Alliance” (RA) will orchestrate resource
(project, VM, volume, network, ...) provisioning and state synchronization
through its peer RAs. In cross-enterprise integration (an
Enterprise/VPC/bursting-like scenario with a multi-site public cloud), a
“global Alliance” (GA) will be the interface for external integration,
communicating with individual RAs. I will update the wiki to make this
clearer.

I would love to coordinate with your team and solve this issue together. I
will be arriving in Paris on 1 Nov, and we can sit f2f before the session.
Let’s plan a time to meet; Monday will be easy for me.


Thanks,
Arvind



From: joehuang [mailto:joehu...@huawei.com]
Sent: Wednesday, October 01, 2014 5:01 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading


Hi, Tiwari,



Great to know you are also trying to address similar issues. For sure we are 
happy to work out a common solution for these issues.



I just went through the wiki page; the question for me is: will the Alliance
provide/retain the current northbound OpenStack API? It's very important that
the cloud still exposes the OpenStack API, so that the OpenStack API
ecosystem will not be lost.



And currently OpenStack cascading has not covered the hybrid cloud (private 
cloud and public cloud federation), so your project will be a good supplement.



May we have a f2f workshop before the formal Paris design summit, so that we
can exchange ideas fully? A 40-minute design summit session is not enough for
a deep dive. The PoC team will stay in Paris from Oct. 29 to Nov. 8.



Best Regards



Chaoyi Huang ( joehuang )




From: Tiwari, Arvind [arvind.tiw...@hp.com]
Sent: 02 October 2014 0:42
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading
Hi Chaoyi,

Thanks for sharing this information.

Some time back I started a project called “Alliance”, which is trying to
address the same concerns (see the link below). The Alliance service is
designed to provide Inter-Cloud Resource Federation, which will enable
resource sharing across clouds in distributed multi-site OpenStack
deployments. This service will run on top of each OpenStack cloud and tie
together the different cloud (or data center) instances in a distributed
cloud setup. It will work closely with the OpenStack components (Keystone,
Nova, Cinder) to manage and provision different resources (tokens, VMs,
images, networks, ...). The Alliance service will provide an abstraction to
hide interoperability and integration complexities from the underpinning
cloud instances and enable the following business use cases:

- Multi-Region Capability
- Virtual Private Cloud
- Cloud Bursting

This service will provide a true plug & play model for region expansion and
VPC-like use cases; the conceptual design can be found at
https://wiki.openstack.org/wiki/Inter_Cloud_Resource_Federation. We are
working on a PoC using this concept, which is WIP.

I will be happy to coordinate with you on this and try to come up with a
common solution; it seems we are both trying to address the same issues.

Thoughts?

Thanks,
Arvind

From: joehuang [mailto:joehu...@huawei.com]
Sent: Wednesday, October 01, 2014 6:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading

Hello, Alex,

Thank you very much for your mail about the remote clustered hypervisor.

One of the inspirations for OpenStack cascading comes from remote clustered
hypervisors like vCenter. The difference between a remote clustered
hypervisor and OpenStack cascading is that not only Nova is involved in the
cascading, but also Cinder, Neutron, Ceilometer, and even Glance (optionally).

Please refer to
https://wiki.openstack.org/wiki/OpenStack_cascading_solution#Inspiration and
https://wiki.openstack.org/wiki/OpenStack_cascading_solution#Architecture for
more detailed information.

Best Regards

Chaoyi Huang ( joehuang )


From: Alex Glikson [glik...@il.ibm.com]
Sent: 01 October 2014 12:51
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading
This sounds related to the discussion on the 'Nova clustered hypervisor driver' 
which started at Juno design summit [1

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-02 Thread joehuang
Hello, Duncan, 

Good questions. Currently, the availability zone (AZ for short) concept is
not applied to Cinder and Nova together, but separately. That is to say, the
AZs for Cinder can have no relationship to the AZs for Nova.

Under the OpenStack cascading scenario, we would like to make each cascaded
OpenStack function as a fault-isolation AZ; therefore, the AZ meaning for
Cinder and Nova is kept the same. Today this is done by configuration. And if
a volume located in another AZ2 (cascaded OpenStack) were attached to a VM
located in AZ1, it would fail, and should not be allowed.

It would be good to add an AZ enforcement check in the source code of the
proxy (no change needed in the trunk source code) to make sure the volume and
the VM are located in the same cascaded OpenStack.

It's great that you are interested in a deep dive before the design summit.
Please follow this thread for the venue and date/time.

Best Regards

Chaoyi Huang ( joehuang )


From: Duncan Thomas [duncan.tho...@gmail.com]
Sent: 02 October 2014 22:33
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
OpenStack cascading

On 2 October 2014 14:30, joehuang joehu...@huawei.com wrote:

 In our PoC design principle, the cascaded OpenStack works passively, and has
 no knowledge of whether it is running under a cascading scenario or whether
 there are sibling OpenStacks, in order to reduce the interconnect between
 cascaded OpenStacks as much as possible.
 And one level of cascading is enough for the foreseeable future.

The transparency is what worries me: at the moment I can attach
any volume to any VM (* depending on Cinder AZ policy), which is going
to be broken in a cascaded scenario if the volume and VM are in
different leaves.


 The PoC team plans to stay in Paris from Oct. 29 to Nov. 8; are you
 interested in a f2f workshop for a deep dive into OpenStack cascading?

Definitely interested, yes please.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-02 Thread joehuang
Hello, Tiwari,



Thanks for your interest. We have tried to address multi-site cloud
integration in a fully distributed manner. We found that it's OK if all
OpenStack instances work with no association, but if we want to introduce
L2/L3 networking across OpenStacks, then it's very hard to track and address
resource correlations. For example, tenant A has VM1 in OpenStack 1 and VM2
in OpenStack 2 with network N1, tenant B has VM3 in OpenStack 2 and VM4 in
OpenStack 3 with network N2, and so on; the relationship tracking and data
synchronization are very hard to address in a fully distributed way.



Could you come to Paris a little early? I am afraid we have to prepare the
live demo on Nov. 2, and Nov. 3 is a very busy day. The f2f deep dive would
be better held before Nov. 2.



Please follow this thread for the venue and date-time.

Best Regards.



Chaoyi Huang ( joehuang )




From: Tiwari, Arvind [arvind.tiw...@hp.com]
Sent: 02 October 2014 23:13
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading

Hi Huang,

Thanks for looking into my proposal.

Yes, Alliance will be utilizing/retaining all Northbound service APIs; in
addition it will expose APIs for inter-Alliance (inter-cloud) communication.
Alliance will run as the topmost layer on each individual OpenStack cloud of
a multi-site distributed cloud setup. Additionally, Alliance will provide
loosely coupled integration among multiple clouds or cloudified data centers.

In a multi-region setup, a “regional Alliance” (RA) will orchestrate resource
(project, VM, volume, network, ...) provisioning and state synchronization
through its peer RAs. In cross-enterprise integration (an
Enterprise/VPC/bursting-like scenario with a multi-site public cloud), a
“global Alliance” (GA) will be the interface for external integration,
communicating with individual RAs. I will update the wiki to make this
clearer.

I would love to coordinate with your team and solve this issue together. I
will be arriving in Paris on 1 Nov, and we can sit f2f before the session.
Let’s plan a time to meet; Monday will be easy for me.


Thanks,
Arvind



From: joehuang [mailto:joehu...@huawei.com]
Sent: Wednesday, October 01, 2014 5:01 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading


Hi, Tiwari,



Great to know you are also trying to address similar issues. For sure we are 
happy to work out a common solution for these issues.



I just went through the wiki page; the question for me is: will the Alliance
provide/retain the current northbound OpenStack API? It's very important that
the cloud still exposes the OpenStack API, so that the OpenStack API
ecosystem will not be lost.



And currently OpenStack cascading has not covered the hybrid cloud (private 
cloud and public cloud federation), so your project will be a good supplement.



May we have a f2f workshop before the formal Paris design summit, so that we
can exchange ideas fully? A 40-minute design summit session is not enough for
a deep dive. The PoC team will stay in Paris from Oct. 29 to Nov. 8.



Best Regards



Chaoyi Huang ( joehuang )




From: Tiwari, Arvind [arvind.tiw...@hp.com]
Sent: 02 October 2014 0:42
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading
Hi Chaoyi,

Thanks for sharing this information.

Some time back I started a project called “Alliance”, which is trying to
address the same concerns (see the link below). The Alliance service is
designed to provide Inter-Cloud Resource Federation, which will enable
resource sharing across clouds in distributed multi-site OpenStack
deployments. This service will run on top of each OpenStack cloud and tie
together the different cloud (or data center) instances in a distributed
cloud setup. It will work closely with the OpenStack components (Keystone,
Nova, Cinder) to manage and provision different resources (tokens, VMs,
images, networks, ...). The Alliance service will provide an abstraction to
hide interoperability and integration complexities from the underpinning
cloud instances and enable the following business use cases:

- Multi-Region Capability
- Virtual Private Cloud
- Cloud Bursting

This service will provide a true plug & play model for region expansion and
VPC-like use cases; the conceptual design can be found at
https://wiki.openstack.org/wiki/Inter_Cloud_Resource_Federation. We are
working on a PoC using this concept, which is WIP.

I will be happy to coordinate with you on this and try to come up with a
common solution; it seems we are both trying to address the same issues.

Thoughts?

Thanks,
Arvind

From: joehuang [mailto:joehu...@huawei.com]
Sent: Wednesday, October 01, 2014 6:56 AM

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-01 Thread Tom Fifield
Hi Joe,

On 01/10/14 09:10, joehuang wrote:
 OpenStack cascading: to integrate multi-site / multi-vendor OpenStack
 instances into one cloud with OpenStack API exposed.
 Cells: a single OpenStack instance scale-up methodology

Just to let you know - there are actually some users out there that use
cells to "integrate multi-site / multi-vendor OpenStack instances into one
cloud with OpenStack API exposed", and this is their main reason for using
cells - not as a scale-up methodology.


Regards,

Tom

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-01 Thread loy wolfe
Hi Joe and Cellers,

I've tried to understand the relationship between Cells and Cascading. If
Cells had been designed as below, would it be the same as Cascading?

1) Besides Nova, Neutron/Ceilometer/... are also hierarchically
structured for scalability.

2) Child-parent interaction is based on the REST OS-API, not on internal
RPC messages.

By my understanding, the core idea of Cascading is that each resource
building block (like a child cell) is a clearly separated autonomous
system, with the already defined REST OS-API as the NB integration
interface of each block; is that right?

So, what's the OAM and business value? Is it easy to add a building-block
POD into a running production cloud, while this POD is from a different
OpenStack packager and has its own deployment choices: OpenStack version
release (J/K/L...), MQ/DB type (mysql/pg, rabbitmq/zeromq...), backend
drivers, Nova/Neutron/Cinder/Ceilometer controller-node / api-server
config options...?

Best Regards
Loy


On Wed, Oct 1, 2014 at 3:19 PM, Tom Fifield t...@openstack.org wrote:

 Hi Joe,

 On 01/10/14 09:10, joehuang wrote:
   OpenStack cascading: to integrate multi-site / multi-vendor OpenStack
   instances into one cloud with OpenStack API exposed.
   Cells: a single OpenStack instance scale-up methodology

  Just to let you know - there are actually some users out there that use
  cells to "integrate multi-site / multi-vendor OpenStack instances into
  one cloud with OpenStack API exposed", and this is their main reason
  for using cells - not as a scale-up methodology.


 Regards,

 Tom


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-01 Thread joehuang
Hello, Tom, 

Thanks for your mail mentioning that some users out there use cells to
"integrate multi-site / multi-vendor OpenStack instances into one cloud with
OpenStack API exposed".

Why do I think Cells is a scale-up methodology?

1. Use case 1: all cells use shared Cinder, Neutron, Glance, as John Garbutt
mentioned in his mail. In this use case, one Cinder, Neutron, Glance instance
has to scale up for multiple sites; there is no multi-vendor Cinder, Neutron,
Glance, although each can integrate different vendors'
drivers/agents/plugins. This use case has a unified northbound OpenStack API.

2. Use case 2: each child cell has its own separate Cinder and Nova-Network.
For this use case there is no unified northbound OpenStack API, but multiple
endpoints for the upper layer.

3. Until now, only RPC has been used for inter-cell or API-cell
communication. For a multi-data-center deployment, this carries the risk of
losing management access: if the parent cell fails, no API or CLI is
available to manage the child cells. An RPC message interface cannot be used
to manage child cells.

4. OK, suppose that Cells adds a new driver to use a REST API for inter-cell
or API-cell communication; should the same then be done for
Cinder/Neutron/Glance too, or do they keep using shared
Cinder/Neutron/Glance? If it's the first choice, it is the same design as the
OpenStack cascading solution proposes. If they still use shared services, the
RPC messages across different data centers still exist, and it's still a
scale-up methodology.

Best Regards

Chaoyi Huang ( joehuang )

From: Tom Fifield [t...@openstack.org]
Sent: 01 October 2014 15:19
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading

Hi Joe,

On 01/10/14 09:10, joehuang wrote:
 OpenStack cascading: to integrate multi-site / multi-vendor OpenStack
 instances into one cloud with OpenStack API exposed.
 Cells: a single OpenStack instance scale-up methodology

Just to let you know - there are actually some users out there that use
cells to "integrate multi-site / multi-vendor OpenStack instances into one
cloud with OpenStack API exposed", and this is their main reason for using
cells - not as a scale-up methodology.


Regards,

Tom

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-01 Thread joehuang
Hello, Loy,

Thank you very much. You have already grasped the core design idea for 
OpenStack cascading:

By my understanding, the core idea of Cascading is that each resource
building block (like a child cell) is a clearly separated autonomous
system, with the already defined REST OS-API as the NB integration
interface of each block; is that right?

Yes, you are right: the cascading OpenStack (the parent) uses the already
defined REST OS-API as the NB integration interface for each building block
(which we call a cascaded OpenStack).

So, what's the OAM and business value? Is it easy to add a building-block
POD into a running production cloud, while this POD is from a different
OpenStack packager and has its own deployment choices: OpenStack version
release (J/K/L...), MQ/DB type (mysql/pg, rabbitmq/zeromq...), backend
drivers, Nova/Neutron/Cinder/Ceilometer controller-node / api-server
config options...?

In the lab, we have already dynamically added a new building-block POD
(cascaded OpenStack) into a cloud with OpenStack cascading introduced. And
each cascaded OpenStack version can be different, because we use the python
clients and OpenStack itself supports the coexistence of multiple compatible
API versions. The DB/message bus/backend drivers/controller node
configuration can certainly differ between cascaded OpenStacks.
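
For illustration, the cascading level can pin a client API version per
cascaded POD (the endpoints, versions and credentials below are made up):

    # Illustrative only: the parent talks to each cascaded POD through the
    # official python clients, so PODs may run different OpenStack releases
    # as long as a compatible API version is exposed.
    from novaclient import client as nova_client

    PODS = {
        'pod1': {'auth_url': 'http://pod1.example.com:5000/v2.0',
                 'api_version': '2'},
        'pod2': {'auth_url': 'http://pod2.example.com:5000/v2.0',
                 'api_version': '2'},
    }

    def nova_for(pod_name):
        pod = PODS[pod_name]
        return nova_client.Client(pod['api_version'], 'cascading_proxy',
                                  'secret', 'admin', pod['auth_url'])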

Best regards

Chaoyi Huang ( joehuang )


From: loy wolfe [loywo...@gmail.com]
Sent: 01 October 2014 16:13
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
OpenStack cascading

Hi Joe and Cellers,

I've tried to understand the relationship between Cells and Cascading. If
Cells had been designed as below, would it be the same as Cascading?

1) Besides Nova, Neutron/Ceilometer/... are also hierarchically
structured for scalability.

2) Child-parent interaction is based on the REST OS-API, not on internal
RPC messages.

By my understanding, the core idea of Cascading is that each resource
building block (like a child cell) is a clearly separated autonomous
system, with the already defined REST OS-API as the NB integration
interface of each block; is that right?

So, what's the OAM and business value? Is it easy to add a building-block
POD into a running production cloud, while this POD is from a different
OpenStack packager and has its own deployment choices: OpenStack version
release (J/K/L...), MQ/DB type (mysql/pg, rabbitmq/zeromq...), backend
drivers, Nova/Neutron/Cinder/Ceilometer controller-node / api-server
config options...?

Best Regards
Loy


On Wed, Oct 1, 2014 at 3:19 PM, Tom Fifield t...@openstack.org wrote:

 Hi Joe,

 On 01/10/14 09:10, joehuang wrote:
   OpenStack cascading: to integrate multi-site / multi-vendor OpenStack
   instances into one cloud with OpenStack API exposed.
   Cells: a single OpenStack instance scale-up methodology

  Just to let you know - there are actually some users out there that use
  cells to "integrate multi-site / multi-vendor OpenStack instances into
  one cloud with OpenStack API exposed", and this is their main reason
  for using cells - not as a scale-up methodology.


 Regards,

 Tom

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-01 Thread joehuang
Hello, Alex,

Thank you very much for your mail about the remote clustered hypervisor.

One of the inspirations for OpenStack cascading comes from remote clustered
hypervisors like vCenter. The difference between a remote clustered
hypervisor and OpenStack cascading is that not only Nova is involved in the
cascading, but also Cinder, Neutron, Ceilometer, and even Glance (optionally).

Please refer to
https://wiki.openstack.org/wiki/OpenStack_cascading_solution#Inspiration and
https://wiki.openstack.org/wiki/OpenStack_cascading_solution#Architecture for
more detailed information.

Best Regards

Chaoyi Huang ( joehuang )


From: Alex Glikson [glik...@il.ibm.com]
Sent: 01 October 2014 12:51
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading

This sounds related to the discussion on the 'Nova clustered hypervisor
driver' which started at the Juno design summit [1]. Talking to another
OpenStack should be similar to talking to vCenter. The idea was that the
Cells support could be refactored around this notion as well.
Not sure whether there has been any active progress on this in Juno, though.

Regards,
Alex


[1] http://junodesignsummit.sched.org/event/a0d38e1278182eb09f06e22457d94c0c#
[2] https://etherpad.openstack.org/p/juno-nova-clustered-hypervisor-support




From: joehuang joehu...@huawei.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: 30/09/2014 04:08 PM
Subject: [openstack-dev] [all] [tc] Multi-clouds integration by
OpenStack cascading




Hello, Dear TC and all,

Large cloud operators prefer to deploy multiple OpenStack instances (as
different zones), rather than a single monolithic OpenStack instance, for
these reasons:

1) Multiple data centers distributed geographically;
2) Multi-vendor business policy;
3) Server nodes scale out in modular units, from hundreds up to a million;
4) Fault and maintenance isolation between zones (only a REST interface);

At the same time, they also want to integrate these OpenStack instances into
one cloud. Instead of a proprietary orchestration layer, they want to use the
standard OpenStack framework for Northbound API compatibility with
HEAT/Horizon or other 3rd-party ecosystem apps.

We call this pattern OpenStack Cascading, with the proposal described in
[1][2]. A PoC live demo video can be found at [3][4].

Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in
OpenStack cascading.

We kindly ask for a cross-program design summit session to discuss OpenStack
cascading and the contribution to Kilo.

We kindly invite those who are interested in OpenStack cascading to work
together and contribute it to OpenStack.

(I applied for the “other projects” track [5], but it would be better to have
a discussion as a formal cross-program session, because many core programs
are involved.)


[1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
[2] PoC source code: https://github.com/stackforge/tricircle
[3] Live demo video at YouTube: https://www.youtube.com/watch?v=OSU6PYRz5qY
[4] Live demo video at Youku (low quality, for those who can't access 
YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
[5] http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html

Best Regards
Chaoyi Huang ( Joe Huang )

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-01 Thread Tiwari, Arvind
Hi Chaoyi,

Thanks for sharing this information.

Some time back I started a project called “Alliance”, which is trying to
address the same concerns (see the link below). The Alliance service is
designed to provide Inter-Cloud Resource Federation, which will enable
resource sharing across clouds in distributed multi-site OpenStack
deployments. This service will run on top of each OpenStack cloud and tie
together the different cloud (or data center) instances in a distributed
cloud setup. It will work closely with the OpenStack components (Keystone,
Nova, Cinder) to manage and provision different resources (tokens, VMs,
images, networks, ...). The Alliance service will provide an abstraction to
hide interoperability and integration complexities from the underpinning
cloud instances and enable the following business use cases:

- Multi-Region Capability
- Virtual Private Cloud
- Cloud Bursting

This service will provide a true plug & play model for region expansion and
VPC-like use cases; the conceptual design can be found at
https://wiki.openstack.org/wiki/Inter_Cloud_Resource_Federation. We are
working on a PoC using this concept, which is WIP.

I will be happy to coordinate with you on this and try to come up with a
common solution; it seems we are both trying to address the same issues.

Thoughts?

Thanks,
Arvind

From: joehuang [mailto:joehu...@huawei.com]
Sent: Wednesday, October 01, 2014 6:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading

Hello, Alex,

Thank you very much for your mail about the remote clustered hypervisor.

One of the inspirations for OpenStack cascading comes from remote clustered
hypervisors like vCenter. The difference between a remote clustered
hypervisor and OpenStack cascading is that not only Nova is involved in the
cascading, but also Cinder, Neutron, Ceilometer, and even Glance (optionally).

Please refer to
https://wiki.openstack.org/wiki/OpenStack_cascading_solution#Inspiration and
https://wiki.openstack.org/wiki/OpenStack_cascading_solution#Architecture for
more detailed information.

Best Regards

Chaoyi Huang ( joehuang )


From: Alex Glikson [glik...@il.ibm.com]
Sent: 01 October 2014 12:51
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading
This sounds related to the discussion on the 'Nova clustered hypervisor
driver' which started at the Juno design summit [1]. Talking to another
OpenStack should be similar to talking to vCenter. The idea was that the
Cells support could be refactored around this notion as well.
Not sure whether there has been any active progress on this in Juno, though.

Regards,
Alex


[1] http://junodesignsummit.sched.org/event/a0d38e1278182eb09f06e22457d94c0c
[2] https://etherpad.openstack.org/p/juno-nova-clustered-hypervisor-support




From: joehuang joehu...@huawei.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: 30/09/2014 04:08 PM
Subject: [openstack-dev] [all] [tc] Multi-clouds integration by
OpenStack cascading




Hello, Dear TC and all,

Large cloud operators prefer to deploy multiple OpenStack instances (as
different zones), rather than a single monolithic OpenStack instance, for
these reasons:

1) Multiple data centers distributed geographically;
2) Multi-vendor business policy;
3) Server nodes scale out in modular units, from hundreds up to a million;
4) Fault and maintenance isolation between zones (only a REST interface);

At the same time, they also want to integrate these OpenStack instances into
one cloud. Instead of a proprietary orchestration layer, they want to use the
standard OpenStack framework for Northbound API compatibility with
HEAT/Horizon or other 3rd-party ecosystem apps.

We call this pattern OpenStack Cascading, with the proposal described in
[1][2]. A PoC live demo video can be found at [3][4].

Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in
OpenStack cascading.

We kindly ask for a cross-program design summit session to discuss OpenStack
cascading and the contribution to Kilo.

We kindly invite those who are interested in OpenStack cascading to work
together and contribute it to OpenStack.

(I applied for the “other projects” track [5], but it would be better to have
a discussion as a formal cross-program session, because many core programs
are involved.)


[1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
[2] PoC source code: https://github.com/stackforge/tricircle
[3] Live demo video at YouTube: https://www.youtube.com/watch?v=OSU6PYRz5qY
[4] Live demo video at Youku (low quality, for those who can't access 
YouTube):http

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-01 Thread joehuang
Hi, Tiwari,



Great to know you are also trying to address similar issues. For sure we are 
happy to work out a common solution for these issues.



I just went through the wiki page; the question for me is: will the Alliance
provide/retain the current northbound OpenStack API? It's very important that
the cloud still exposes the OpenStack API, so that the OpenStack API
ecosystem will not be lost.



And currently OpenStack cascading has not covered the hybrid cloud (private 
cloud and public cloud federation), so your project will be a good supplement.



May we have a f2f workshop before the formal Paris design summit, so that we
can exchange ideas fully? A 40-minute design summit session is not enough for
a deep dive. The PoC team will stay in Paris from Oct. 29 to Nov. 8.



Best Regards



Chaoyi Huang ( joehuang )




From: Tiwari, Arvind [arvind.tiw...@hp.com]
Sent: 02 October 2014 0:42
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading

Hi Chaoyi,

Thanks for sharing this information.

Some time back I started a project called “Alliance”, which is trying to
address the same concerns (see the link below). The Alliance service is
designed to provide Inter-Cloud Resource Federation, which will enable
resource sharing across clouds in distributed multi-site OpenStack
deployments. This service will run on top of each OpenStack cloud and tie
together the different cloud (or data center) instances in a distributed
cloud setup. It will work closely with the OpenStack components (Keystone,
Nova, Cinder) to manage and provision different resources (tokens, VMs,
images, networks, ...). The Alliance service will provide an abstraction to
hide interoperability and integration complexities from the underpinning
cloud instances and enable the following business use cases:

- Multi-Region Capability
- Virtual Private Cloud
- Cloud Bursting

This service will provide a true plug & play model for region expansion and
VPC-like use cases; the conceptual design can be found at
https://wiki.openstack.org/wiki/Inter_Cloud_Resource_Federation. We are
working on a PoC using this concept, which is WIP.

I will be happy to coordinate with you on this and try to come up with a
common solution; it seems we are both trying to address the same issues.

Thoughts?

Thanks,
Arvind

From: joehuang [mailto:joehu...@huawei.com]
Sent: Wednesday, October 01, 2014 6:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading

Hello, Alex,

Thank you very much for your mail about the remote clustered hypervisor.

One of the inspirations for OpenStack cascading comes from remote clustered
hypervisors like vCenter. The difference between a remote clustered
hypervisor and OpenStack cascading is that not only Nova is involved in the
cascading, but also Cinder, Neutron, Ceilometer, and even Glance (optionally).

Please refer to
https://wiki.openstack.org/wiki/OpenStack_cascading_solution#Inspiration and
https://wiki.openstack.org/wiki/OpenStack_cascading_solution#Architecture for
more detailed information.

Best Regards

Chaoyi Huang ( joehuang )


From: Alex Glikson [glik...@il.ibm.com]
Sent: 01 October 2014 12:51
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading
This sounds related to the discussion on the 'Nova clustered hypervisor
driver' which started at the Juno design summit [1]. Talking to another
OpenStack should be similar to talking to vCenter. The idea was that the
Cells support could be refactored around this notion as well.
Not sure whether there has been any active progress on this in Juno, though.

Regards,
Alex


[1] http://junodesignsummit.sched.org/event/a0d38e1278182eb09f06e22457d94c0c
[2] https://etherpad.openstack.org/p/juno-nova-clustered-hypervisor-support




From: joehuang joehu...@huawei.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: 30/09/2014 04:08 PM
Subject: [openstack-dev] [all] [tc] Multi-clouds integration by
OpenStack cascading




Hello, Dear TC and all,

Large cloud operators prefer to deploy multiple OpenStack instances (as
different zones), rather than a single monolithic OpenStack instance, for
these reasons:

1) Multiple data centers distributed geographically;
2) Multi-vendor business policy;
3) Server nodes scale out in modular units, from hundreds up to a million;
4) Fault and maintenance isolation between zones (only a REST interface);

At the same time, they also want to integrate these OpenStack instances into 
one cloud

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread John Garbutt
On 30 September 2014 14:04, joehuang joehu...@huawei.com wrote:
 Hello, Dear TC and all,

 Large cloud operators prefer to deploy multiple OpenStack instances (as
 different zones), rather than a single monolithic OpenStack instance, for
 these reasons:

 1) Multiple data centers distributed geographically;
 2) Multi-vendor business policy;
 3) Server nodes scale out in modular units, from hundreds up to a million;
 4) Fault and maintenance isolation between zones (only a REST interface);

 At the same time, they also want to integrate these OpenStack instances into
 one cloud. Instead of a proprietary orchestration layer, they want to use the
 standard OpenStack framework for Northbound API compatibility with
 HEAT/Horizon or other 3rd-party ecosystem apps.

 We call this pattern OpenStack Cascading, with the proposal described in
 [1][2]. A PoC live demo video can be found at [3][4].

 Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in
 OpenStack cascading.

 We kindly ask for a cross-program design summit session to discuss OpenStack
 cascading and the contribution to Kilo.

 We kindly invite those who are interested in OpenStack cascading to work
 together and contribute it to OpenStack.

 (I applied for the “other projects” track [5], but it would be better to have
 a discussion as a formal cross-program session, because many core programs
 are involved.)


 [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
 [2] PoC source code: https://github.com/stackforge/tricircle
 [3] Live demo video at YouTube: https://www.youtube.com/watch?v=OSU6PYRz5qY
 [4] Live demo video at Youku (low quality, for those who can't access 
 YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
 [5] 
 http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html

There are etherpads for suggesting cross project sessions here:
https://wiki.openstack.org/wiki/Summit/Planning
https://etherpad.openstack.org/p/kilo-crossproject-summit-topics

I am interested in comparing this to Nova's cells concept:
http://docs.openstack.org/trunk/config-reference/content/section_compute-cells.html

Cells basically scales out a single datacenter region by aggregating
multiple child Nova installations under an API cell.

Each child cell can be tested in isolation, via its own API, before
joining it up to an API cell, which adds it into the region. Each cell
logically has its own database and message queue, which helps get more
independent failure domains. You can use cell-level scheduling to
restrict people or types of instances to particular subsets of the
cloud, if required.

It doesn't attempt to aggregate between regions; they are kept
independent, except for the usual assumption that you have a common
identity service between all regions.

It also keeps a single Cinder, Glance, Neutron deployment per region.

It would be great to get some help hardening, testing, and building
out more of the cells vision. I suspect we may form a new Nova subteam
to try and drive this work forward in Kilo, if we can build up
enough people wanting to work on improving cells.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread John Griffith
On Tue, Sep 30, 2014 at 7:35 AM, John Garbutt j...@johngarbutt.com wrote:

 On 30 September 2014 14:04, joehuang joehu...@huawei.com wrote:
  Hello, Dear TC and all,
 
  Large cloud operators prefer to deploy multiple OpenStack instances (as
  different zones), rather than a single monolithic OpenStack instance, for
  these reasons:

  1) Multiple data centers distributed geographically;
  2) Multi-vendor business policy;
  3) Server nodes scale out in modular units, from hundreds up to a million;
  4) Fault and maintenance isolation between zones (only a REST interface);

  At the same time, they also want to integrate these OpenStack instances
  into one cloud. Instead of a proprietary orchestration layer, they want to
  use the standard OpenStack framework for Northbound API compatibility with
  HEAT/Horizon or other 3rd-party ecosystem apps.

  We call this pattern OpenStack Cascading, with the proposal described in
  [1][2]. A PoC live demo video can be found at [3][4].

  Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in
  OpenStack cascading.

  We kindly ask for a cross-program design summit session to discuss
  OpenStack cascading and the contribution to Kilo.

  We kindly invite those who are interested in OpenStack cascading to work
  together and contribute it to OpenStack.

  (I applied for the “other projects” track [5], but it would be better to
  have a discussion as a formal cross-program session, because many core
  programs are involved.)
 
 
  [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
  [2] PoC source code: https://github.com/stackforge/tricircle
  [3] Live demo video at YouTube:
 https://www.youtube.com/watch?v=OSU6PYRz5qY
  [4] Live demo video at Youku (low quality, for those who can't access
 YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
  [5]
 http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html

 There are etherpads for suggesting cross project sessions here:
 https://wiki.openstack.org/wiki/Summit/Planning
 https://etherpad.openstack.org/p/kilo-crossproject-summit-topics

  I am interested in comparing this to Nova's cells concept:

  http://docs.openstack.org/trunk/config-reference/content/section_compute-cells.html

  Cells basically scales out a single datacenter region by aggregating
  multiple child Nova installations under an API cell.

  Each child cell can be tested in isolation, via its own API, before
  joining it up to an API cell, which adds it into the region. Each cell
  logically has its own database and message queue, which helps get more
  independent failure domains. You can use cell-level scheduling to
  restrict people or types of instances to particular subsets of the
  cloud, if required.

  It doesn't attempt to aggregate between regions; they are kept
  independent, except for the usual assumption that you have a common
  identity service between all regions.

  It also keeps a single Cinder, Glance, Neutron deployment per region.

  It would be great to get some help hardening, testing, and building out
  more of the cells vision. I suspect we may form a new Nova subteam to try
  and drive this work forward in Kilo, if we can build up enough people
  wanting to work on improving cells.

 Thanks,
 John



Interesting idea. To be honest, when TripleO was first announced, what you
have here is more along the lines of what I envisioned. It seems that this
would have some interesting wins in terms of upgrades, migrations and
scaling in general. Anyway, you should propose it on the etherpad as John
G (the other John G :) ) recommended; I'd love to dig deeper into this.

Thanks,
John
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread Joe Gordon
On Tue, Sep 30, 2014 at 6:04 AM, joehuang joehu...@huawei.com wrote:

 Hello, Dear TC and all,

 Large cloud operators prefer to deploy multiple OpenStack instances (as
 different zones), rather than a single monolithic OpenStack instance,
 for these reasons:

 1) Multiple data centers distributed geographically;
 2) Multi-vendor business policy;
 3) Server nodes scale out in modular units, from hundreds up to a million;
 4) Fault and maintenance isolation between zones (only a REST interface);

 At the same time, they also want to integrate these OpenStack instances
 into one cloud. Instead of a proprietary orchestration layer, they want to
 use the standard OpenStack framework for Northbound API compatibility with
 HEAT/Horizon or other 3rd-party ecosystem apps.

 We call this pattern OpenStack Cascading, with the proposal described in
 [1][2]. A PoC live demo video can be found at [3][4].

 Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in
 OpenStack cascading.

 We kindly ask for a cross-program design summit session to discuss
 OpenStack cascading and the contribution to Kilo.


Cross-program design summit sessions should be used for things that we are
unable to make progress on via this mailing list, and not as a way to begin
new conversations. With that in mind, I think this thread is a good place
to get initial feedback on the idea and possibly make a plan for how to
tackle this.



 We kindly invite those who are interested in OpenStack cascading to work
 together and contribute it to OpenStack.

 (I applied for the “other projects” track [5], but it would be better to
 have a discussion as a formal cross-program session, because many core
 programs are involved.)


 [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
 [2] PoC source code: https://github.com/stackforge/tricircle
 [3] Live demo video at YouTube:
 https://www.youtube.com/watch?v=OSU6PYRz5qY
 [4] Live demo video at Youku (low quality, for those who can't access
 YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
 [5]
 http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html

 Best Regards
 Chaoyi Huang ( Joe Huang )

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread Tim Bell
 -Original Message-
 From: John Garbutt [mailto:j...@johngarbutt.com]
 Sent: 30 September 2014 15:35
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack
 cascading
 
 On 30 September 2014 14:04, joehuang joehu...@huawei.com wrote:
  Hello, Dear TC and all,
 
  Large cloud operators prefer to deploy multiple OpenStack instances (as
  different zones), rather than a single monolithic OpenStack instance, for
  these reasons:

  1) Multiple data centers distributed geographically;
  2) Multi-vendor business policy;
  3) Server nodes scale out in modular units, from hundreds up to a million;
  4) Fault and maintenance isolation between zones (only a REST interface);

  At the same time, they also want to integrate these OpenStack instances
  into one cloud. Instead of a proprietary orchestration layer, they want to
  use the standard OpenStack framework for Northbound API compatibility with
  HEAT/Horizon or other 3rd-party ecosystem apps.

  We call this pattern OpenStack Cascading, with the proposal described in
  [1][2]. A PoC live demo video can be found at [3][4].

  Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in
  OpenStack cascading.

  We kindly ask for a cross-program design summit session to discuss
  OpenStack cascading and the contribution to Kilo.

  We kindly invite those who are interested in OpenStack cascading to work
  together and contribute it to OpenStack.

  (I applied for the “other projects” track [5], but it would be better to
  have a discussion as a formal cross-program session, because many core
  programs are involved.)
 
 
  [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
  [2] PoC source code: https://github.com/stackforge/tricircle
  [3] Live demo video at YouTube:
  https://www.youtube.com/watch?v=OSU6PYRz5qY
  [4] Live demo video at Youku (low quality, for those who can't access
  YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
  [5] http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html
 
 There are etherpads for suggesting cross project sessions here:
 https://wiki.openstack.org/wiki/Summit/Planning
 https://etherpad.openstack.org/p/kilo-crossproject-summit-topics
 
  I am interested in comparing this to Nova's cells concept:
  http://docs.openstack.org/trunk/config-reference/content/section_compute-cells.html
 
 Cells basically scales out a single datacenter region by aggregating multiple 
 child
 Nova installations with an API cell.
 
 Each child cell can be tested in isolation, via its own API, before joining 
 it up to
 an API cell, that adds it into the region. Each cell logically has its own 
 database
 and message queue, which helps get more independent failure domains. You can
 use cell level scheduling to restrict people or types of instances to 
 particular
 subsets of the cloud, if required.
 
 It doesn't attempt to aggregate between regions, they are kept independent.
 Except, the usual assumption that you have a common identity between all
 regions.
 
 It also keeps a single Cinder, Glance, Neutron deployment per region.
 
 It would be great to get some help hardening, testing, and building out more 
 of
 the cells vision. I suspect we may form a new Nova subteam to try to drive
 this work forward in kilo, if we can build up enough people wanting to work on
 improving cells.
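 
 For concreteness, a cells v1 deployment is wired together roughly like this
 (a sketch based on the cells v1 docs, not a verified config; option names
 may differ between releases):
 
     # nova.conf in the top-level API cell (sketch; cells v1)
     [DEFAULT]
     compute_api_class = nova.compute.cells_api.ComputeCellsAPI
     [cells]
     enable = True
     name = api
     cell_type = api
 
     # nova.conf in each child cell
     [cells]
     enable = True
     name = cell1
     cell_type = compute
 
     # registering the child with the API cell (child's MQ details)
     nova-manage cell create --name=cell1 --cell_type=child \
       --username=guest --password=guest --hostname=child-mq \
       --port=5672 --virtual_host=/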
 

At CERN, we've deployed cells at scale but are finding a number of 
architectural issues that need resolution in the short term to attain feature 
parity. A vision of "we all run cells, but some of us have only one" is not 
there yet. Typical examples are flavors, security groups and server groups, all 
of which are not yet implemented to the necessary levels for cell parent/child.

We would be very keen on agreeing the strategy in Paris so that we can ensure 
the gap is closed, test it in the gate, and ensure that future features cannot 
simply relegate cell support to the 'wishlist'.

Resources can be made available if we can agree the direction but current 
reviews are not progressing (such as 
https://bugs.launchpad.net/nova/+bug/1211011)

Tim

 Thanks,
 John
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread Joshua Harlow
So this does seem a lot like cells, but makes cells appear in the other projects.

IMHO the same problems that occur in cells appear here in that we are 
sacrificing consistency of the already-problematic systems that exist today 
to gain scale (and to gain more inconsistency). Every time I see a 'the parent 
OpenStack manage many child OpenStacks by using standard OpenStack API' in that 
wiki I wonder how the parent will resolve inconsistencies that exist in 
children (likely it can't). How do quotas work across parent/children, how do 
race conditions get resolved...

IMHO I'd rather stick with the less scalable distributed system we have, iron 
out its quirks, fix the quota (via whatever that project is named now), split 
out the nova/... drivers so they can be maintainable in various projects, fix 
the various already inconsistent state machines that exist, split out the 
scheduler into its own project so that can be shared... All of the mentioned 
things improve scale and improve tolerance to individual failures rather than 
create a whole new level of 'pain' via a tightly bound set of proxies, 
cascading hierarchies... Managing this whole set of cascading clusters and such 
also would seem to be an operational management nightmare that I'm not sure is 
justified at the current time being (when operators already have enough trouble 
with the current code bases).

How I imagine this working out (in my view):

* Split out the shared services (gantt, scheduler, quotas...) into real SOA 
services that everyone can use.
* Have cinder-api, nova-api, neutron-api integrate with the split out services 
to obtain consistent views of the world when performing API operations.
* Have cinder, nova, neutron provide 'workers' (nova-compute is a basic worker) 
that can be scaled out across all your clusters and interconnected to a type of 
conductor node in some manner (mq?), and have the outcome of cinder-api, 
nova-api, neutron-api be a workflow that some service (conductor/s?) ensures 
occurs reliably (or aborts). This makes it so that cinder-api, nova-api... can 
scale at will, conductors can scale at will and so can worker nodes (see the 
sketch after this list)...
* Profit!
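
A toy sketch of that split (the service semantics here are made up; only the 
TaskFlow calls are real): the API layer builds a flow, and a conductor-like 
engine runs it reliably or reverts it.

    from taskflow import engines, task
    from taskflow.patterns import linear_flow

    class ReserveQuota(task.Task):
        # would call the split-out quota service
        def execute(self, project_id):
            print("reserve quota for %s" % project_id)

        # run automatically if a later task fails
        def revert(self, project_id, **kwargs):
            print("release quota for %s" % project_id)

    class BootInstance(task.Task):
        # would hand the boot off to a worker over MQ
        def execute(self, project_id):
            print("boot instance for %s" % project_id)

    flow = linear_flow.Flow("boot").add(ReserveQuota(), BootInstance())
    engines.run(flow, store={"project_id": "demo"})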

TL;DR: It would seem like this adds more complexity, not less, and I'm not sure 
complexity is what OpenStack needs more of right now...

-Josh

On Sep 30, 2014, at 6:04 AM, joehuang joehu...@huawei.com wrote:

 Hello, Dear TC and all, 
 
 Large cloud operators prefer to deploy multiple OpenStack instances(as 
 different zones), rather than a single monolithic OpenStack instance because 
 of these reasons:
 
 1) Multiple data centers distributed geographically;
 2) Multi-vendor business policy;
 3) Server nodes scale up modularly, from hundreds (00's) up to a million;
 4) Fault and maintenance isolation between zones (only REST interface);
 
 At the same time, they also want to integrate these OpenStack instances into 
 one cloud. Instead of proprietary orchestration layer, they want to use 
 standard OpenStack framework for Northbound API compatibility with 
 HEAT/Horizon or other 3rd ecosystem apps.
 
 We call this pattern as OpenStack Cascading, with proposal described by 
 [1][2]. PoC live demo video can be found[3][4].
 
 Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the 
 OpenStack cascading. 
 
 Kindly ask for cross program design summit session to discuss OpenStack 
 cascading and the contribution to Kilo. 
 
 Kindly invite those who are interested in the OpenStack cascading to work 
 together and contribute it to OpenStack. 
 
 (I applied for “other projects” track [5], but it would be better to have a 
 discussion as a formal cross program session, because many core programs are 
 involved )
 
 
 [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
 [2] PoC source code: https://github.com/stackforge/tricircle
 [3] Live demo video at YouTube: https://www.youtube.com/watch?v=OSU6PYRz5qY
 [4] Live demo video at Youku (low quality, for those who can't access 
 YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
 [5] 
 http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html
 
 Best Regards
 Chaoyi Huang ( Joe Huang )
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread Andrew Laski


On 09/30/2014 03:07 PM, Tim Bell wrote:

-Original Message-
From: John Garbutt [mailto:j...@johngarbutt.com]
Sent: 30 September 2014 15:35
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack
cascading

On 30 September 2014 14:04, joehuang joehu...@huawei.com wrote:

Hello, Dear TC and all,

Large cloud operators prefer to deploy multiple OpenStack instances(as

different zones), rather than a single monolithic OpenStack instance because of
these reasons:

1) Multiple data centers distributed geographically;
2) Multi-vendor business policy;
3) Server nodes scale up modularly, from hundreds (00's) up to a million;
4) Fault and maintenance isolation between zones (only REST
interface);

At the same time, they also want to integrate these OpenStack instances into

one cloud. Instead of proprietary orchestration layer, they want to use standard
OpenStack framework for Northbound API compatibility with HEAT/Horizon or
other 3rd ecosystem apps.

We call this pattern as OpenStack Cascading, with proposal described by

[1][2]. PoC live demo video can be found[3][4].

Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the

OpenStack cascading.

Kindly ask for cross program design summit session to discuss OpenStack

cascading and the contribution to Kilo.

Kindly invite those who are interested in the OpenStack cascading to work

together and contribute it to OpenStack.

(I applied for “other projects” track [5], but it would be better to
have a discussion as a formal cross program session, because many core
programs are involved )


[1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
[2] PoC source code: https://github.com/stackforge/tricircle
[3] Live demo video at YouTube:
https://www.youtube.com/watch?v=OSU6PYRz5qY
[4] Live demo video at Youku (low quality, for those who can't access
YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
[5]
http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395
.html

There are etherpads for suggesting cross project sessions here:
https://wiki.openstack.org/wiki/Summit/Planning
https://etherpad.openstack.org/p/kilo-crossproject-summit-topics

I am interested in comparing this to Nova's cells concept:
http://docs.openstack.org/trunk/config-reference/content/section_compute-
cells.html

Cells basically scales out a single datacenter region by aggregating multiple 
child
Nova installations with an API cell.

Each child cell can be tested in isolation, via its own API, before joining it 
up to
an API cell, that adds it into the region. Each cell logically has its own 
database
and message queue, which helps get more independent failure domains. You can
use cell level scheduling to restrict people or types of instances to particular
subsets of the cloud, if required.

It doesn't attempt to aggregate between regions, they are kept independent.
Except, the usual assumption that you have a common identity between all
regions.

It also keeps a single Cinder, Glance, Neutron deployment per region.

It would be great to get some help hardening, testing, and building out more of
the cells vision. I suspect we may form a new Nova subteam to try to drive
this work forward in kilo, if we can build up enough people wanting to work on
improving cells.


At CERN, we've deployed cells at scale but are finding a number of architectural issues 
that need resolution in the short term to attain feature parity. A vision of "we all 
run cells, but some of us have only one" is not there yet. Typical examples are 
flavors, security groups and server groups, all of which are not yet implemented to the 
necessary levels for cell parent/child.

We would be very keen on agreeing the strategy in Paris so that we can ensure 
the gap is closed, test it in the gate, and ensure that future features cannot 
simply relegate cell support to the 'wishlist'.

Resources can be made available if we can agree the direction but current 
reviews are not progressing (such as 
https://bugs.launchpad.net/nova/+bug/1211011)


I am working on putting together this strategy so we can discuss it in 
Paris.  I, and perhaps a few others, will be spending time on this in 
Kilo so that these things do progress.


There are some good ideas in this thread and scaling out is a concern we 
need to continually work on.  But we do have a solution that addresses 
this to an extent so I think the conversation should be about how we 
scale past cells, not replicate it.





Tim


Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread Adam Young

On 09/30/2014 12:10 PM, John Griffith wrote:



On Tue, Sep 30, 2014 at 7:35 AM, John Garbutt j...@johngarbutt.com 
mailto:j...@johngarbutt.com wrote:


On 30 September 2014 14:04, joehuang joehu...@huawei.com
mailto:joehu...@huawei.com wrote:
 Hello, Dear TC and all,

 Large cloud operators prefer to deploy multiple OpenStack
instances(as different zones), rather than a single monolithic
OpenStack instance because of these reasons:

 1) Multiple data centers distributed geographically;
 2) Multi-vendor business policy;
 3) Server nodes scale up modularly, from hundreds (00's) up to a million;
 4) Fault and maintenance isolation between zones (only REST
interface);

 At the same time, they also want to integrate these OpenStack
instances into one cloud. Instead of proprietary orchestration
layer, they want to use standard OpenStack framework for
Northbound API compatibility with HEAT/Horizon or other 3rd
ecosystem apps.

 We call this pattern as OpenStack Cascading, with proposal
described by [1][2]. PoC live demo video can be found[3][4].

 Nova, Cinder, Neutron, Ceilometer and Glance (optional) are
involved in the OpenStack cascading.

 Kindly ask for cross program design summit session to discuss
OpenStack cascading and the contribution to Kilo.

 Kindly invite those who are interested in the OpenStack
cascading to work together and contribute it to OpenStack.

 (I applied for “other projects” track [5], but it would be
better to have a discussion as a formal cross program session,
because many core programs are involved )


 [1] wiki:
https://wiki.openstack.org/wiki/OpenStack_cascading_solution
 [2] PoC source code: https://github.com/stackforge/tricircle
 [3] Live demo video at YouTube:
https://www.youtube.com/watch?v=OSU6PYRz5qY
 [4] Live demo video at Youku (low quality, for those who can't
access YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
 [5]
http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html

There are etherpads for suggesting cross project sessions here:
https://wiki.openstack.org/wiki/Summit/Planning
https://etherpad.openstack.org/p/kilo-crossproject-summit-topics

I am interested in comparing this to Nova's cells concept:

http://docs.openstack.org/trunk/config-reference/content/section_compute-cells.html

Cells basically scales out a single datacenter region by aggregating
multiple child Nova installations with an API cell.

Each child cell can be tested in isolation, via its own API, before
joining it up to an API cell, that adds it into the region. Each cell
logically has its own database and message queue, which helps get more
independent failure domains. You can use cell level scheduling to
restrict people or types of instances to particular subsets of the
cloud, if required.

It doesn't attempt to aggregate between regions, they are kept
independent. Except, the usual assumption that you have a common
identity between all regions.

It also keeps a single Cinder, Glance, Neutron deployment per region.



I'm starting on work to support a comparable mechanism to share data 
between Keystone servers.


http://adam.younglogic.com/2014/09/multiple-signers/



It would be great to get some help hardening, testing, and building
out more of the cells vision. I suspect we may form a new Nova subteam
to try to drive this work forward in kilo, if we can build up
enough people wanting to work on improving cells.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Interesting idea; to be honest, when TripleO was first announced, what 
you have here is more along the lines of what I envisioned.  It seems 
that this would have some interesting wins in terms of upgrades, 
migrations and scaling in general.  Anyway, you should propose it to 
the etherpad as John G ( the other John G :) ) recommended; I'd love 
to dig deeper into this.








Thanks,
John



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread joehuang
Hello, John Garbutt

Thank you for your message. I will register a cross-project topic following the 
link.

The major difference between Cells and OpenStack cascading is the problem 
domain:

OpenStack cascading: to integrate multi-site / multi-vendor OpenStack instances 
into one cloud with the OpenStack API exposed.
Cells: a scale-up methodology within a single OpenStack instance.

Therefore, there is no conflict between Cells and OpenStack cascading. They can 
be used for different scenarios, and Cells can also be used as the cascaded 
OpenStack (the child OpenStack). The proxy running in a virtual host node of 
the cascading OpenStack simply replays requests against the child's standard 
API, as sketched below.
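
A rough Python sketch of such a proxy call (the helper function is 
hypothetical; only the python-novaclient usage is standard):

    from novaclient import client

    def spawn_in_child(child_auth, name, image, flavor, net_id):
        # child_auth: credentials/endpoint of the cascaded OpenStack
        nova = client.Client("2", **child_auth)
        server = nova.servers.create(name=name, image=image, flavor=flavor,
                                     nics=[{"net-id": net_id}])
        # the child-side uuid is recorded against the parent-side uuid
        return server.id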

Best Regards
 
Chaoyi Huang ( joehuang)


From: John Garbutt [j...@johngarbutt.com]
Sent: 30 September 2014 21:35
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
OpenStack cascading

On 30 September 2014 14:04, joehuang joehu...@huawei.com wrote:
 Hello, Dear TC and all,

 Large cloud operators prefer to deploy multiple OpenStack instances(as 
 different zones), rather than a single monolithic OpenStack instance because 
 of these reasons:

 1) Multiple data centers distributed geographically;
 2) Multi-vendor business policy;
 3) Server nodes scale up modularly, from hundreds (00's) up to a million;
 4) Fault and maintenance isolation between zones (only REST interface);

 At the same time, they also want to integrate these OpenStack instances into 
 one cloud. Instead of proprietary orchestration layer, they want to use 
 standard OpenStack framework for Northbound API compatibility with 
 HEAT/Horizon or other 3rd ecosystem apps.

 We call this pattern as OpenStack Cascading, with proposal described by 
 [1][2]. PoC live demo video can be found[3][4].

 Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the 
 OpenStack cascading.

 Kindly ask for cross program design summit session to discuss OpenStack 
 cascading and the contribution to Kilo.

 Kindly invite those who are interested in the OpenStack cascading to work 
 together and contribute it to OpenStack.

 (I applied for “other projects” track [5], but it would be better to have a 
 discussion as a formal cross program session, because many core programs are 
 involved )


 [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
 [2] PoC source code: https://github.com/stackforge/tricircle
 [3] Live demo video at YouTube: https://www.youtube.com/watch?v=OSU6PYRz5qY
 [4] Live demo video at Youku (low quality, for those who can't access 
 YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
 [5] 
 http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html

There are etherpads for suggesting cross project sessions here:
https://wiki.openstack.org/wiki/Summit/Planning
https://etherpad.openstack.org/p/kilo-crossproject-summit-topics

I am interested in comparing this to Nova's cells concept:
http://docs.openstack.org/trunk/config-reference/content/section_compute-cells.html

Cells basically scales out a single datacenter region by aggregating
multiple child Nova installations with an API cell.

Each child cell can be tested in isolation, via its own API, before
joining it up to an API cell, that adds it into the region. Each cell
logically has its own database and message queue, which helps get more
independent failure domains. You can use cell level scheduling to
restrict people or types of instances to particular subsets of the
cloud, if required.

It doesn't attempt to aggregate between regions, they are kept
independent. Except, the usual assumption that you have a common
identity between all regions.

It also keeps a single Cinder, Glance, Neutron deployment per region.

It would be great to get some help hardening, testing, and building
out more of the cells vision. I suspect we may form a new Nova subteam
to try to drive this work forward in kilo, if we can build up
enough people wanting to work on improving cells.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread joehuang
Hello, John Griffith,



Thank you very much for your fun mail. Now I see two John G's ;)


I would like to say that TripleO is the pioneer in handling the relationship 
among OpenStack instances. Cheers.


The problem domain for OpenStack cascading is multi-site / multi-vendor 
OpenStack instance integration. Based on this, a large-scale cloud can be 
distributed across many data centers, and fault isolation / troubleshooting / 
configuration changes / upgrades / patches / ... can be done separately by each 
OpenStack instance.


For example, suppose a cloud includes two data centers: vendor A sold their 
OpenStack solution in data center A, and vendor B sold theirs in data center B. 
If a critical bug is found in data center B, then vendor B is responsible for 
the bug fix and patch update. Independent OpenStack instances give clear 
responsibility boundaries, even for the integration of software and hardware.



Best Regards

Chaoyi Huang ( joehuang)





From: John Griffith [john.griff...@solidfire.com]
Sent: 01 October 2014 0:10
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading



On Tue, Sep 30, 2014 at 7:35 AM, John Garbutt 
j...@johngarbutt.commailto:j...@johngarbutt.com wrote:
On 30 September 2014 14:04, joehuang 
joehu...@huawei.commailto:joehu...@huawei.com wrote:
 Hello, Dear TC and all,

 Large cloud operators prefer to deploy multiple OpenStack instances(as 
 different zones), rather than a single monolithic OpenStack instance because 
 of these reasons:

 1) Multiple data centers distributed geographically;
 2) Multi-vendor business policy;
 3) Server nodes scale up modularly, from hundreds (00's) up to a million;
 4) Fault and maintenance isolation between zones (only REST interface);

 At the same time, they also want to integrate these OpenStack instances into 
 one cloud. Instead of proprietary orchestration layer, they want to use 
 standard OpenStack framework for Northbound API compatibility with 
 HEAT/Horizon or other 3rd ecosystem apps.

 We call this pattern as OpenStack Cascading, with proposal described by 
 [1][2]. PoC live demo video can be found[3][4].

 Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the 
 OpenStack cascading.

 Kindly ask for cross program design summit session to discuss OpenStack 
 cascading and the contribution to Kilo.

 Kindly invite those who are interested in the OpenStack cascading to work 
 together and contribute it to OpenStack.

 (I applied for “other projects” track [5], but it would be better to have a 
 discussion as a formal cross program session, because many core programs are 
 involved )


 [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
 [2] PoC source code: https://github.com/stackforge/tricircle
 [3] Live demo video at YouTube: https://www.youtube.com/watch?v=OSU6PYRz5qY
 [4] Live demo video at Youku (low quality, for those who can't access 
 YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
 [5] 
 http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html

There are etherpads for suggesting cross project sessions here:
https://wiki.openstack.org/wiki/Summit/Planning
https://etherpad.openstack.org/p/kilo-crossproject-summit-topics

I am interested in comparing this to Nova's cells concept:
http://docs.openstack.org/trunk/config-reference/content/section_compute-cells.html

Cells basically scales out a single datacenter region by aggregating
multiple child Nova installations with an API cell.

Each child cell can be tested in isolation, via its own API, before
joining it up to an API cell, that adds it into the region. Each cell
logically has its own database and message queue, which helps get more
independent failure domains. You can use cell level scheduling to
restrict people or types of instances to particular subsets of the
cloud, if required.

It doesn't attempt to aggregate between regions, they are kept
independent. Except, the usual assumption that you have a common
identity between all regions.

It also keeps a single Cinder, Glance, Neutron deployment per region.

It would be great to get some help hardening, testing, and building
out more of the cells vision. I suspect we may form a new Nova subteam
to try to drive this work forward in kilo, if we can build up
enough people wanting to work on improving cells.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Interesting idea; to be honest, when TripleO was first announced, what you have 
here is more along the lines of what I envisioned.  It seems that this would 
have some interesting wins in terms of upgrades, migrations and scaling in 
general.  Anyway, you should propose it to the etherpad

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread joehuang
Hello, Joe,



Thank you for your encouragement and good suggestion. That means this thread is 
a good start.


So if anyone has any doubts about OpenStack cascading, please follow this 
thread, so that we can collect everything that cannot be resolved in the mail 
and then discuss it in the design summit session.



Best Regards



Chaoyi Huang ( joehuang )




From: Joe Gordon [joe.gord...@gmail.com]
Sent: 01 October 2014 2:06
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading



On Tue, Sep 30, 2014 at 6:04 AM, joehuang 
joehu...@huawei.commailto:joehu...@huawei.com wrote:
Hello, Dear TC and all,

Large cloud operators prefer to deploy multiple OpenStack instances(as 
different zones), rather than a single monolithic OpenStack instance because of 
these reasons:

1) Multiple data centers distributed geographically;
2) Multi-vendor business policy;
3) Server nodes scale up modularly, from hundreds (00's) up to a million;
4) Fault and maintenance isolation between zones (only REST interface);

At the same time, they also want to integrate these OpenStack instances into 
one cloud. Instead of proprietary orchestration layer, they want to use 
standard OpenStack framework for Northbound API compatibility with HEAT/Horizon 
or other 3rd ecosystem apps.

We call this pattern as OpenStack Cascading, with proposal described by 
[1][2]. PoC live demo video can be found[3][4].

Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the 
OpenStack cascading.

Kindly ask for cross program design summit session to discuss OpenStack 
cascading and the contribution to Kilo.

Cross program design summit sessions should be used for things that we are 
unable to make progress on via this mailing list, and not as a way to begin new 
conversations. With that in mind, I think this thread is a good place to get 
initial feedback on the idea and possibly make a plan for how to tackle this.


Kindly invite those who are interested in the OpenStack cascading to work 
together and contribute it to OpenStack.

(I applied for “other projects” track [5], but it would be better to have a 
discussion as a formal cross program session, because many core programs are 
involved )


[1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
[2] PoC source code: https://github.com/stackforge/tricircle
[3] Live demo video at YouTube: https://www.youtube.com/watch?v=OSU6PYRz5qY
[4] Live demo video at Youku (low quality, for those who can't access 
YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
[5] http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html

Best Regards
Chaoyi Huang ( Joe Huang )
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread joehuang
Hello, Joshua,

Thank you very much for your deep thinking.

1. Quite different from Cells. Let me copy the content from my mail to John 
Garbutt:

The major difference between Cells and OpenStack cascading is the problem 
domain:
OpenStack cascading: to integrate multi-site / multi-vendor OpenStack instances 
into one cloud with the OpenStack API exposed.
Cells: a scale-up methodology within a single OpenStack instance.

2. For quota, it is controlled by the cascading OpenStack (the parent 
OpenStack), because the cascading OpenStack holds all the logical objects.

3. Race condition: what is the concrete race condition issue?

4. Inconsistency. Because there is a UUID mapping between objects in the 
cascading OpenStack and the cascaded OpenStacks, tracking consistency is 
feasible and straightforward, although we did not implement it in the PoC 
source code (see the sketch after this list).

5. On "I'd rather stick with the less scalable distributed system we have": no 
conflict here; whether or not OpenStack cascading is introduced, we need a 
solid, stable and scalable OpenStack.

6. On "How I imagine this working out (in my view)": all these things are good, 
and I like them too.
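
To illustrate point 4, the mapping is conceptually just a small table (a toy 
Python sketch; all names here are hypothetical):

    # parent-side uuid -> (POD name, child-side uuid)
    casc_map = {}

    def record(parent_uuid, pod, child_uuid):
        casc_map[parent_uuid] = (pod, child_uuid)

    def to_child(parent_uuid):
        # consistency tracking walks this mapping in both directions
        return casc_map[parent_uuid]

    record("p-123", "pod-a", "c-456")
    assert to_child("p-123") == ("pod-a", "c-456")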

Best Regards

Chaoyi Huang ( joehuang )


From: Joshua Harlow [harlo...@outlook.com]
Sent: 01 October 2014 3:17
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
OpenStack cascading

So this does seem a lot like cells, but makes cells appear in the other projects.

IMHO the same problems that occur in cells appear here in that we are 
sacrificing consistency of the already-problematic systems that exist today 
to gain scale (and to gain more inconsistency). Every time I see a 'the parent 
OpenStack manage many child OpenStacks by using standard OpenStack API' in that 
wiki I wonder how the parent will resolve inconsistencies that exist in 
children (likely it can't). How do quotas work across parent/children, how do 
race conditions get resolved...

IMHO I'd rather stick with the less scalable distributed system we have, iron 
out its quirks, fix the quota (via whatever that project is named now), split 
out the nova/... drivers so they can be maintainable in various projects, fix 
the various already inconsistent state machines that exist, split out the 
scheduler into its own project so that can be shared... All of the mentioned 
things improve scale and improve tolerance to individual failures rather than 
create a whole new level of 'pain' via a tightly bound set of proxies, 
cascading hierarchies... Managing this whole set of cascading clusters and such 
also would seem to be an operational management nightmare that I'm not sure is 
justified at the current time being (when operators already have enough trouble 
with the current code bases).

How I imagine this working out (in my view):

* Split out the shared services (gantt, scheduler, quotas...) into real SOA 
services that everyone can use.
* Have cinder-api, nova-api, neutron-api integrate with the split out services 
to obtain consistent views of the world when performing API operations.
* Have cinder, nova, neutron provide 'workers' (nova-compute is a basic worker) 
that can be scaled out across all your clusters and interconnected to a type of 
conductor node in some manner (mq?), and have the outcome of cinder-api, 
nova-api, neutron-api be a workflow that some service (conductor/s?) ensures 
occurs reliably (or aborts). This makes it so that cinder-api, nova-api... can 
scale at will, conductors can scale at will and so can worker nodes...
* Profit!

TL;DR: It would seem like this adds more complexity, not less, and I'm not sure 
complexity is what OpenStack needs more of right now...

-Josh

On Sep 30, 2014, at 6:04 AM, joehuang joehu...@huawei.com wrote:

 Hello, Dear TC and all,

 Large cloud operators prefer to deploy multiple OpenStack instances(as 
 different zones), rather than a single monolithic OpenStack instance because 
 of these reasons:

 1) Multiple data centers distributed geographically;
 2) Multi-vendor business policy;
 3) Server nodes scale up modularly, from hundreds (00's) up to a million;
 4) Fault and maintenance isolation between zones (only REST interface);

 At the same time, they also want to integrate these OpenStack instances into 
 one cloud. Instead of proprietary orchestration layer, they want to use 
 standard OpenStack framework for Northbound API compatibility with 
 HEAT/Horizon or other 3rd ecosystem apps.

 We call this pattern as OpenStack Cascading, with proposal described by 
 [1][2]. PoC live demo video can be found[3][4].

 Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the 
 OpenStack cascading.

 Kindly ask for cross program design summit session to discuss OpenStack 
 cascading and the contribution to Kilo.

 Kindly invite those who are interested in the OpenStack cascading to work 
 together and contribute it to OpenStack.

 (I applied for “other projects

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread joehuang
Hello, Andrew and Tim,

I understand CERN has a Cells installation and that there is a subteam working 
to solve the Cells challenges.

I copy my reply to John Garbutt to clarify the difference:

The major difference between Cells and OpenStack cascading is the problem 
domain:
OpenStack cascading: to integrate multi-site / multi-vendor OpenStack instances 
into one cloud with the OpenStack API exposed.
Cells: a scale-up methodology within a single OpenStack instance.
Therefore, there is no conflict between Cells and OpenStack cascading. They can 
be used for different scenarios, and Cells can also be used as the cascaded 
OpenStack (the child OpenStack).

And OpenStack cascading also provides cross-data-center L2/L3 networking for a 
tenant.

The flavor, server group (host aggregate?) and security group issues (the 
concrete problem is not yet clear to me) could be solved in the OpenStack 
cascading solution from an architecture point of view.

Best Regards

Chaoyi Huang ( joehuang )

From: Andrew Laski [andrew.la...@rackspace.com]
Sent: 01 October 2014 3:49
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading

On 09/30/2014 03:07 PM, Tim Bell wrote:
 -Original Message-
 From: John Garbutt [mailto:j...@johngarbutt.com]
 Sent: 30 September 2014 15:35
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack
 cascading

 On 30 September 2014 14:04, joehuang joehu...@huawei.com wrote:
 Hello, Dear TC and all,

 Large cloud operators prefer to deploy multiple OpenStack instances(as
 different zones), rather than a single monolithic OpenStack instance because 
 of
 these reasons:
 1) Multiple data centers distributed geographically;
 2) Multi-vendor business policy;
 3) Server nodes scale up modularly, from hundreds (00's) up to a million;
 4) Fault and maintenance isolation between zones (only REST
 interface);

 At the same time, they also want to integrate these OpenStack instances into
 one cloud. Instead of proprietary orchestration layer, they want to use 
 standard
 OpenStack framework for Northbound API compatibility with HEAT/Horizon or
 other 3rd ecosystem apps.
 We call this pattern as OpenStack Cascading, with proposal described by
 [1][2]. PoC live demo video can be found[3][4].
 Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the
 OpenStack cascading.
 Kindly ask for cross program design summit session to discuss OpenStack
 cascading and the contribution to Kilo.
 Kindly invite those who are interested in the OpenStack cascading to work
 together and contribute it to OpenStack.
 (I applied for “other projects” track [5], but it would be better to
 have a discussion as a formal cross program session, because many core
 programs are involved )


 [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
 [2] PoC source code: https://github.com/stackforge/tricircle
 [3] Live demo video at YouTube:
 https://www.youtube.com/watch?v=OSU6PYRz5qY
 [4] Live demo video at Youku (low quality, for those who can't access
 YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
 [5]
 http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395
 .html
 There are etherpads for suggesting cross project sessions here:
 https://wiki.openstack.org/wiki/Summit/Planning
 https://etherpad.openstack.org/p/kilo-crossproject-summit-topics

 I am interested in comparing this to Nova's cells concept:
 http://docs.openstack.org/trunk/config-reference/content/section_compute-
 cells.html

 Cells basically scales out a single datacenter region by aggregating 
 multiple child
 Nova installations with an API cell.

 Each child cell can be tested in isolation, via its own API, before joining 
 it up to
 an API cell, that adds it into the region. Each cell logically has its own 
 database
 and message queue, which helps get more independent failure domains. You can
 use cell level scheduling to restrict people or types of instances to 
 particular
 subsets of the cloud, if required.

 It doesn't attempt to aggregate between regions, they are kept independent.
 Except, the usual assumption that you have a common identity between all
 regions.

 It also keeps a single Cinder, Glance, Neutron deployment per region.

 It would be great to get some help hardening, testing, and building out more 
 of
 the cells vision. I suspect we may form a new Nova subteam to try to 
 drive
 this work forward in kilo, if we can build up enough people wanting to work 
 on
 improving cells.

 At CERN, we've deployed cells at scale but are finding a number of 
 architectural issues that need resolution in the short term to attain feature 
 parity. A vision of "we all run cells, but some of us have only one" is not 
 there yet. Typical examples are flavors, security groups and server groups, 
 all of which are not yet implemented to the necessary levels

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread joehuang
Hello, Adam,



Nice post. With Keystone federation and multiple signers, plus OpenStack 
cascading, it would be possible to deliver a hybrid cloud in which both the 
private cloud and the public cloud are built upon OpenStack instances.



It would be a great picture.



Best Regards



Chaoyi Huang ( joehuang )




From: Adam Young [ayo...@redhat.com]
Sent: 01 October 2014 4:25
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading

On 09/30/2014 12:10 PM, John Griffith wrote:


On Tue, Sep 30, 2014 at 7:35 AM, John Garbutt 
j...@johngarbutt.commailto:j...@johngarbutt.com wrote:
On 30 September 2014 14:04, joehuang 
joehu...@huawei.commailto:joehu...@huawei.com wrote:
 Hello, Dear TC and all,

 Large cloud operators prefer to deploy multiple OpenStack instances(as 
 different zones), rather than a single monolithic OpenStack instance because 
 of these reasons:

 1) Multiple data centers distributed geographically;
 2) Multi-vendor business policy;
 3) Server nodes scale up modularly, from hundreds (00's) up to a million;
 4) Fault and maintenance isolation between zones (only REST interface);

 At the same time, they also want to integrate these OpenStack instances into 
 one cloud. Instead of proprietary orchestration layer, they want to use 
 standard OpenStack framework for Northbound API compatibility with 
 HEAT/Horizon or other 3rd ecosystem apps.

 We call this pattern as OpenStack Cascading, with proposal described by 
 [1][2]. PoC live demo video can be found[3][4].

 Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the 
 OpenStack cascading.

 Kindly ask for cross program design summit session to discuss OpenStack 
 cascading and the contribution to Kilo.

 Kindly invite those who are interested in the OpenStack cascading to work 
 together and contribute it to OpenStack.

 (I applied for “other projects” track [5], but it would be better to have a 
 discussion as a formal cross program session, because many core programs are 
 involved )


 [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
 [2] PoC source code: https://github.com/stackforge/tricircle
 [3] Live demo video at YouTube: https://www.youtube.com/watch?v=OSU6PYRz5qY
 [4] Live demo video at Youku (low quality, for those who can't access 
 YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
 [5] 
 http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html

There are etherpads for suggesting cross project sessions here:
https://wiki.openstack.org/wiki/Summit/Planning
https://etherpad.openstack.org/p/kilo-crossproject-summit-topics

I am interested in comparing this to Nova's cells concept:
http://docs.openstack.org/trunk/config-reference/content/section_compute-cells.html

Cells basically scales out a single datacenter region by aggregating
multiple child Nova installations with an API cell.

Each child cell can be tested in isolation, via its own API, before
joining it up to an API cell, that adds it into the region. Each cell
logically has its own database and message queue, which helps get more
independent failure domains. You can use cell level scheduling to
restrict people or types of instances to particular subsets of the
cloud, if required.

It doesn't attempt to aggregate between regions, they are kept
independent. Except, the usual assumption that you have a common
identity between all regions.

It also keeps a single Cinder, Glance, Neutron deployment per region.


I'm starting on work to support a comparable mechanism to share data between 
Keystone servers.

http://adam.younglogic.com/2014/09/multiple-signers/


It would be great to get some help hardening, testing, and building
out more of the cells vision. I suspect we may form a new Nova subteam
to try to drive this work forward in kilo, if we can build up
enough people wanting to work on improving cells.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Interesting idea; to be honest, when TripleO was first announced, what you have 
here is more along the lines of what I envisioned.  It seems that this would 
have some interesting wins in terms of upgrades, migrations and scaling in 
general.  Anyway, you should propose it to the etherpad as John G ( the other 
John G :) ) recommended; I'd love to dig deeper into this.






Thanks,
John




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread Alex Glikson
This sounds related to the discussion on the 'Nova clustered hypervisor 
driver' which started at the Juno design summit [1]. Talking to another 
OpenStack should be similar to talking to vCenter. The idea was that the 
Cells support could be refactored around this notion as well. 
Not sure whether there has been any active progress on this in Juno, though.
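
In code terms, the idea is a Nova virt driver whose "hypervisor" is a whole 
child OpenStack, driven over its REST API (a rough sketch; the class itself is 
hypothetical, though nova.virt.driver.ComputeDriver and its spawn() hook are 
real):

    from nova.virt import driver

    class ChildCloudDriver(driver.ComputeDriver):
        """Hypothetical driver that proxies to a child OpenStack."""

        def spawn(self, context, instance, image_meta, injected_files,
                  admin_password, network_info=None,
                  block_device_info=None):
            # translate and replay the boot request against the child
            # cloud's Nova API, much as the vCenter driver talks to
            # vSphere
            raise NotImplementedError()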

Regards,
Alex


[1] 
http://junodesignsummit.sched.org/event/a0d38e1278182eb09f06e22457d94c0c#
[2] 
https://etherpad.openstack.org/p/juno-nova-clustered-hypervisor-support




From:   joehuang joehu...@huawei.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date:   30/09/2014 04:08 PM
Subject:[openstack-dev] [all] [tc] Multi-clouds integration by 
OpenStack cascading



Hello, Dear TC and all, 

Large cloud operators prefer to deploy multiple OpenStack instances(as 
different zones), rather than a single monolithic OpenStack instance 
because of these reasons:
 
1) Multiple data centers distributed geographically;
2) Multi-vendor business policy;
3) Server nodes scale up modularly, from hundreds (00's) up to a million;
4) Fault and maintenance isolation between zones (only REST interface);
 
At the same time, they also want to integrate these OpenStack instances 
into one cloud. Instead of proprietary orchestration layer, they want to 
use standard OpenStack framework for Northbound API compatibility with 
HEAT/Horizon or other 3rd ecosystem apps.
 
We call this pattern as OpenStack Cascading, with proposal described by 
[1][2]. PoC live demo video can be found[3][4].
 
Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in 
the OpenStack cascading. 
 
Kindly ask for cross program design summit session to discuss OpenStack 
cascading and the contribution to Kilo. 

Kindly invite those who are interested in the OpenStack cascading to work 
together and contribute it to OpenStack. 
 
(I applied for “other projects” track [5], but it would be better to 
have a discussion as a formal cross program session, because many core 
programs are involved )
 
 
[1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
[2] PoC source code: https://github.com/stackforge/tricircle
[3] Live demo video at YouTube: 
https://www.youtube.com/watch?v=OSU6PYRz5qY
[4] Live demo video at Youku (low quality, for those who can't access 
YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
[5] 
http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html

 
Best Regards
Chaoyi Huang ( Joe Huang )
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev