Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-02-21 Thread Ishimoto, Ryu
Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-02-15 Thread Romain Lenglet
Hi Erik,

Thanks for your comments.

There doesn't seem to be a consensus on using a core API + extensions vs.
multiple APIs.
Anyway, I don't see any issues with specifying a core API for network
services, and a core API for network agents, corresponding exactly to
NTT's Ishii-san's generic APIs, and specifying all the non-generic,
plugin-specific operations in extensions.
If the norm becomes to have a core API + extensions, then the network
service spec will be modified to follow that norm. No problem.

The important point we need to agree on is what goes into the API, and what
goes into extensions.

Let me rephrase the criteria that I proposed, using the API and
extensions terms:
1) any operation called by the compute service (Nova) directly MUST be
specified in the API;
2) any operation called by users / admin tools MAY be specified in the API,
but not necessarily;
3) any operation specified in the API MUST be independent of the details of
specific network service plugins (e.g. specific network models, specific
supported protocols, etc.), i.e. that operation can be supported by every
network service plugin imaginable, which means that:
4) any operation that cannot be implemented by all plugins MUST be specified
in an extension, i.e. if one comes up with a counter-example plugin that
cannot implement that operation, then the operation cannot be specified in
the API and MUST be specified in an extension.
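To make criteria 3) and 4) concrete, here is a small hypothetical Python
sketch (all class and method names are illustrative, not part of any
proposal): core operations are abstract methods every plugin must implement,
while a plugin-specific operation such as a VLAN ID setter is exposed only
through a named extension.

```python
# Hypothetical illustration of criteria 3) and 4): operations every
# plugin can implement go in the core API; anything plugin-specific
# is exposed as a named extension instead of widening the core API.
from abc import ABC, abstractmethod


class NetworkPlugin(ABC):
    """Core API: every network service plugin must implement these."""

    @abstractmethod
    def create_network(self, project_id):
        """Create a logical network and return its ID."""

    @abstractmethod
    def destroy_network(self, network_id):
        """Destroy the logical network with the given ID."""

    def supported_extensions(self):
        """Extensions offered beyond the core API (none by default)."""
        return {}


class VlanPlugin(NetworkPlugin):
    """Toy plugin that additionally exposes a 'vlan' extension."""

    def __init__(self):
        self._next_id = 0
        self.networks = {}  # network_id -> (project_id, vlan_id or None)

    def create_network(self, project_id):
        self._next_id += 1
        self.networks[self._next_id] = (project_id, None)
        return self._next_id

    def destroy_network(self, network_id):
        del self.networks[network_id]

    def supported_extensions(self):
        # set_vlan_id cannot be implemented by e.g. a flat-network
        # plugin, so per criterion 4) it belongs in an extension.
        return {"vlan": {"set_vlan_id": self.set_vlan_id}}

    def set_vlan_id(self, network_id, vlan_id):
        project_id, _ = self.networks[network_id]
        self.networks[network_id] = (project_id, vlan_id)
```

A caller that uses only the core API works against any plugin; a caller that
needs VLANs first checks supported_extensions().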

Do we agree on those criteria?

I think Ishii-san's proposal meets those criteria.
Do you see any issues with Ishii-san's proposal regarding the split between
core operations and extension operations?
If you think that some operations that are currently defined as extensions
in Ishii-san's proposal should be in the API, I'll be happy to try to give
counter-examples of network service plugins that can't implement them. :)

Regards,
--
Romain Lenglet


2011/2/16 Erik Carlin erik.car...@rackspace.com

  My understanding is that we want a single, canonical OS network service
 API.  That API can then be implemented by different service engines on
 that back end via a plug-in/driver model.  The way additional features are
 added to the canonical API that may not be core or for widespread adoption
 (e.g. something vendor specific) is via extensions.  You can take a look at
 the proposed OS compute API spec (http://wiki.openstack.org/OpenStackAPI_1-1)
 to see how extensions are implemented there.  Also, Jorge Williams has done
 a good write-up of the concept here:
 http://wiki.openstack.org/JorgeWilliams?action=AttachFile&do=view&target=Extensions.pdf

  Erik

   From: Romain Lenglet rom...@midokura.jp
 Date: Tue, 15 Feb 2011 17:03:57 +0900
 To: 石井 久治 ishii.hisah...@lab.ntt.co.jp
 Cc: openstack@lists.launchpad.net

 Subject: Re: [Openstack] Network Service for L2/L3 Network Infrastructure
 blueprint

   Hi Ishii-san,

 On Tuesday, February 15, 2011 at 16:28, 石井 久治 wrote:

  Hello Hiroshi-san

  Do you mean that the former API is an interface that is
  defined in OpenStack project, and the latter API is
  a vendor specific API?
  My understanding is that yes, that's what he means.

 I also think so.

 In addition, I feel the open issue is which network functions should be
 defined in the generic API, and which network functions should be defined in
 plugin-specific APIs.
 What do you think?

 I propose to apply the following criteria to determine which operations
 belong to the generic API:
 - any operation called by the compute service (Nova) directly MUST belong
 to the generic API;
 - any operation called by users (REST API, etc.) MAY belong to the generic
 API;
 - any operation belonging to the generic API MUST be independent of the
 details of specific network service plugins (e.g. specific network models,
 specific supported protocols, etc.), i.e. the operation can be supported by
 every network service plugin imaginable, which means that if one can come up
 with a counter-example plugin that cannot implement that operation, then the
 operation cannot belong to the generic API.

  How about that?

  Regards,
 --
 Romain Lenglet



 Thanks
 Hisaharu Ishii


 (2011/02/15 16:18), Romain Lenglet wrote:

 Hi Hiroshi,
 On Tuesday, February 15, 2011 at 15:47, Hiroshi DEMPO wrote:
 Hello Hisaharu san


 I am not sure about the differences between generic network API and
 plugin X specific network service API.

 Do you mean that the former API is an interface that is
 defined in OpenStack project, and the latter API is
 a vendor specific API?


 My understanding is that yes, that's what he means.

 --
 Romain Lenglet



 Thanks
 Hiroshi

  -Original Message-
 From: openstack-bounces+dem=ah.jp.nec@lists.launchpad.net
 [mailto:openstack-bounces+dem=ah.jp.nec@lists.launchpad.net]
 On Behalf Of 石井 久治
 Sent: Thursday, February 10, 2011 8:48 PM
 To: openstack@lists.launchpad.net
 Subject: Re: [Openstack] Network Service for L2/L3 Network
 Infrastructure blueprint

 Hi, all

 As we have said before, we


Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-02-15 Thread Hiroshi DEMPO
Hisaharu-san, Romain and Erik,

Thank you for your replies. I will refer to the doc Erik has given us.

Initially, there would be several ways to define a set of generic APIs.
My idea is to make categories so that we have an overview. Each category,
for example, would be linked to a use case in the blueprint. Then
we can go down to the details in each category.

As for general criteria,

 - any operation called by the compute service (Nova) directly 
 MUST belong to the generic API;

I have the same understanding, because the generic APIs drawn
with green boxes are called by the Nova APIs.

Hiroshi


Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-02-10 Thread Salvatore Orlando
Hi Hisaharu,

Thanks for sharing this design proposal and the POC code.
I will have a look at the code as soon as possible.
At a first glance, I think the design that you are proposing is in line with 
the goals of the network service blueprint 
(http://wiki.openstack.org/NetworkService).

If I got your design right, the network managers in the current nova 
implementation (Flat & VLAN) will become plugins. And a plugin can be divided 
into a management component (the one which runs on the network node) and an 
agent (the component running on the compute node). Is that correct?
Also, from your design it seems it should be possible to have different plugins 
running together in the same deployment. This makes a lot of sense to me, and 
IMHO, implies that there should be an association between the network entity 
and the plugin type. When a network is created, the user should be allowed to 
specify which type of plugin should be handling that network. For this reason I 
think maybe the create_network API should accept the type of plugin as an 
optional parameter, in order to route the request to the appropriate network 
node. If no parameter is provided then the request would be routed to a 
'default' network node.
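A minimal sketch of that routing idea, with hypothetical names
(NetworkService, register(), and the plugin_type parameter are all
assumptions for illustration, not an agreed design):

```python
# Hypothetical sketch: create_network() takes an optional plugin type
# and routes the request to the matching plugin/network node, falling
# back to a configured default when none is given.
class NetworkService:
    def __init__(self, default_plugin):
        self._plugins = {default_plugin.name: default_plugin}
        self._default = default_plugin.name

    def register(self, plugin):
        self._plugins[plugin.name] = plugin

    def create_network(self, project_id, plugin_type=None):
        # No plugin_type -> route to the 'default' network node.
        plugin = self._plugins[plugin_type or self._default]
        return plugin.create_network(project_id)


class StubPlugin:
    """Stand-in for a real plugin's management component."""

    def __init__(self, name):
        self.name = name

    def create_network(self, project_id):
        return "%s-net-for-%s" % (self.name, project_id)
```

With a "flat" default registered, an unqualified request goes to the flat
plugin, while passing plugin_type="vlan" routes to the VLAN plugin.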
I also noticed you are introducing the concepts of logical switch and 
virtual port. While I totally agree on the logical switch concept, I'm not 
totally sure about the virtual port concept. Do we really need it? Wouldn't it 
be easier to have a model in which VIFs are directly connected to logical 
switches and virtual ports are implicitly assigned?

Finally, have you already got some design ideas concerning how to provide L4/L7 
services (for instance firewall, DHCP, DNS, load balancing, etc.) to nova 
networks?

Cheers, 
Salvatore

-Original Message-
From: openstack-bounces+salvatore.orlando=eu.citrix@lists.launchpad.net 
[mailto:openstack-bounces+salvatore.orlando=eu.citrix@lists.launchpad.net] 
On Behalf Of 石井 久治
Sent: 10 February 2011 11:48
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] Network Service for L2/L3 Network Infrastructure 
blueprint

Hi, all

As we have said before, we have started designing and writing POC codes of 
network service.

 - I know that there were several documents on the new network
   service issue that were locally exchanged so far.
   Why not collect them into one place and share them publicly?

Based on these documents, I created an image of the implementation (attached), 
and I propose the following set of methods as the generic network service APIs.
- create_vnic(): vnic_id
   Create a VNIC and return the ID of the created VNIC.
- list_vnics(vm_id): [vnic_id]
   Return the list of vnic_id, where vnic_id is the ID of a VNIC.
- destroy_vnic(vnic_id)
   Remove a VNIC from its VM, given its ID, and destroy it.
- plug(vnic_id, port_id)
   Plug the VNIC with ID vnic_id into the port with ID port_id managed by 
this network service.
- unplug(vnic_id)
   Unplug the VNIC from its port, previously plugged by calling plug().
- create_network(): network_id
  Create a new logical network.
- list_networks(project_id): [network_id]
  Return the list of logical networks available for project with ID 
project_id.
- destroy_network(network_id)
  Destroy the logical network with ID network_id.
- create_port(network_id): port_id
  Create a port in the logical network with ID network_id, and return the 
port's ID.
- list_ports(network_id): [port_id]
  Return the list of IDs of ports in a network given its ID.
- destroy_port(port_id)
  Destroy port with ID port_id.

This design is a first draft.
So we would appreciate it if you would give us some comments.
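As a sketch only, the method list above could be realized as the following
in-memory toy implementation. The class name, the integer ID scheme, and the
vm_id argument to create_vnic() are assumptions: the proposal's create_vnic()
takes no arguments, but the toy needs the VM association so that
list_vnics() has something to return.

```python
# Toy in-memory realization of the proposed generic network service
# API; purely illustrative, not the NTT POC code.
import itertools


class InMemoryNetworkService:
    def __init__(self):
        self._ids = itertools.count(1)
        self._vnics = {}     # vnic_id -> vm_id
        self._networks = {}  # network_id -> project_id
        self._ports = {}     # port_id -> (network_id, vnic_id or None)

    def create_vnic(self, vm_id):
        vnic_id = next(self._ids)
        self._vnics[vnic_id] = vm_id
        return vnic_id

    def list_vnics(self, vm_id):
        return [v for v, vm in self._vnics.items() if vm == vm_id]

    def destroy_vnic(self, vnic_id):
        self.unplug(vnic_id)          # detach from any port first
        del self._vnics[vnic_id]

    def plug(self, vnic_id, port_id):
        network_id, _ = self._ports[port_id]
        self._ports[port_id] = (network_id, vnic_id)

    def unplug(self, vnic_id):
        for port_id, (network_id, v) in self._ports.items():
            if v == vnic_id:
                self._ports[port_id] = (network_id, None)

    def create_network(self, project_id):
        network_id = next(self._ids)
        self._networks[network_id] = project_id
        return network_id

    def list_networks(self, project_id):
        return [n for n, p in self._networks.items() if p == project_id]

    def destroy_network(self, network_id):
        del self._networks[network_id]

    def create_port(self, network_id):
        port_id = next(self._ids)
        self._ports[port_id] = (network_id, None)
        return port_id

    def list_ports(self, network_id):
        return [p for p, (n, _) in self._ports.items() if n == network_id]

    def destroy_port(self, port_id):
        del self._ports[port_id]
```

Even as a toy, writing it out this way exposes questions like the one above
about how a VNIC learns its VM association.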

In parallel with it, we are writing POC codes and uploading it to 
lp:~ntt-pf-lab/nova/network-service.

Thanks,
Hisaharu Ishii



Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-02-03 Thread Jay Pipes
On Thu, Feb 3, 2011 at 8:46 AM, Ewan Mellor ewan.mel...@eu.citrix.com wrote:
 -Original Message-
 From: Jay Pipes [mailto:jaypi...@gmail.com]
 Sent: 03 February 2011 13:40
 To: Armando Migliaccio
 Cc: Ewan Mellor; Andy Smith; Rick Clark; Søren Hansen;
 openstack@lists.launchpad.net
 Subject: Re: [Openstack] Network Service for L2/L3 Network
 Infrastructure blueprint

 On Thu, Feb 3, 2011 at 4:28 AM, Armando Migliaccio
 armando.migliac...@eu.citrix.com wrote:
  I second what Ewan said about the coding style in nova.virt.xenapi. I
 was
  responsible for part of refactoring and I am no longer fond of it
 either. I
  still think that it was good to break xenapi.py down as we did, but
 with
  hindsight I would like to revise some of the choices made, and make
 the code
  a bit more Pythonic.

 Nothing wrong with proposing for merging a branch that does
 refactoring. It doesn't need to be tied to a bug or blueprint, but if
 you wait until late in the Cactus cycle, it would have a smaller
 chance of making it into Cactus since the priority is not refactoring
 but instead stability and feature parity.

 So, nothing stopping anyone from proposing refactoring branches.  :)

 Absolutely not, as long as we're not trying to merge conflicting branches.  
 That was the problem last time -- I18N and the logging changes in particular 
 were such pervasive pieces of work that it was hard work merging all the 
 time.  Hopefully we won't see the likes of those again for a little while!

Hehe, understood. I did 6 or 7 merge trunks while dealing with i18n,
so I feel you :)  But, luckily, we don't look to have any of those
super-invasive blueprints on deck for Cactus...but you never know ;)

-jay

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-02-03 Thread Ewan Mellor
 -Original Message-
 From: openstack-bounces+ewan.mellor=citrix@lists.launchpad.net
 [mailto:openstack-bounces+ewan.mellor=citrix@lists.launchpad.net]
 On Behalf Of Ed Leafe
 Sent: 03 February 2011 14:18
 To: openstack@lists.launchpad.net
 Subject: Re: [Openstack] Network Service for L2/L3 Network
 Infrastructure blueprint
 
 On Feb 3, 2011, at 8:52 AM, Jay Pipes wrote:
 
  Absolutely not, as long as we're not trying to merge conflicting
 branches.  That was the problem last time -- I18N and the logging
 changes in particular were such pervasive pieces of work that it was
 hard work merging all the time.  Hopefully we won't see the likes of
 those again for a little while!
 
  Hehe, understood. I did 6 or 7 merge trunks while dealing with i18n,
  so I feel you :)  But, luckily, we don't look to have any of those
  super-invasive blueprints on deck for Cactus...but you never know ;)
 
 
   Is there any proscription about merging a partial change? IOW, if
 something like the logging change affected 100 files, would it be
 acceptable to merge, say, 20 at a time? As long as tests continue to
 pass, of course, and the merge prop is labeled as a partial
 implementation, and everything else continues to work without problem.
 This way any individual merge will only conflict with a few branches,
 while huge mega-merges will conflict with just about everything.

I'd much rather do small merges.  I'm a commit-once, commit-often man, for 
exactly this reason.

I think the objection was that it would be difficult to peer-review stuff if it 
was coming in piecemeal, because you don't get to see the big picture of the 
change.  That's a reasonable comment, and commit-once, commit-often does rely 
on the fact that you trust the person making all those commits.

Maybe we should do big-picture merges normally, but have an exception 
procedure for when we'd like them piecemeal.

Ewan.




Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-02-03 Thread Ed Leafe
On Feb 3, 2011, at 9:33 AM, Ewan Mellor wrote:

 Maybe we should normally do big-picture merges normally, but have an 
 exception procedure for when we'd like them piecemeal.


I think the main differentiator should be if the partial merge can 
stand on its own. IOW, with something like the logging change, the first merge 
would include any code to support the new logging changes, but after that, 
going through all the files that need to be updated to use this change is 
mainly exhaustive busy work. Effectively, the single project could be merged as:

a) add changed logging code
b) change a bunch of files to use new logging
c) change a bunch more files to use new logging
d) change a bunch more files to use new logging
... repeat in bite-size chunks
z) final commit of changes.

This way, at any point, everything will continue to work; the only 
problem is that some code will be using the new logging, and other code will 
continue to use the old logging. 

The main advantage is that reviewers will avoid having to wade through 
miles-long diffs. Having to go through extremely long diffs increases 
the chance that a reviewer's eyes will miss a typo or some other problem.



-- Ed Leafe






Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-02-03 Thread Jay Pipes
++ on all below points :) There are few bugs or blueprints that
actually require a huge patch. Most things can be done in small
chunks, making sure each chunk doesn't break tests...

-jay

On Thu, Feb 3, 2011 at 10:03 AM, Ed Leafe e...@leafe.com wrote:
 On Feb 3, 2011, at 9:33 AM, Ewan Mellor wrote:

 Maybe we should normally do big-picture merges normally, but have an 
 exception procedure for when we'd like them piecemeal.


        I think the main differentiator should be if the partial merge can 
 stand on its own. IOW, with something like the logging change, the first 
 merge would include any code to support the new logging changes, but after 
 that, going through all the files that need to be updated to use this change 
 is mainly exhaustive busy work. Effectively, the single project could be 
 merged as:

 a) add changed logging code
 b) change a bunch of files to use new logging
 c) change a bunch more files to use new logging
 d) change a bunch more files to use new logging
 ... repeat in bite-size chunks
 z) final commit of changes.

        This way, at any point, everything will continue to work; the only 
 problem is that some code will be using the new logging, and other will 
 continue to use the old logging.

        The main advantage is that reviewers will avoid having to wade through 
 miles-long diffs. Having to go through diffs that are extremely long 
 increases the chance that the reviewers eyes may miss a typo or some other 
 problem.



 -- Ed Leafe








Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-02-02 Thread Koji IIDA
Hi, all


We, NTT PF Lab., also agree to discuss the network service at the
Diablo DS.

However, we would really like to include the network service in the Diablo
release, because our customers strongly demand this feature.  We also think
that it is quite important to merge the new network service to trunk
soon after the Diablo DS so that every developer can contribute their effort
based on the new code.

We are planning to provide source code for network service in a couple
of weeks.  We would appreciate it if you would review it and give us
some feedback before the next design summit.

Ewan, thanks for making a new entry on the wiki page (*1). We will also
post our comments soon.

(*1) http://wiki.openstack.org/NetworkService


Thanks,
Koji Iida


(2011/01/31 21:19), Ewan Mellor wrote:
 I will collect the documents together as you suggest, and I agree that we 
 need to get the requirements laid out again.
 
 Please subscribe to the blueprint on Launchpad -- that way you will be 
 notified of updates.
 
 https://blueprints.launchpad.net/nova/+spec/bexar-network-service
 
 Thanks,
 
 Ewan.
 
 -Original Message-
 From: openstack-bounces+ewan.mellor=citrix@lists.launchpad.net
 [mailto:openstack-bounces+ewan.mellor=citrix@lists.launchpad.net]
 On Behalf Of Masanori ITOH
 Sent: 31 January 2011 10:31
 To: openstack@lists.launchpad.net
 Subject: Re: [Openstack] Network Service for L2/L3 Network
 Infrastructure blueprint

 Hello,

 We, NTT DATA, also agree with majority of folks.
 It's realistic shooting for the Diablo time frame to have
 the new network service.

 Here are my suggestions:

  - I know that there were several documents on the new network service
 issue
that were locally exchanged so far.
Why not collecting them into one place and share them publicly?

  - I know that the discussion went into a bit implementation details.
But now, what about starting the discussion from the higher level
design things (again)?  Especially, from the requirements level.

 Any thoughts?

 Masanori


 From: John Purrier j...@openstack.org
 Subject: Re: [Openstack] Network Service for L2/L3 Network
 Infrastructure blueprint
 Date: Sat, 29 Jan 2011 06:06:26 +0900

 You are correct, the networking service will be more complex than the
 volume
 service. The existing blueprint is pretty comprehensive, not only
 encompassing the functionality that exists in today's network service
 in
 Nova, but also forward looking functionality around flexible
 networking/openvswitch and layer 2 network bridging between cloud
 deployments.

 This will be a longer term project and will serve as the bedrock for
 many
 future OpenStack capabilities.

 John

 -Original Message-
 From: openstack-bounces+john=openstack@lists.launchpad.net
 [mailto:openstack-bounces+john=openstack@lists.launchpad.net] On
 Behalf
 Of Thierry Carrez
 Sent: Friday, January 28, 2011 1:52 PM
 To: openstack@lists.launchpad.net
 Subject: Re: [Openstack] Network Service for L2/L3 Network
 Infrastructure
 blueprint

 John Purrier wrote:
 Here is the suggestion. It is clear from the response on the list
 that
 refactoring Nova in the Cactus timeframe will be too risky,
 particularly as
 we are focusing Cactus on Stability, Reliability, and Deployability
 (along
 with a complete OpenStack API). For Cactus we should leave the
 network and
 volume services alone in Nova to minimize destabilizing the code
 base. In
 parallel, we can initiate the Network and Volume Service projects in
 Launchpad and allow the teams that form around these efforts to move
 in
 parallel, perhaps seeding their projects from the existing Nova code.

 Once we complete Cactus we can have discussions at the Diablo DS
 about
 progress these efforts have made and how best to move forward with
 Nova
 integration and determine release targets.

 I agree that there is value in starting the proof-of-concept work
 around
 the network services, without sacrificing too many developers to it,
 so
 that a good plan can be presented and discussed at the Diablo Summit.

 If volume sounds relatively simple to me, network sounds
 significantly
  more complex (just looking at the code, the network manager code is
 currently used both by nova-compute and nova-network to modify the
 local
 networking stack, so it's more than just handing out IP addresses
 through an API).

 Cheers,

 --
 Thierry Carrez (ttx)
 Release Manager, OpenStack





Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-02-02 Thread Ewan Mellor
 Try as we might there is still not a real consensus on high level coding 
 style, for example the Xen-related code is radically different in shape and 
 style from
 the libvirt code as is the rackspace api from the ec2 api, and having 
 projects split off only worsens the problem as individual developers have 
 fewer eyes on them.

For what it’s worth, I’m not entirely happy with the coding style in 
nova.virt.xenapi either, so we might not be as far from consensus as you think. 
 Some of the “Java-ish” code was allowed through code review for the sake of 
expedience, because it was a big improvement over what was there, even if it 
wasn’t perfect.  I’d like to rework this whenever there’s a sensible time to do 
so.

Also, I’d love for us to be using the same code paths as much as possible, and 
whatever help you need getting off KVM and onto a proper hypervisor, I’m more 
than happy to help ;-)

Ewan.

From: Andy Smith [mailto:andys...@gmail.com]
Sent: 28 January 2011 15:40
To: Rick Clark
Cc: Jay Pipes; Ewan Mellor; Søren Hansen; openstack@lists.launchpad.net
Subject: Re: [Openstack] Network Service for L2/L3 Network Infrastructure 
blueprint

I'd second some of what Jay says, and toss in that I don't think the code is 
ready to have services split off:

- There have already been significant problems dealing with Glance. The NASA 
people and the Rackspace people have effectively completely different code 
paths (NASA: ec2, objectstore, libvirt; Rackspace: rackspace, glance, xenapi), 
and these need to be aligned a bit more before we create more separations, 
if we want everybody to be working towards the same goals.
- Try as we might, there is still no real consensus on high-level coding 
style. For example, the Xen-related code is radically different in shape and 
style from the libvirt code, as is the Rackspace API from the EC2 API, and 
having projects split off only worsens the problem, as individual developers 
have fewer eyes on them.

My goal, and as far as I can tell the goal of most of my team, is to rectify 
a lot of that situation over the course of the next release by:

- setting up and working through the rackspace side of the code paths (as 
mentioned above) enough that we can start generalizing its utility for the 
entire project
- actual deprecation of the majority of objectstore
- more thorough code reviews to ensure that code is meeting the overall style 
of the project, and probably a document describing the code review process

If splitting things off still makes sense after Cactus, it can be pursued 
then, but at the moment it is much too early to consider.

On Fri, Jan 28, 2011 at 7:06 AM, Rick Clark r...@openstack.org wrote:
On 01/28/2011 08:55 AM, Jay Pipes wrote:
 On Fri, Jan 28, 2011 at 8:47 AM, Rick Clark r...@openstack.org wrote:
 I recognise the desire to do this for Cactus, but I feel that pulling
 out the network controller (and/or volume controller) into their own
 separate OpenStack subprojects is not a good idea for Cactus.  Looking
 at the (dozens of) blueprints slated for Cactus, doing this kind of
 major rework will mean that most (if not all) of those blueprints will
 have to be delayed while this pulling out of code occurs. This will
 definitely jeopardise the Cactus release.

 My vote is to delay this at a minimum to the Diablo release.

 And, for the record, I haven't seen any blueprints for the network as
 a service or volume as a service projects. Can someone point us to
 them?

 Thanks!
 jay
Whew, Jay, I thought you were advocating major changes in Cactus.  That
would completely mess up my view of the world :)

https://blueprints.launchpad.net/nova/+spec/bexar-network-service
https://blueprints.launchpad.net/nova/+spec/bexar-extend-network-model


It was discussed at ODS, but I have not seen any code or momentum to date.

I think it is worthwhile to have an open discussion about what, if any, of
this can be safely done in Cactus.  I, like you, Jay, feel a bit
conservative.  I think we lost sight of the reason we chose time-based
releases. It is time to focus on Nova being a solid, trustworthy
platform.  Features land when they are of sufficient quality; releases
contain only the features that pass muster.

I will be sending an email about the focus and theme of Cactus in a
little while.

Rick






Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-01-31 Thread Masanori ITOH
Hello,

We, NTT DATA, also agree with the majority of folks.
It's realistic to shoot for the Diablo time frame to have
the new network service.

Here are my suggestions:

 - I know that there were several documents on the new network service issue
   that have been exchanged locally so far.
   Why not collect them in one place and share them publicly?

 - I know that the discussion went a bit into implementation details.
   But now, what about starting the discussion again from the
   higher-level design, especially from the requirements level?

Any thoughts?

Masanori


From: John Purrier j...@openstack.org
Subject: Re: [Openstack] Network Service for L2/L3 Network Infrastructure 
blueprint
Date: Sat, 29 Jan 2011 06:06:26 +0900

 You are correct, the networking service will be more complex than the volume
 service. The existing blueprint is pretty comprehensive, not only
 encompassing the functionality that exists in today's network service in
 Nova, but also forward looking functionality around flexible
 networking/openvswitch and layer 2 network bridging between cloud
 deployments.
 
 This will be a longer term project and will serve as the bedrock for many
 future OpenStack capabilities.
 
 John
 
 -Original Message-
 From: openstack-bounces+john=openstack@lists.launchpad.net
 [mailto:openstack-bounces+john=openstack@lists.launchpad.net] On Behalf
 Of Thierry Carrez
 Sent: Friday, January 28, 2011 1:52 PM
 To: openstack@lists.launchpad.net
 Subject: Re: [Openstack] Network Service for L2/L3 Network Infrastructure
 blueprint
 
 John Purrier wrote:
  Here is the suggestion. It is clear from the response on the list that
 refactoring Nova in the Cactus timeframe will be too risky, particularly as
 we are focusing Cactus on Stability, Reliability, and Deployability (along
 with a complete OpenStack API). For Cactus we should leave the network and
 volume services alone in Nova to minimize destabilizing the code base. In
 parallel, we can initiate the Network and Volume Service projects in
 Launchpad and allow the teams that form around these efforts to move in
 parallel, perhaps seeding their projects from the existing Nova code.
  
  Once we complete Cactus we can have discussions at the Diablo DS about
 progress these efforts have made and how best to move forward with Nova
 integration and determine release targets.
 
 I agree that there is value in starting the proof-of-concept work around
 the network services, without sacrificing too many developers to it, so
 that a good plan can be presented and discussed at the Diablo Summit.
 
 If volume sounds relatively simple to me, network sounds significantly
 more complex (just looking at the code, the network manager code is
 currently used both by nova-compute and nova-network to modify the local
 networking stack, so it's more than just handing out IP addresses
 through an API).
 
 Cheers,
 
 -- 
 Thierry Carrez (ttx)
 Release Manager, OpenStack
 
 
 



Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-01-31 Thread Ewan Mellor
I will collect the documents together as you suggest, and I agree that we need 
to get the requirements laid out again.

Please subscribe to the blueprint on Launchpad -- that way you will be notified 
of updates.

https://blueprints.launchpad.net/nova/+spec/bexar-network-service

Thanks,

Ewan.

 -Original Message-
 From: openstack-bounces+ewan.mellor=citrix@lists.launchpad.net
 [mailto:openstack-bounces+ewan.mellor=citrix@lists.launchpad.net]
 On Behalf Of Masanori ITOH
 Sent: 31 January 2011 10:31
 To: openstack@lists.launchpad.net
 Subject: Re: [Openstack] Network Service for L2/L3 Network
 Infrastructure blueprint
 
 Hello,
 
 We, NTT DATA, also agree with majority of folks.
  It's realistic shooting for the Diablo time frame to have
 the new network service.
 
 Here are my suggestions:
 
  - I know that there were several documents on the new network service
 issue
that were locally exchanged so far.
Why not collecting them into one place and share them publicly?
 
  - I know that the discussion went into a bit implementation details.
But now, what about starting the discussion from the higher level
design things (again)?  Especially, from the requirements level.
 
 Any thoughts?
 
 Masanori
 
 
 From: John Purrier j...@openstack.org
 Subject: Re: [Openstack] Network Service for L2/L3 Network
 Infrastructure blueprint
 Date: Sat, 29 Jan 2011 06:06:26 +0900
 
  You are correct, the networking service will be more complex than the
 volume
  service. The existing blueprint is pretty comprehensive, not only
  encompassing the functionality that exists in today's network service
 in
  Nova, but also forward looking functionality around flexible
  networking/openvswitch and layer 2 network bridging between cloud
  deployments.
 
  This will be a longer term project and will serve as the bedrock for
 many
  future OpenStack capabilities.
 
  John
 
  -Original Message-
  From: openstack-bounces+john=openstack@lists.launchpad.net
  [mailto:openstack-bounces+john=openstack@lists.launchpad.net] On
 Behalf
  Of Thierry Carrez
  Sent: Friday, January 28, 2011 1:52 PM
  To: openstack@lists.launchpad.net
  Subject: Re: [Openstack] Network Service for L2/L3 Network
 Infrastructure
  blueprint
 
  John Purrier wrote:
   Here is the suggestion. It is clear from the response on the list
 that
  refactoring Nova in the Cactus timeframe will be too risky,
 particularly as
  we are focusing Cactus on Stability, Reliability, and Deployability
 (along
  with a complete OpenStack API). For Cactus we should leave the
 network and
  volume services alone in Nova to minimize destabilizing the code
 base. In
  parallel, we can initiate the Network and Volume Service projects in
  Launchpad and allow the teams that form around these efforts to move
 in
  parallel, perhaps seeding their projects from the existing Nova code.
  
   Once we complete Cactus we can have discussions at the Diablo DS
 about
  progress these efforts have made and how best to move forward with
 Nova
  integration and determine release targets.
 
  I agree that there is value in starting the proof-of-concept work
 around
  the network services, without sacrificing too many developers to it,
 so
  that a good plan can be presented and discussed at the Diablo Summit.
 
  If volume sounds relatively simple to me, network sounds
 significantly
  more complex (just looking at the code, the network manager code is
  currently used both by nova-compute and nova-network to modify the
 local
  networking stack, so it's more than just handing out IP addresses
  through an API).
 
  Cheers,
 
  --
  Thierry Carrez (ttx)
  Release Manager, OpenStack
 
 
 
 



Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-01-31 Thread John Purrier
In order to bring this discussion to a close and get everyone on the same page 
for Cactus development, here is where we have landed:

 

1.   We will *not* be separating the network and volume controllers and API 
servers from the Nova project.

 

2.   Ongoing work to extend Nova's capabilities in these areas will be 
done within the existing project, and will be based on extending the existing 
implementation. The folks working on these projects will determine the best 
approach for code re-use, extending functionality, and potential integration of 
additional community contributions in each area.

 

3.   Like all efforts for Cactus, the correct trade-offs must be made to 
maintain deployability, stability, and reliability (the key themes of the 
release).

 

4.   The core design concepts must be maintained: each service can scale 
horizontally and independently, presents public/management/event interfaces 
through a documented OpenStack API, and can be deployed independently of the 
others. If issues arise that do not allow the current code structure to 
support these concepts, the teams should raise them and open discussions on 
how best to address them.

 

We will target the Diablo design summit to discuss and review the progress made 
on these services and to determine whether we have taken the best approach to 
the project.

 

Thoughts?

 

John

 

From: Andy Smith [mailto:andys...@gmail.com] 
Sent: Friday, January 28, 2011 4:06 PM
To: John Purrier
Cc: Rick Clark; Jay Pipes; Ewan Mellor; Søren Hansen; 
openstack@lists.launchpad.net
Subject: Re: [Openstack] Network Service for L2/L3 Network Infrastructure 
blueprint

 

 

On Fri, Jan 28, 2011 at 1:19 PM, John Purrier j...@openstack.org wrote:

Thanks for the response, Andy. I think we actually agree on this. :)

 

You said:

 

This statement is invalid, nova is already broken into services, each of which 
can be dealt with individually and scaled as such, whether the code is part of 
the same repository has little bearing on that. The goals of scaling are 
orthogonal to the location of the code and are much more related to separation 
of concerns in the code, making sure that volume code does not rely on compute 
code for example (which at this point it doesn't particularly).

 

The fact that the volume code and the compute code are not coupled makes the 
separation easy. One factor that I did not mention is that each service will 
present public, management, and optional extension APIs, allowing each service 
to be deployed independently.

 

So far, all of that is possible under the existing auspices of Nova. DirectAPI 
will happily sit in front of any of the services independently; any of the 
services can be configured with a different instance of RabbitMQ to point at; 
and DirectAPI supports a large amount of extensibility, with pluggable 
managers/drivers supporting a bunch more.

 

Decoupling the code has always been a goal, as has providing public, 
management, and extension APIs, and we aren't doing so badly.

 

I don't think we disagree about wanting to run things independently, but for 
the moment I have seen no convincing arguments for separating the codebase.

 

 

 

You said:

 

That suggestion is contradictory: first you say not to separate, then you 
suggest creating separate projects. I am against creating separate projects; 
the development is part of Nova until at least Cactus.

 

This is exactly my suggestion below. Keep Nova monolithic until Cactus, then 
integrate the new services once Cactus has shipped. There is work to be done to 
create the service frameworks, API engines, and extension mechanisms, and to 
port the existing functionality. All of this can be done in parallel with the 
stability work being done in the Nova code base. As far as I know, there are no 
major updates coming in either the volume or network management code for this 
milestone.

 

Where is this parallel work being done if not in a separate project?

 

--andy

 

 

 

John

 

From: Andy Smith [mailto:andys...@gmail.com] 
Sent: Friday, January 28, 2011 12:45 PM
To: John Purrier
Cc: Rick Clark; Jay Pipes; Ewan Mellor; Søren Hansen; 
openstack@lists.launchpad.net


Subject: Re: [Openstack] Network Service for L2/L3 Network Infrastructure 
blueprint

 

 

On Fri, Jan 28, 2011 at 10:18 AM, John Purrier j...@openstack.org wrote:

Some clarification and a suggestion regarding Nova and the two new proposed 
services (Network/Volume).

To be clear, Nova today contains both volume and network services. We can 
specify, attach, and manage block devices, and also specify network-related 
items such as IP assignment and VLAN creation. I have heard there is some 
confusion about this, since we started talking about creating OpenStack 
services around these areas that will be separate from the cloud controller 
(Nova).

The driving factors to consider creating independent services for VM, Images, 
Network, and Volumes are 1) To allow deployment

Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-01-31 Thread Devin Carlen
This has my support.  Given our time frame and the goal of robustness and 
stability for the upcoming release, this is the most reasonable course of 
action.



Devin



On Jan 31, 2011, at 10:40 AM, John Purrier wrote:

 In order to bring this discussion to a close and get everyone on the same 
 page for Cactus development, here is where we have landed:
  
 1.   We will *not* be separating the network and volume controllers and 
 API servers from the Nova project.
  
 2.   On-going work to extend the Nova capabilities in these areas will be 
 done within the existing project and be based on extending the existing 
 implementation. The folks working on these projects will determine the best 
 approach for code re-use, extending functionality, and potential integration 
 of additional community contributions in each area.
  
 3.   Like all efforts for Cactus, correct trade-offs must be made to 
 maintain deployability, stability, and reliability (key themes of the 
 release).
  
 4.   Core design concepts allowing each service to horizontally scale 
 independently, present public/management/event interfaces through a 
 documented OpenStack API, and allow services to be deployed independently of 
 each other must be maintained. If issues arise that do not allow the current 
 code structure to support these concepts the teams should raise the issues 
 and open discussions on how to best address.
  
 We will target the Diablo design summit to discuss and review the progress 
 made on these services and determine if the best approach to the project has 
 been made.
  
 Thoughts?
  
 John
  
 From: Andy Smith [mailto:andys...@gmail.com] 
 Sent: Friday, January 28, 2011 4:06 PM
 To: John Purrier
 Cc: Rick Clark; Jay Pipes; Ewan Mellor; Søren Hansen; 
 openstack@lists.launchpad.net
 Subject: Re: [Openstack] Network Service for L2/L3 Network Infrastructure 
 blueprint
  
  
 
 On Fri, Jan 28, 2011 at 1:19 PM, John Purrier j...@openstack.org wrote:
 Thanks for the response, Andy. I think we actually agree on this. :)
  
 You said:
  
 This statement is invalid, nova is already broken into services, each of 
 which can be dealt with individually and scaled as such, whether the code is 
 part of the same repository has little bearing on that. The goals of scaling 
 are orthogonal to the location of the code and are much more related to 
 separation of concerns in the code, making sure that volume code does not 
 rely on compute code for example (which at this point it doesn't 
 particularly).
  
 The fact that the volume code and the compute code are not coupled makes the 
 separation easy. One factor that I did not mention is that each service will 
 present public, management, and optional extension APIs, allowing each 
 service to be deployed independently.
  
 So far that is all possible under the existing auspices of Nova. DirectAPI 
 will happily sit in front of any of the services independently, any of the 
 services when run can be configured with different instances of RabbitMQ to 
 point at, DirectAPI supports a large amount of extensibility and pluggable 
 managers/drivers support a bunch more.
  
 Decoupling of the code has always been a goal, as have been providing public, 
 management, and extension APIs and we aren't doing so bad.
  
 I don't think we disagree about wanting to run things independently, but for 
 the moment I have seen no convincing arguments for separating the codebase.
  
  
  
 You said:
  
 That suggestion is contradictory, first you say not to separate then you 
 suggest creating separate projects. I am against creating separate projects, 
 the development is part of Nova until at least Cactus.
  
 This is exactly my suggestion below. Keep Nova monolithic until Cactus, then 
 integrate the new services once Cactus is shipped. There is work to be done 
 to create the service frameworks, API engines, extension mechanisms, and 
 porting the existing functionality. All of this can be done in parallel to 
 the stability work being done in the Nova code base. As far as I know there 
 are not major updates coming in either the volume or network management code 
 for this milestone.
  
 Where is this parallel work being done if not in a separate project?
  
 --andy
  
  
  
 John
  
 From: Andy Smith [mailto:andys...@gmail.com] 
 Sent: Friday, January 28, 2011 12:45 PM
 To: John Purrier
 Cc: Rick Clark; Jay Pipes; Ewan Mellor; Søren Hansen; 
 openstack@lists.launchpad.net
 
 Subject: Re: [Openstack] Network Service for L2/L3 Network Infrastructure 
 blueprint
  
  
 
 On Fri, Jan 28, 2011 at 10:18 AM, John Purrier j...@openstack.org wrote:
 Some clarification and a suggestion regarding Nova and the two new proposed 
 services (Network/Volume).
 
 To be clear, Nova today contains both volume and network services. We can 
 specify, attach, and manage block devices and also specify network related 
 items, such as IP assignment and VLAN creation. I have heard there is some

Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-01-31 Thread Jay Pipes
On Mon, Jan 31, 2011 at 1:42 PM, Devin Carlen devin.car...@gmail.com wrote:
 This has my support.  For our time frame and the goal of robustness and 
 stability for the upcoming release, this is the most reasonable course of 
 action.

Seconded.

-jay



Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-01-31 Thread Vishvananda Ishaya
+1

On Jan 31, 2011, at 10:40 AM, John Purrier wrote:

 In order to bring this discussion to a close and get everyone on the same 
 page for Cactus development, here is where we have landed:
  
 1.   We will *not* be separating the network and volume controllers and 
 API servers from the Nova project.
  
 2.   On-going work to extend the Nova capabilities in these areas will be 
 done within the existing project and be based on extending the existing 
 implementation. The folks working on these projects will determine the best 
 approach for code re-use, extending functionality, and potential integration 
 of additional community contributions in each area.
  
 3.   Like all efforts for Cactus, correct trade-offs must be made to 
 maintain deployability, stability, and reliability (key themes of the 
 release).
  
 4.   Core design concepts allowing each service to horizontally scale 
 independently, present public/management/event interfaces through a 
 documented OpenStack API, and allow services to be deployed independently of 
 each other must be maintained. If issues arise that do not allow the current 
 code structure to support these concepts the teams should raise the issues 
 and open discussions on how to best address.
  
 We will target the Diablo design summit to discuss and review the progress 
 made on these services and determine if the best approach to the project has 
 been made.
  
 Thoughts?
  
 John
  
 From: Andy Smith [mailto:andys...@gmail.com] 
 Sent: Friday, January 28, 2011 4:06 PM
 To: John Purrier
 Cc: Rick Clark; Jay Pipes; Ewan Mellor; Søren Hansen; 
 openstack@lists.launchpad.net
 Subject: Re: [Openstack] Network Service for L2/L3 Network Infrastructure 
 blueprint
  
  
 
 On Fri, Jan 28, 2011 at 1:19 PM, John Purrier j...@openstack.org wrote:
 Thanks for the response, Andy. I think we actually agree on this. :)
  
 You said:
  
 This statement is invalid, nova is already broken into services, each of 
 which can be dealt with individually and scaled as such, whether the code is 
 part of the same repository has little bearing on that. The goals of scaling 
 are orthogonal to the location of the code and are much more related to 
 separation of concerns in the code, making sure that volume code does not 
 rely on compute code for example (which at this point it doesn't 
 particularly).
  
 The fact that the volume code and the compute code are not coupled makes the 
 separation easy. One factor that I did not mention is that each service will 
 present public, management, and optional extension APIs, allowing each 
 service to be deployed independently.
  
 So far that is all possible under the existing auspices of Nova. DirectAPI 
 will happily sit in front of any of the services independently; each service, 
 when run, can be configured to point at a different RabbitMQ instance; and 
 DirectAPI supports a large amount of extensibility, with pluggable 
 managers/drivers supporting a bunch more.
  
 Decoupling of the code has always been a goal, as has providing public, 
 management, and extension APIs, and we aren't doing so badly.
  
 I don't think we disagree about wanting to run things independently, but for 
 the moment I have seen no convincing arguments for separating the codebase.
  
  
  
 You said:
  
 That suggestion is contradictory: first you say not to separate, then you 
 suggest creating separate projects. I am against creating separate projects; 
 the development is part of Nova until at least Cactus.
  
 This is exactly my suggestion below. Keep Nova monolithic until Cactus, then 
 integrate the new services once Cactus is shipped. There is work to be done 
 to create the service frameworks, API engines, and extension mechanisms, and 
 to port the existing functionality. All of this can be done in parallel with 
 the stability work being done in the Nova code base. As far as I know there 
 are not major updates coming in either the volume or network management code 
 for this milestone.
  
 Where is this parallel work being done if not in a separate project?
  
 --andy
  
  
  
 John
  
 From: Andy Smith [mailto:andys...@gmail.com] 
 Sent: Friday, January 28, 2011 12:45 PM
 To: John Purrier
 Cc: Rick Clark; Jay Pipes; Ewan Mellor; Søren Hansen; 
 openstack@lists.launchpad.net
 
 Subject: Re: [Openstack] Network Service for L2/L3 Network Infrastructure 
 blueprint
  
  
 
 On Fri, Jan 28, 2011 at 10:18 AM, John Purrier j...@openstack.org wrote:
 Some clarification and a suggestion regarding Nova and the two new proposed 
 services (Network/Volume).
 
 To be clear, Nova today contains both volume and network services. We can 
 specify, attach, and manage block devices and also specify network-related 
 items, such as IP assignment and VLAN creation. I have heard there is some 
 confusion on this, since we started talking about creating OpenStack services 
 around these areas that will be separate from the cloud controller (Nova).
 
 The driving

Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-01-31 Thread Dan Wendlandt
On Mon, Jan 31, 2011 at 10:40 AM, John Purrier j...@openstack.org wrote:

 In order to bring this discussion to a close and get everyone on the same
 page for Cactus development, here is where we have landed:



 1.   We will **not** be separating the network and volume controllers
 and API servers from the Nova project.


I think this is definitely the right move.




 2.   On-going work to extend the Nova capabilities in these areas will
 be done within the existing project and be based on extending the existing
 implementation. The folks working on these projects will determine the best
 approach for code re-use, extending functionality, and potential integration
 of additional community contributions in each area.



 3.   Like all efforts for Cactus, correct trade-offs must be made to
 maintain deployability, stability, and reliability (key themes of the
 release).



 4.   Core design concepts allowing each service to horizontally scale
 independently, present public/management/event interfaces through a
 documented OpenStack API, and allow services to be deployed independently of
 each other must be maintained. If issues arise that do not allow the current
 code structure to support these concepts, the teams should raise the issues
 and open discussions on how best to address them.



 We will target the Diablo design summit to discuss and review the progress
 made on these services and determine whether we have taken the best approach
 for the project.



 Thoughts?



 John



 *From:* Andy Smith [mailto:andys...@gmail.com]
 *Sent:* Friday, January 28, 2011 4:06 PM

 *To:* John Purrier
 *Cc:* Rick Clark; Jay Pipes; Ewan Mellor; Søren Hansen;
 openstack@lists.launchpad.net
 *Subject:* Re: [Openstack] Network Service for L2/L3 Network
 Infrastructure blueprint





 On Fri, Jan 28, 2011 at 1:19 PM, John Purrier j...@openstack.org wrote:

 Thanks for the response, Andy. I think we actually agree on this. :)



 You said:



 *This statement is invalid, nova is already broken into services, each of
 which can be dealt with individually and scaled as such, whether the code is
 part of the same repository has little bearing on that. The goals of scaling
 are orthogonal to the location of the code and are much more related to
 separation of concerns in the code, making sure that volume code does
 not rely on compute code for example (which at this point it doesn't
 particularly).*



 The fact that the volume code and the compute code are not coupled makes the
 separation easy. One factor that I did not mention is that each service will
 present public, management, and optional extension APIs, allowing each
 service to be deployed independently.



 So far that is all possible under the existing auspices of Nova. DirectAPI
 will happily sit in front of any of the services independently; each service,
 when run, can be configured to point at a different RabbitMQ instance; and
 DirectAPI supports a large amount of extensibility, with pluggable
 managers/drivers supporting a bunch more.



 Decoupling of the code has always been a goal, as has providing
 public, management, and extension APIs, and we aren't doing so badly.



 I don't think we disagree about wanting to run things independently, but
 for the moment I have seen no convincing arguments for separating the
 codebase.







 You said:



 *That suggestion is contradictory: first you say not to separate, then you
 suggest creating separate projects. I am against creating separate projects;
 the development is part of Nova until at least Cactus.*



 This is exactly my suggestion below. Keep Nova monolithic until Cactus,
 then integrate the new services once Cactus is shipped. There is work to be
 done to create the service frameworks, API engines, and extension mechanisms,
 and to port the existing functionality. All of this can be done in parallel
 with the stability work being done in the Nova code base. As far as I know
 there are not major updates coming in either the volume or network
 management code for this milestone.



 Where is this parallel work being done if not in a separate project?



 --andy







 John



 *From:* Andy Smith [mailto:andys...@gmail.com]
 *Sent:* Friday, January 28, 2011 12:45 PM
 *To:* John Purrier
 *Cc:* Rick Clark; Jay Pipes; Ewan Mellor; Søren Hansen;
 openstack@lists.launchpad.net


 *Subject:* Re: [Openstack] Network Service for L2/L3 Network
 Infrastructure blueprint





 On Fri, Jan 28, 2011 at 10:18 AM, John Purrier j...@openstack.org wrote:

 Some clarification and a suggestion regarding Nova and the two new proposed
 services (Network/Volume).

 To be clear, Nova today contains both volume and network services. We can
 specify, attach, and manage block devices and also specify network related
 items, such as IP assignment and VLAN creation. I have heard there is some
 confusion on this, since we started talking about creating OpenStack
 services around these areas that will be separate from

Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-01-30 Thread 石井 久治 (Hisaharu Ishii)

Hi all,

(2011/01/29 4:52), Thierry Carrez wrote:

I agree that there is value in starting the proof-of-concept work around
the network services, without sacrificing too many developers to it, so
that a good plan can be presented and discussed at the Diablo Summit.


We (NTT PF Lab.) also agree with this plan to develop the network services,
and we would like to contribute to this work.
I am writing part of the proof-of-concept code now,
and I would like to share it within a week.

--
Hisaharu Ishii
ishii.hisah...@lab.ntt.co.jp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-01-28 Thread Jay Pipes
Thanks for the update, Ewan, and for the gentle encouragement for
open, transparent, and public discussions of design. Let's move the
discussions of the Network Service project forward! All involved:
please don't hesitate to contact me or this mailing list if you have
any questions at all about using Launchpad, working with blueprints,
or anything else process-related.

Cheers,

jay

On Thu, Jan 27, 2011 at 6:45 PM, Ewan Mellor ewan.mel...@eu.citrix.com wrote:
 Thanks to everyone who has expressed an interest in the “Network Service for
 L2/L3 Network Infrastructure” blueprint (aka bexar-network-service, though
 it’s obviously not going to land for Bexar).  In particular, Ram Durairaj,
 Romain Lenglet, Koji Iida and Dan Wendlandt have all recently contacted me
 regarding this blueprint, and I expect names from Rackspace too.  I assure
 you that I want all of you to be closely involved and to get your
 requirements included.

 I am going to take the text that’s currently in the Etherpad and mould it
 into a more concrete specification.  I would appreciate any input that
 anyone would like to offer.  My intention is to have a blueprint that we can
 get accepted for Cactus, and maybe a set of features that we want to
 consider for releases after that.  We’ll discuss those future features at
 the next design summit.

 Romain, you said “I am currently very active developing this blueprint. I
 have proposed a concrete design on December 3rd, 2010, and I'm implementing
 it.” Please share this design, because it belongs on this blueprint.  We can
 all review it there.  Also, if you have code already, please refer us to a
 branch so that people can take a look at what you’ve done.  And thanks for
 your work so far!



Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-01-28 Thread Rick Clark
Soren will be running the network service infrastructure from the
Rackspace/Openstack side.

I want to temper this discussion by reminding everyone that Cactus will
be a testing/stabilization release.  Feature freeze will come much
more quickly, and we want any major changes to land very early.

I think it is possible to come up with a plan that has the first phase
of this blueprint hitting in Cactus, but we don't want to do anything
that will jeopardize the stability of the network subsystem for Cactus.


Rick

On 01/28/2011 08:09 AM, Jay Pipes wrote:
 Thanks for the update, Ewan, and for the gentle encouragement for
 open, transparent, and public discussions of design. Let's move the
 discussions of the Network Service project forward! All involved:
 please don't hesitate to contact me or this mailing list if you have
 any questions at all about using Launchpad, working with blueprints,
 or anything else process-related.
 
 Cheers,
 
 jay
 
 On Thu, Jan 27, 2011 at 6:45 PM, Ewan Mellor ewan.mel...@eu.citrix.com 
 wrote:
 Thanks to everyone who has expressed an interest in the “Network Service for
 L2/L3 Network Infrastructure” blueprint (aka bexar-network-service, though
 it’s obviously not going to land for Bexar).  In particular, Ram Durairaj,
 Romain Lenglet, Koji Iida and Dan Wendlandt have all recently contacted me
 regarding this blueprint, and I expect names from Rackspace too.  I assure
 you that I want all of you to be closely involved and to get your
 requirements included.

 I am going to take the text that’s currently in the Etherpad and mould it
 into a more concrete specification.  I would appreciate any input that
 anyone would like to offer.  My intention is to have a blueprint that we can
 get accepted for Cactus, and maybe a set of features that we want to
 consider for releases after that.  We’ll discuss those future features at
 the next design summit.

 Romain, you said “I am currently very active developing this blueprint. I
 have proposed a concrete design on December 3rd, 2010, and I'm implementing
 it.” Please share this design, because it belongs on this blueprint.  We can
 all review it there.  Also, if you have code already, please refer us to a
 branch so that people can take a look at what you’ve done.  And thanks for
 your work so far!
 






Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-01-28 Thread Rick Clark
On 01/28/2011 08:55 AM, Jay Pipes wrote:
 On Fri, Jan 28, 2011 at 8:47 AM, Rick Clark r...@openstack.org wrote:
 I recognise the desire to do this for Cactus, but I feel that pulling
 out the network controller (and/or volume controller) into their own
 separate OpenStack subprojects is not a good idea for Cactus.  Looking
 at the (dozens of) blueprints slated for Cactus, doing this kind of
 major rework will mean that most (if not all) of those blueprints will
 have to be delayed while this pulling out of code occurs. This will
 definitely jeopardise the Cactus release.
 
 My vote is to delay this at a minimum to the Diablo release.
 
 And, for the record, I haven't seen any blueprints for the network as
 a service or volume as a service projects. Can someone point us to
 them?
 
 Thanks!
 jay

Whew, Jay I thought you were advocating major changes in Cactus.  That
would completely mess up my view of the world :)

https://blueprints.launchpad.net/nova/+spec/bexar-network-service
https://blueprints.launchpad.net/nova/+spec/bexar-extend-network-model


It was discussed at ODS, but I have not seen any code or momentum, to date.

I think it is worthwhile to have an open discussion about what, if any,
of this can be safely done in Cactus.  I, like you, Jay, feel a bit
conservative.  I think we lost sight of the reason we chose time-based
releases. It is time to focus on Nova being a solid, trustworthy
platform.  Features land when they are of sufficient quality, and releases
contain only the features that passed muster.

I will be sending an email about the focus and theme of Cactus in a
little while.

Rick






Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-01-28 Thread Andy Smith
I'd second a bit of what Jay says and toss in that I don't think the code is
ready for splitting services off:

- There have already been significant problems dealing with Glance: the NASA
people and the Rackspace people have effectively completely different code
paths (NASA: EC2, objectstore, libvirt; Rackspace: the Rackspace API, Glance,
XenAPI), and those need to be aligned a bit more before we can create more
separations if we want everybody to be working towards the same goals.
- Try as we might, there is still no real consensus on high-level coding
style; for example, the Xen-related code is radically different in shape and
style from the libvirt code, as is the Rackspace API from the EC2 API, and
having projects split off only worsens the problem, as individual developers
have fewer eyes on them.

My goal and as far as I can tell most of my team's goals are to rectify a
lot of that situation over the course of the next release by:

- setting up and working through the rackspace side of the code paths (as
mentioned above) enough that we can start generalizing its utility for the
entire project
- actual deprecation of the majority of objectstore
- more thorough code reviews to ensure that code is meeting the overall
style of the project, and probably a document describing the code review
process

After Cactus, if the idea of splitting off still makes sense, it can be
pursued then; at the moment it is much too early to consider it.

On Fri, Jan 28, 2011 at 7:06 AM, Rick Clark r...@openstack.org wrote:

 On 01/28/2011 08:55 AM, Jay Pipes wrote:
  On Fri, Jan 28, 2011 at 8:47 AM, Rick Clark r...@openstack.org wrote:
  I recognise the desire to do this for Cactus, but I feel that pulling
  out the network controller (and/or volume controller) into their own
  separate OpenStack subprojects is not a good idea for Cactus.  Looking
  at the (dozens of) blueprints slated for Cactus, doing this kind of
  major rework will mean that most (if not all) of those blueprints will
  have to be delayed while this pulling out of code occurs. This will
  definitely jeopardise the Cactus release.
 
  My vote is to delay this at a minimum to the Diablo release.
 
  And, for the record, I haven't seen any blueprints for the network as
  a service or volume as a service projects. Can someone point us to
  them?
 
  Thanks!
  jay

 Whew, Jay I thought you were advocating major changes in Cactus.  That
 would completely mess up my view of the world :)

 https://blueprints.launchpad.net/nova/+spec/bexar-network-service
 https://blueprints.launchpad.net/nova/+spec/bexar-extend-network-model
 https://blueprints.launchpad.net/nova/+spec/bexar-network-service


 It was discussed at ODS, but I have not seen any code or momentum, to date.

 I think it is worth while to have an open discussion about what if any
 of this can be safely done in Cactus.  I like you, Jay, feel a bit
 conservative.  I think we lost focus of the reason we chose time based
 releases. It is time to focus on nova being a solid trustworthy
 platform.  Features land when they are of sufficient quality, releases
 contain only the features that passed muster.

 I will be sending an email about the focus and theme of Cactus in a
 little while.

 Rick





Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-01-28 Thread Vishvananda Ishaya
I agree.  I think splitting glance into a separate project has actually slowed 
it down.  We should keep network service in trunk for the moment.

Also, there were a couple of networking blueprints that were combined at the 
last design summit into one presentation.  The presentation was given by one 
Racker and one person from Nicira, and also included a group from Japan. I 
thought the plan was to implement this with Open vSwitch.  Is this the same 
team/project?  Or did that effort die?

Vish

On Jan 28, 2011, at 7:40 AM, Andy Smith wrote:

 I'd second a bit of what Jay says and toss in that I don't think the code is 
 ready to be splitting services off:
 
 - There have already been significant problems dealing with glance, the nasa 
 people and the rackspace people have effectively completely different code 
 paths (nasa: ec2, objectstore, libvirt; rackspace: rackspace, glance, xenapi) 
 and that needs to be aligned a bit more before we can create more separations 
 if we want everybody to be working towards the same goals.
 - Try as we might there is still not a real consensus on high level coding 
 style, for example the Xen-related code is radically different in shape and 
 style from the libvirt code as is the rackspace api from the ec2 api, and 
 having projects split off only worsens the problem as individual developers 
 have fewer eyes on them.
 
 My goal and as far as I can tell most of my team's goals are to rectify a lot 
 of that situation over the course of the next release by:
 
 - setting up and working through the rackspace side of the code paths (as 
 mentioned above) enough that we can start generalizing its utility for the 
 entire project
 - actual deprecation of the majority of objectstore
 - more thorough code reviews to ensure that code is meeting the overall style 
 of the project, and probably a document describing the code review process
 
 After Cactus if the idea makes sense to split off then it can be pursued 
 then, but at the moment it is much too early to consider it.
 
 On Fri, Jan 28, 2011 at 7:06 AM, Rick Clark r...@openstack.org wrote:
 On 01/28/2011 08:55 AM, Jay Pipes wrote:
  On Fri, Jan 28, 2011 at 8:47 AM, Rick Clark r...@openstack.org wrote:
  I recognise the desire to do this for Cactus, but I feel that pulling
  out the network controller (and/or volume controller) into their own
  separate OpenStack subprojects is not a good idea for Cactus.  Looking
  at the (dozens of) blueprints slated for Cactus, doing this kind of
  major rework will mean that most (if not all) of those blueprints will
  have to be delayed while this pulling out of code occurs. This will
  definitely jeopardise the Cactus release.
 
  My vote is to delay this at a minimum to the Diablo release.
 
  And, for the record, I haven't seen any blueprints for the network as
  a service or volume as a service projects. Can someone point us to
  them?
 
  Thanks!
  jay
 
 Whew, Jay I thought you were advocating major changes in Cactus.  That
 would completely mess up my view of the world :)
 
 https://blueprints.launchpad.net/nova/+spec/bexar-network-service
 https://blueprints.launchpad.net/nova/+spec/bexar-extend-network-model
 https://blueprints.launchpad.net/nova/+spec/bexar-network-service
 
 
 It was discussed at ODS, but I have not seen any code or momentum, to date.
 
 I think it is worth while to have an open discussion about what if any
 of this can be safely done in Cactus.  I like you, Jay, feel a bit
 conservative.  I think we lost focus of the reason we chose time based
 releases. It is time to focus on nova being a solid trustworthy
 platform.  Features land when they are of sufficient quality, releases
 contain only the features that passed muster.
 
 I will be sending an email about the focus and theme of Cactus in a
 little while.
 
 Rick
 
 
 


Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-01-28 Thread Jay Pipes
On Fri, Jan 28, 2011 at 11:37 AM, Vishvananda Ishaya
vishvana...@gmail.com wrote:
 I agree.  I think splitting glance into a separate project has actually
 slowed it down.

Massively disagree here.  The only slowdown integrating Glance/Nova
was around packaging issues, and those have now been resolved.  What
other slowdowns are you referring to?  Glance is going at light-speed
compared to other projects IMHO.

Glance blueprints and milestones are all online and mailing list
discussion has already occurred on many of them.  If there are further
integration issues between Nova and Glance, please do file bugs and
blueprints for them and we'll get to them quickly.  I can't fix stuff
I don't know about.

-jay

 We should keep network service in trunk for the moment.
 Also, there were a couple of networking blueprints that were combined at the
 last design summit into one presentation.  The presentation was given by one
 racker and one person from nicira, and also included a group from japan. I
 thought the plan was to implement this with openvswitch.  Is this the same
 team/project?  Or did that effort die?
 Vish
 On Jan 28, 2011, at 7:40 AM, Andy Smith wrote:

 I'd second a bit of what Jay says and toss in that I don't think the code is
 ready to be splitting services off:
 - There have already been significant problems dealing with glance, the nasa
 people and the rackspace people have effectively completely different code
 paths (nasa: ec2, objectstore, libvirt; rackspace: rackspace, glance,
 xenapi) and that needs to be aligned a bit more before we can create more
 separations if we want everybody to be working towards the same goals.
 - Try as we might there is still not a real consensus on high level coding
 style, for example the Xen-related code is radically different in shape and
 style from the libvirt code as is the rackspace api from the ec2 api, and
 having projects split off only worsens the problem as individual developers
 have fewer eyes on them.
 My goal and as far as I can tell most of my team's goals are to rectify a
 lot of that situation over the course of the next release by:
 - setting up and working through the rackspace side of the code paths (as
 mentioned above) enough that we can start generalizing its utility for the
 entire project
 - actual deprecation of the majority of objectstore
 - more thorough code reviews to ensure that code is meeting the overall
 style of the project, and probably a document describing the code review
 process
 After Cactus if the idea makes sense to split off then it can be pursued
 then, but at the moment it is much too early to consider it.
 On Fri, Jan 28, 2011 at 7:06 AM, Rick Clark r...@openstack.org wrote:

 On 01/28/2011 08:55 AM, Jay Pipes wrote:
  On Fri, Jan 28, 2011 at 8:47 AM, Rick Clark r...@openstack.org wrote:
  I recognise the desire to do this for Cactus, but I feel that pulling
  out the network controller (and/or volume controller) into their own
  separate OpenStack subprojects is not a good idea for Cactus.  Looking
  at the (dozens of) blueprints slated for Cactus, doing this kind of
  major rework will mean that most (if not all) of those blueprints will
  have to be delayed while this pulling out of code occurs. This will
  definitely jeopardise the Cactus release.
 
  My vote is to delay this at a minimum to the Diablo release.
 
  And, for the record, I haven't seen any blueprints for the network as
  a service or volume as a service projects. Can someone point us to
  them?
 
  Thanks!
  jay

 Whew, Jay I thought you were advocating major changes in Cactus.  That
 would completely mess up my view of the world :)

 https://blueprints.launchpad.net/nova/+spec/bexar-network-service
 https://blueprints.launchpad.net/nova/+spec/bexar-extend-network-model
 https://blueprints.launchpad.net/nova/+spec/bexar-network-service


 It was discussed at ODS, but I have not seen any code or momentum, to
 date.

 I think it is worth while to have an open discussion about what if any
 of this can be safely done in Cactus.  I like you, Jay, feel a bit
 conservative.  I think we lost focus of the reason we chose time based
 releases. It is time to focus on nova being a solid trustworthy
 platform.  Features land when they are of sufficient quality, releases
 contain only the features that passed muster.

 I will be sending an email about the focus and theme of Cactus in a
 little while.

 Rick






Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-01-28 Thread Dan Wendlandt
Hi Vish, all,

We paused our efforts around the network service because the plan was that
Rackspace was going to offer a dev lead for the work, and we didn't want to
be making design decisions without that dev lead taking part.  It sounds
like Soren is that guy and that Ewan will also be playing a leading role,
which is great.

Dan

On Fri, Jan 28, 2011 at 9:37 AM, Vishvananda Ishaya
vishvana...@gmail.comwrote:

 I agree.  I think splitting glance into a separate project has actually
 slowed it down.  We should keep network service in trunk for the moment.

 Also, there were a couple of networking blueprints that were combined at
 the last design summit into one presentation.  The presentation was given by
 one racker and one person from nicira, and also included a group from japan.
 I thought the plan was to implement this with openvswitch.  Is this the same
 team/project?  Or did that effort die?

 Vish

 On Jan 28, 2011, at 7:40 AM, Andy Smith wrote:

 I'd second a bit of what Jay says and toss in that I don't think the code
 is ready to be splitting services off:

 - There have already been significant problems dealing with glance, the
 nasa people and the rackspace people have effectively completely different
 code paths (nasa: ec2, objectstore, libvirt; rackspace: rackspace, glance,
 xenapi) and that needs to be aligned a bit more before we can create more
 separations if we want everybody to be working towards the same goals.
 - Try as we might there is still not a real consensus on high level coding
 style, for example the Xen-related code is radically different in shape and
 style from the libvirt code as is the rackspace api from the ec2 api, and
 having projects split off only worsens the problem as individual developers
 have fewer eyes on them.

 My goal and as far as I can tell most of my team's goals are to rectify a
 lot of that situation over the course of the next release by:

 - setting up and working through the rackspace side of the code paths (as
 mentioned above) enough that we can start generalizing its utility for the
 entire project
 - actual deprecation of the majority of objectstore
 - more thorough code reviews to ensure that code is meeting the overall
 style of the project, and probably a document describing the code review
 process

 After Cactus if the idea makes sense to split off then it can be pursued
 then, but at the moment it is much too early to consider it.

 On Fri, Jan 28, 2011 at 7:06 AM, Rick Clark r...@openstack.org wrote:

 On 01/28/2011 08:55 AM, Jay Pipes wrote:
  On Fri, Jan 28, 2011 at 8:47 AM, Rick Clark r...@openstack.org wrote:
  I recognise the desire to do this for Cactus, but I feel that pulling
  out the network controller (and/or volume controller) into their own
  separate OpenStack subprojects is not a good idea for Cactus.  Looking
  at the (dozens of) blueprints slated for Cactus, doing this kind of
  major rework will mean that most (if not all) of those blueprints will
  have to be delayed while this pulling out of code occurs. This will
  definitely jeopardise the Cactus release.
 
  My vote is to delay this at a minimum to the Diablo release.
 
  And, for the record, I haven't seen any blueprints for the network as
  a service or volume as a service projects. Can someone point us to
  them?
 
  Thanks!
  jay

 Whew, Jay I thought you were advocating major changes in Cactus.  That
 would completely mess up my view of the world :)

 https://blueprints.launchpad.net/nova/+spec/bexar-network-service
 https://blueprints.launchpad.net/nova/+spec/bexar-extend-network-model
 https://blueprints.launchpad.net/nova/+spec/bexar-network-service


 It was discussed at ODS, but I have not seen any code or momentum to
 date.

 I think it is worthwhile to have an open discussion about what, if any,
 of this can be safely done in Cactus.  I, like you, Jay, feel a bit
 conservative.  I think we lost focus of the reason we chose time-based
 releases. It is time to focus on Nova being a solid, trustworthy
 platform.  Features land when they are of sufficient quality; releases
 contain only the features that passed muster.

 I will be sending an email about the focus and theme of Cactus in a
 little while.

 Rick



 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-01-28 Thread Rick Clark
On 01/28/2011 11:45 AM, Jay Pipes wrote:
 On Fri, Jan 28, 2011 at 11:37 AM, Vishvananda Ishaya
 vishvana...@gmail.com wrote:
 I agree.  I think splitting glance into a separate project has actually
 slowed it down.
 
 Massively disagree here.  The only slowdown in integrating Glance/Nova
 was around packaging issues, and those have now been resolved.  What
 other slowdowns are you referring to?  Glance is going at light-speed
 compared to other projects, IMHO.

For historical accuracy:

Glance is in great shape now, but it did flounder for the first couple
of months of the Austin release cycle.  The problem was that separating it
took the work off the radar of most of the Nova devs.  That was
primarily a communication issue.  Once Jay became involved and fixed
that, things have progressed very well.

So regardless of whether and when we decide to split out other functionality,
we need to ensure that there is enough communication back to the core
project's development team.

 Glance blueprints and milestones are all online and mailing list
 discussion has already occurred on many of them.  If there are further
 integration issues between Nova and Glance, please do file bugs and
 blueprints for them and we'll get to them quickly.  I can't fix stuff
 I don't know about.
 
 -jay
 
 We should keep network service in trunk for the moment.
 Also, there were a couple of networking blueprints that were combined at the
 last design summit into one presentation.  The presentation was given by one
 Racker and one person from Nicira, and also included a group from Japan. I
 thought the plan was to implement this with Open vSwitch.  Is this the same
 team/project?  Or did that effort die?
 Vish
 On Jan 28, 2011, at 7:40 AM, Andy Smith wrote:

 I'd second a bit of what Jay says and toss in that I don't think the code is
 ready for splitting services off:
 - There have already been significant problems dealing with Glance; the NASA
 people and the Rackspace people have effectively completely different code
 paths (NASA: EC2, objectstore, libvirt; Rackspace: Rackspace API, Glance,
 XenAPI), and those need to be aligned a bit more before we create more
 separations, if we want everybody to be working towards the same goals.
 - Try as we might, there is still no real consensus on high-level coding
 style; for example, the Xen-related code is radically different in shape and
 style from the libvirt code, as is the Rackspace API from the EC2 API, and
 having projects split off only worsens the problem, as individual developers
 have fewer eyes on them.
 My goal, and as far as I can tell most of my team's, is to rectify a
 lot of that situation over the course of the next release by:
 - setting up and working through the Rackspace side of the code paths (as
 mentioned above) enough that we can start generalizing its utility for the
 entire project
 - actual deprecation of the majority of objectstore
 - more thorough code reviews to ensure that code is meeting the overall
 style of the project, and probably a document describing the code review
 process
 After Cactus, if the idea of splitting off still makes sense, it can be
 pursued then, but at the moment it is much too early to consider it.
 On Fri, Jan 28, 2011 at 7:06 AM, Rick Clark r...@openstack.org wrote:

 On 01/28/2011 08:55 AM, Jay Pipes wrote:
 On Fri, Jan 28, 2011 at 8:47 AM, Rick Clark r...@openstack.org wrote:
 I recognise the desire to do this for Cactus, but I feel that pulling
 out the network controller (and/or volume controller) into their own
 separate OpenStack subprojects is not a good idea for Cactus.  Looking
 at the (dozens of) blueprints slated for Cactus, doing this kind of
 major rework will mean that most (if not all) of those blueprints will
 have to be delayed while this pulling out of code occurs. This will
 definitely jeopardise the Cactus release.

 My vote is to delay this at a minimum to the Diablo release.

 And, for the record, I haven't seen any blueprints for the network as
 a service or volume as a service projects. Can someone point us to
 them?

 Thanks!
 jay

 Whew, Jay, I thought you were advocating major changes in Cactus.  That
 would completely mess up my view of the world :)

 https://blueprints.launchpad.net/nova/+spec/bexar-network-service
 https://blueprints.launchpad.net/nova/+spec/bexar-extend-network-model
 https://blueprints.launchpad.net/nova/+spec/bexar-network-service


 It was discussed at ODS, but I have not seen any code or momentum to
 date.

 I think it is worthwhile to have an open discussion about what, if any,
 of this can be safely done in Cactus.  I, like you, Jay, feel a bit
 conservative.  I think we lost focus of the reason we chose time-based
 releases. It is time to focus on Nova being a solid, trustworthy
 platform.  Features land when they are of sufficient quality; releases
 contain only the features that passed muster.

 I will be sending an email about the focus and theme of Cactus in a
 little while.

 Rick



 

Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-01-28 Thread Vishvananda Ishaya
And please don't get the idea that I'm complaining about the glance project 
itself, or how it is managed.  As far as I'm concerned, Jay and the other 
developers have done an excellent job with glance.  It is just very difficult 
to keep up with multiple projects, and I think they should be kept together as 
long as possible.

Vish

On Jan 28, 2011, at 9:45 AM, Jay Pipes wrote:

 On Fri, Jan 28, 2011 at 11:37 AM, Vishvananda Ishaya
 vishvana...@gmail.com wrote:
 I agree.  I think splitting glance into a separate project has actually
 slowed it down.
 
 Massively disagree here.  The only slowdown in integrating Glance/Nova
 was around packaging issues, and those have now been resolved.  What
 other slowdowns are you referring to?  Glance is going at light-speed
 compared to other projects, IMHO.
 
 Glance blueprints and milestones are all online and mailing list
 discussion has already occurred on many of them.  If there are further
 integration issues between Nova and Glance, please do file bugs and
 blueprints for them and we'll get to them quickly.  I can't fix stuff
 I don't know about.
 
 -jay
 
 We should keep network service in trunk for the moment.
 Also, there were a couple of networking blueprints that were combined at the
 last design summit into one presentation.  The presentation was given by one
 Racker and one person from Nicira, and also included a group from Japan. I
 thought the plan was to implement this with Open vSwitch.  Is this the same
 team/project?  Or did that effort die?
 Vish
 On Jan 28, 2011, at 7:40 AM, Andy Smith wrote:
 
 I'd second a bit of what Jay says and toss in that I don't think the code is
 ready for splitting services off:
 - There have already been significant problems dealing with Glance; the NASA
 people and the Rackspace people have effectively completely different code
 paths (NASA: EC2, objectstore, libvirt; Rackspace: Rackspace API, Glance,
 XenAPI), and those need to be aligned a bit more before we create more
 separations, if we want everybody to be working towards the same goals.
 - Try as we might, there is still no real consensus on high-level coding
 style; for example, the Xen-related code is radically different in shape and
 style from the libvirt code, as is the Rackspace API from the EC2 API, and
 having projects split off only worsens the problem, as individual developers
 have fewer eyes on them.
 My goal, and as far as I can tell most of my team's, is to rectify a
 lot of that situation over the course of the next release by:
 - setting up and working through the Rackspace side of the code paths (as
 mentioned above) enough that we can start generalizing its utility for the
 entire project
 - actual deprecation of the majority of objectstore
 - more thorough code reviews to ensure that code is meeting the overall
 style of the project, and probably a document describing the code review
 process
 After Cactus, if the idea of splitting off still makes sense, it can be
 pursued then, but at the moment it is much too early to consider it.
 On Fri, Jan 28, 2011 at 7:06 AM, Rick Clark r...@openstack.org wrote:
 
 On 01/28/2011 08:55 AM, Jay Pipes wrote:
 On Fri, Jan 28, 2011 at 8:47 AM, Rick Clark r...@openstack.org wrote:
 I recognise the desire to do this for Cactus, but I feel that pulling
 out the network controller (and/or volume controller) into their own
 separate OpenStack subprojects is not a good idea for Cactus.  Looking
 at the (dozens of) blueprints slated for Cactus, doing this kind of
 major rework will mean that most (if not all) of those blueprints will
 have to be delayed while this pulling out of code occurs. This will
 definitely jeopardise the Cactus release.
 
 My vote is to delay this at a minimum to the Diablo release.
 
 And, for the record, I haven't seen any blueprints for the network as
 a service or volume as a service projects. Can someone point us to
 them?
 
 Thanks!
 jay
 
 Whew, Jay, I thought you were advocating major changes in Cactus.  That
 would completely mess up my view of the world :)
 
 https://blueprints.launchpad.net/nova/+spec/bexar-network-service
 https://blueprints.launchpad.net/nova/+spec/bexar-extend-network-model
 https://blueprints.launchpad.net/nova/+spec/bexar-network-service
 
 
 It was discussed at ODS, but I have not seen any code or momentum to
 date.

 I think it is worthwhile to have an open discussion about what, if any,
 of this can be safely done in Cactus.  I, like you, Jay, feel a bit
 conservative.  I think we lost focus of the reason we chose time-based
 releases. It is time to focus on Nova being a solid, trustworthy
 platform.  Features land when they are of sufficient quality; releases
 contain only the features that passed muster.
 
 I will be sending an email about the focus and theme of Cactus in a
 little while.
 
 Rick
 
 
 

Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-01-28 Thread Jay Pipes
On Fri, Jan 28, 2011 at 11:59 AM, Vishvananda Ishaya
vishvana...@gmail.com wrote:
 Integration is the issue.  It only works with osapi/xen at this point, which 
 isn't even the default hypervisor setting in the packaging.  A large number 
 of people involved in Nova haven't even looked at it.  The changes to make it 
 support the ec2_api properly will need to be done in two separate projects 
 and require that the projects move forward in lock-step for versioning.  The 
 blueprints and design decisions are essentially being managed separately.  I 
 believe that most of this could have been avoided if we had kept Glance in 
 Nova initially and moved it out if necessary at a later date.

Fair enough statement. It may indeed have been easier to manage for
the EC2 API. But Glance is serving more than the EC2 API (which is
already served adequately by nova-objectstore, no?). Rackspace needed
Glance to move forward with non-EC2 stuff, which is what was done for
Bexar.

Glance is a separate project from Nova. It's an image service. Nova
can use Glance or not use Glance, and people can deploy Glance without
Nova at all (if all they want to do is have a public image repository
(like Ubuntu, for example...)).

While we can work on integration points with Nova, and as soon as I
get input from Nova devs I am making blueprints in Glance, it's not
necessarily a bad thing that the design and blueprints for Glance are
separate from Nova. The Glance project needs to move at a different
pace at this point IMHO.

-jay



Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-01-28 Thread Jay Pipes
On Fri, Jan 28, 2011 at 12:09 PM, Vishvananda Ishaya
vishvana...@gmail.com wrote:
 And please don't get the idea that I'm complaining about the glance project 
 itself, or how it is managed.  As far as I'm concerned, Jay and the other 
 developers have done an excellent job with glance.  It is just very difficult 
 to keep up with multiple projects, and I think they should be kept together 
 as long as possible.

No offense taken, Vishy :)   I totally understand your points. I look
forward to getting tighter Nova-Glance integration in the coming
months.

Cheers,
Jay



Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-01-28 Thread John Purrier
Some clarification and a suggestion regarding Nova and the two new proposed 
services (Network/Volume). 

To be clear, Nova today contains both volume and network services. We can 
specify, attach, and manage block devices and also specify network related 
items, such as IP assignment and VLAN creation. I have heard there is some 
confusion on this, since we started talking about creating OpenStack services 
around these areas that will be separate from the cloud controller (Nova).

The driving factors to consider creating independent services for VM, Images, 
Network, and Volumes are 1) To allow deployment scenarios that may be scoped to 
a single service, so that we don't drag all of the Nova code in if we just want 
to deploy virtual volumes, and 2) To allow greater innovation and community 
contribution to the individual services.

Another nice effect of separation of services is that each service can scale 
horizontally per the demands of the deployment, independent of the other 
services.

We have an existing blueprint discussing the Network Service. We have *not* 
published a blueprint discussing the Volume Service, this will be coming soon.

The net is that creating the correct architecture in OpenStack Compute 
(automation and infrastructure) is a good thing as we look to the future 
evolution of the project.

Here is the suggestion. It is clear from the response on the list that 
refactoring Nova in the Cactus timeframe will be too risky, particularly as we 
are focusing Cactus on Stability, Reliability, and Deployability (along with a 
complete OpenStack API). For Cactus we should leave the network and volume 
services alone in Nova to minimize destabilizing the code base. In parallel, we 
can initiate the Network and Volume Service projects in Launchpad and allow the 
teams that form around these efforts to move in parallel, perhaps seeding their 
projects from the existing Nova code.

Once we complete Cactus we can have discussions at the Diablo DS about progress 
these efforts have made and how best to move forward with Nova integration and 
determine release targets.

Thoughts?

John

-Original Message-
From: openstack-bounces+john=openstack@lists.launchpad.net 
[mailto:openstack-bounces+john=openstack@lists.launchpad.net] On Behalf Of 
Rick Clark
Sent: Friday, January 28, 2011 9:06 AM
To: Jay Pipes
Cc: Ewan Mellor; Søren Hansen; openstack@lists.launchpad.net
Subject: Re: [Openstack] Network Service for L2/L3 Network Infrastructure 
blueprint

On 01/28/2011 08:55 AM, Jay Pipes wrote:
 On Fri, Jan 28, 2011 at 8:47 AM, Rick Clark r...@openstack.org wrote:
 I recognise the desire to do this for Cactus, but I feel that pulling 
 out the network controller (and/or volume controller) into their own 
 separate OpenStack subprojects is not a good idea for Cactus.  Looking 
 at the (dozens of) blueprints slated for Cactus, doing this kind of 
 major rework will mean that most (if not all) of those blueprints will 
 have to be delayed while this pulling out of code occurs. This will 
 definitely jeopardise the Cactus release.
 
 My vote is to delay this at a minimum to the Diablo release.
 
 And, for the record, I haven't seen any blueprints for the network as 
 a service or volume as a service projects. Can someone point us to 
 them?
 
 Thanks!
 jay

Whew, Jay, I thought you were advocating major changes in Cactus.  That would 
completely mess up my view of the world :)

https://blueprints.launchpad.net/nova/+spec/bexar-network-service
https://blueprints.launchpad.net/nova/+spec/bexar-extend-network-model
https://blueprints.launchpad.net/nova/+spec/bexar-network-service


It was discussed at ODS, but I have not seen any code or momentum to date.

I think it is worthwhile to have an open discussion about what, if any, of this
can be safely done in Cactus.  I, like you, Jay, feel a bit conservative.  I
think we lost focus of the reason we chose time-based releases. It is time to
focus on Nova being a solid, trustworthy platform.  Features land when they are
of sufficient quality; releases contain only the features that passed muster.

I will be sending an email about the focus and theme of Cactus in a little 
while.

Rick






Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-01-28 Thread Andy Smith
On Fri, Jan 28, 2011 at 10:18 AM, John Purrier j...@openstack.org wrote:

 Some clarification and a suggestion regarding Nova and the two new proposed
 services (Network/Volume).

 To be clear, Nova today contains both volume and network services. We can
 specify, attach, and manage block devices and also specify network related
 items, such as IP assignment and VLAN creation. I have heard there is some
 confusion on this, since we started talking about creating OpenStack
 services around these areas that will be separate from the cloud controller
 (Nova).

 The driving factors to consider creating independent services for VM,
 Images, Network, and Volumes are 1) To allow deployment scenarios that may
 be scoped to a single service, so that we don't drag all of the Nova code in
 if we just want to deploy virtual volumes, and 2) To allow greater
 innovation and community contribution to the individual services.

 Another nice effect of separation of services is that each service can
 scale horizontally per the demands of the deployment, independent of the
 other services.


This statement is invalid: Nova is already broken into services, each of
which can be dealt with individually and scaled as such; whether the code is
part of the same repository has little bearing on that. The goals of scaling
are orthogonal to the location of the code and are much more related to
separation of concerns in the code, making sure that volume code does not
rely on compute code, for example (which at this point it mostly doesn't).
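Andy's point here — that scaling depends on separation of concerns in the code, not on which repository the code lives in — can be illustrated with a toy sketch. This is purely hypothetical and not actual Nova code; the function names and message format are invented for the illustration. Two "services" in the same codebase communicate only through a message bus, so either side can be scaled by running more worker processes:

```python
import queue
import threading

# Hypothetical illustration: services decoupled by a message bus rather than
# by living in separate repositories. All names here are invented.
bus = queue.Queue()

def compute_service(size_gb):
    # Compute knows nothing about how volumes are implemented;
    # it only publishes a request message on the shared bus.
    bus.put({"method": "create_volume", "size_gb": size_gb})

def volume_service(results):
    # An independently scalable worker consuming from the same bus.
    msg = bus.get()
    results.append(f"created {msg['size_gb']} GB volume")

results = []
worker = threading.Thread(target=volume_service, args=(results,))
worker.start()
compute_service(10)
worker.join()
print(results[0])  # created 10 GB volume
```

Because the only coupling is the message on the bus, moving `volume_service` into its own project (or starting ten copies of it) changes nothing for the caller — which is the sense in which repository location is orthogonal to scaling.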



 We have an existing blueprint discussing the Network Service. We have *not*
 published a blueprint discussing the Volume Service, this will be coming
 soon.

 The net is that creating the correct architecture in OpenStack Compute
 (automation and infrastructure) is a good thing as we look to the future
 evolution of the project.

 Here is the suggestion. It is clear from the response on the list that
 refactoring Nova in the Cactus timeframe will be too risky, particularly as
 we are focusing Cactus on Stability, Reliability, and Deployability (along
 with a complete OpenStack API). For Cactus we should leave the network and
 volume services alone in Nova to minimize destabilizing the code base. In
 parallel, we can initiate the Network and Volume Service projects in
 Launchpad and allow the teams that form around these efforts to move in
 parallel, perhaps seeding their projects from the existing Nova code.


That suggestion is contradictory: first you say not to separate, then you
suggest creating separate projects. I am against creating separate projects;
the development is part of Nova until at least Cactus.


 Once we complete Cactus we can have discussions at the Diablo DS about
 progress these efforts have made and how best to move forward with Nova
 integration and determine release targets.

 Thoughts?

 John

 -Original Message-
 From: openstack-bounces+john=openstack@lists.launchpad.net
 [mailto:openstack-bounces+john=openstack@lists.launchpad.net] On Behalf Of Rick Clark
 Sent: Friday, January 28, 2011 9:06 AM
 To: Jay Pipes
 Cc: Ewan Mellor; Søren Hansen; openstack@lists.launchpad.net
 Subject: Re: [Openstack] Network Service for L2/L3 Network Infrastructure
 blueprint

 On 01/28/2011 08:55 AM, Jay Pipes wrote:
  On Fri, Jan 28, 2011 at 8:47 AM, Rick Clark r...@openstack.org wrote:
  I recognise the desire to do this for Cactus, but I feel that pulling
  out the network controller (and/or volume controller) into their own
  separate OpenStack subprojects is not a good idea for Cactus.  Looking
  at the (dozens of) blueprints slated for Cactus, doing this kind of
  major rework will mean that most (if not all) of those blueprints will
  have to be delayed while this pulling out of code occurs. This will
  definitely jeopardise the Cactus release.
 
  My vote is to delay this at a minimum to the Diablo release.
 
  And, for the record, I haven't seen any blueprints for the network as
  a service or volume as a service projects. Can someone point us to
  them?
 
  Thanks!
  jay

 Whew, Jay, I thought you were advocating major changes in Cactus.  That
 would completely mess up my view of the world :)

 https://blueprints.launchpad.net/nova/+spec/bexar-network-service
 https://blueprints.launchpad.net/nova/+spec/bexar-extend-network-model
 https://blueprints.launchpad.net/nova/+spec/bexar-network-service


 It was discussed at ODS, but I have not seen any code or momentum to date.

 I think it is worthwhile to have an open discussion about what, if any, of
 this can be safely done in Cactus.  I, like you, Jay, feel a bit
 conservative.  I think we lost focus of the reason we chose time-based
 releases. It is time to focus on Nova being a solid, trustworthy platform.
 Features land when they are of sufficient quality; releases contain only
 the features that passed muster.

 I will be sending an email about the focus and theme of Cactus

Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-01-28 Thread Thierry Carrez
John Purrier wrote:
 Here is the suggestion. It is clear from the response on the list that 
 refactoring Nova in the Cactus timeframe will be too risky, particularly as 
 we are focusing Cactus on Stability, Reliability, and Deployability (along 
 with a complete OpenStack API). For Cactus we should leave the network and 
 volume services alone in Nova to minimize destabilizing the code base. In 
 parallel, we can initiate the Network and Volume Service projects in 
 Launchpad and allow the teams that form around these efforts to move in 
 parallel, perhaps seeding their projects from the existing Nova code.
 
 Once we complete Cactus we can have discussions at the Diablo DS about 
 progress these efforts have made and how best to move forward with Nova 
 integration and determine release targets.

I agree that there is value in starting the proof-of-concept work around
the network services, without sacrificing too many developers to it, so
that a good plan can be presented and discussed at the Diablo Summit.

While volume sounds relatively simple to me, network sounds significantly
more complex (just looking at the code, the network manager code is
currently used by both nova-compute and nova-network to modify the local
networking stack, so it's more than just handing out IP addresses
through an API).

Cheers,

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-01-28 Thread John Purrier
You are correct: the networking service will be more complex than the volume
service. The existing blueprint is pretty comprehensive, not only
encompassing the functionality that exists in today's network service in
Nova, but also forward-looking functionality around flexible
networking/Open vSwitch and Layer 2 network bridging between cloud
deployments.

This will be a longer term project and will serve as the bedrock for many
future OpenStack capabilities.

John

-Original Message-
From: openstack-bounces+john=openstack@lists.launchpad.net
[mailto:openstack-bounces+john=openstack@lists.launchpad.net] On Behalf
Of Thierry Carrez
Sent: Friday, January 28, 2011 1:52 PM
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] Network Service for L2/L3 Network Infrastructure
blueprint

John Purrier wrote:
 Here is the suggestion. It is clear from the response on the list that
refactoring Nova in the Cactus timeframe will be too risky, particularly as
we are focusing Cactus on Stability, Reliability, and Deployability (along
with a complete OpenStack API). For Cactus we should leave the network and
volume services alone in Nova to minimize destabilizing the code base. In
parallel, we can initiate the Network and Volume Service projects in
Launchpad and allow the teams that form around these efforts to move in
parallel, perhaps seeding their projects from the existing Nova code.
 
 Once we complete Cactus we can have discussions at the Diablo DS about
progress these efforts have made and how best to move forward with Nova
integration and determine release targets.

I agree that there is value in starting the proof-of-concept work around
the network services, without sacrificing too many developers to it, so
that a good plan can be presented and discussed at the Diablo Summit.

While volume sounds relatively simple to me, network sounds significantly
more complex (just looking at the code, the network manager code is
currently used by both nova-compute and nova-network to modify the local
networking stack, so it's more than just handing out IP addresses
through an API).

Cheers,

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-01-28 Thread John Purrier
Thanks for the response, Andy. I think we actually agree on this :)

 

You said:

 

This statement is invalid: Nova is already broken into services, each of which 
can be dealt with individually and scaled as such; whether the code is part of 
the same repository has little bearing on that. The goals of scaling are 
orthogonal to the location of the code and are much more related to separation 
of concerns in the code, making sure that volume code does not rely on compute 
code, for example (which at this point it mostly doesn't).

 

The fact that the volume code and the compute code are not coupled makes the 
separation easy. One factor that I did not mention is that each service will 
present public, management, and optional extension APIs, allowing each service 
to be deployed independently.
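A minimal sketch of the shape John describes — a service presenting a public API, a management API, and an optional extension mechanism — might look like the following. This is purely illustrative: every class, method, and address here is hypothetical and not taken from Nova or any OpenStack specification; it only shows how generic operations and plugin-specific extensions could be kept on separate surfaces.

```python
class NetworkService:
    """Hypothetical standalone service with public, management,
    and extension API surfaces (illustration only)."""

    def __init__(self):
        self._allocations = {}  # tenant_id -> list of allocated addresses
        self._extensions = {}   # name -> callable (plugin-specific operations)
        self._next_host = 1

    # -- public API: generic operations every plugin could support --
    def allocate_ip(self, tenant_id):
        addr = f"10.0.0.{self._next_host}"
        self._next_host += 1
        self._allocations.setdefault(tenant_id, []).append(addr)
        return addr

    # -- management API: admin-facing operations --
    def list_allocations(self):
        return dict(self._allocations)

    # -- extension mechanism: plugin-specific operations register here --
    def register_extension(self, name, fn):
        self._extensions[name] = fn

    def call_extension(self, name, *args, **kwargs):
        return self._extensions[name](*args, **kwargs)


svc = NetworkService()
svc.register_extension("create_vlan", lambda vlan_id: f"vlan-{vlan_id}")
print(svc.allocate_ip("tenant-a"))            # 10.0.0.1
print(svc.call_extension("create_vlan", 42))  # vlan-42
```

The design choice being sketched is the one discussed earlier in the thread: operations every plugin can implement live on the core (public/management) surface, while anything plugin-specific goes through the extension registry, so the core API stays independent of any particular network model.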

 

You said:

 

That suggestion is contradictory, first you say not to separate then you 
suggest creating separate projects. I am against creating separate projects, 
the development is part of Nova until at least Cactus.

 

This is exactly my suggestion below. Keep Nova monolithic until Cactus, then 
integrate the new services once Cactus is shipped. There is work to be done to 
create the service frameworks, API engines, and extension mechanisms, and to 
port the existing functionality. All of this can be done in parallel with the 
stability work being done in the Nova code base. As far as I know there are no 
major updates coming in either the volume or network management code for this 
milestone.

 

John

 

From: Andy Smith [mailto:andys...@gmail.com] 
Sent: Friday, January 28, 2011 12:45 PM
To: John Purrier
Cc: Rick Clark; Jay Pipes; Ewan Mellor; Søren Hansen; 
openstack@lists.launchpad.net
Subject: Re: [Openstack] Network Service for L2/L3 Network Infrastructure 
blueprint

 

 

On Fri, Jan 28, 2011 at 10:18 AM, John Purrier j...@openstack.org wrote:

Some clarification and a suggestion regarding Nova and the two new proposed 
services (Network/Volume).

To be clear, Nova today contains both volume and network services. We can 
specify, attach, and manage block devices and also specify network related 
items, such as IP assignment and VLAN creation. I have heard there is some 
confusion on this, since we started talking about creating OpenStack services 
around these areas that will be separate from the cloud controller (Nova).

The driving factors to consider creating independent services for VM, Images, 
Network, and Volumes are 1) To allow deployment scenarios that may be scoped to 
a single service, so that we don't drag all of the Nova code in if we just want 
to deploy virtual volumes, and 2) To allow greater innovation and community 
contribution to the individual services.

Another nice effect of separation of services is that each service can scale 
horizontally per the demands of the deployment, independent of the other 
services.

 

This statement is invalid: Nova is already broken into services, each of which 
can be dealt with individually and scaled as such; whether the code is part of 
the same repository has little bearing on that. The goals of scaling are 
orthogonal to the location of the code and are much more related to separation 
of concerns in the code, making sure that volume code does not rely on compute 
code, for example (which at this point it mostly doesn't).

 


We have an existing blueprint discussing the Network Service. We have *not* 
published a blueprint discussing the Volume Service, this will be coming soon.

The net is that creating the correct architecture in OpenStack Compute 
(automation and infrastructure) is a good thing as we look to the future 
evolution of the project.

Here is the suggestion. It is clear from the response on the list that 
refactoring Nova in the Cactus timeframe will be too risky, particularly as we 
are focusing Cactus on Stability, Reliability, and Deployability (along with a 
complete OpenStack API). For Cactus we should leave the network and volume 
services alone in Nova to minimize destabilizing the code base. In parallel, we 
can initiate the Network and Volume Service projects in Launchpad and allow the 
teams that form around these efforts to move in parallel, perhaps seeding their 
projects from the existing Nova code.

 

That suggestion is contradictory: first you say not to separate, then you 
suggest creating separate projects. I am against creating separate projects; 
the development is part of Nova until at least Cactus.

 

Once we complete Cactus, we can have discussions at the Diablo Design Summit 
about the progress these efforts have made and how best to move forward with 
Nova integration, and determine release targets.

Thoughts?

John


-Original Message-
From: openstack-bounces+john=openstack@lists.launchpad.net 
[mailto:openstack-bounces+john=openstack@lists.launchpad.net] On Behalf Of 
Rick Clark
Sent: Friday, January 28, 2011 9:06 AM
To: Jay Pipes
Cc: Ewan Mellor; Søren

Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

2011-01-28 Thread Andy Smith
On Fri, Jan 28, 2011 at 1:19 PM, John Purrier j...@openstack.org wrote:

 Thanks for the response, Andy. I think we actually agree on this. :-)



 You said:



 *This statement is invalid, nova is already broken into services, each of
 which can be dealt with individually and scaled as such, whether the code is
 part of the same repository has little bearing on that. The goals of scaling
 are orthogonal to the location of the code and are much more related to
 separation of concerns in the code, making sure that volume code does
 not rely on compute code for example (which at this point it doesn't
 particularly).*



 The fact that the volume code and the compute code are not coupled makes the
 separation easy. One factor that I did not mention is that each service will
 present public, management, and optional extension APIs, allowing each
 service to be deployed independently.


So far that is all possible under the existing auspices of Nova. DirectAPI
will happily sit in front of any of the services independently; each of the
services, when run, can be configured to point at a different instance of
RabbitMQ; DirectAPI supports a large amount of extensibility, and pluggable
managers/drivers support a bunch more.
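As a rough illustration of that point, a deployment could start each worker
against its own message queue and database. This is a hypothetical sketch:
the binary and flag names follow the early-Nova flags convention, and the
hostnames are made up for the example, not taken from any real deployment.

```shell
# Hypothetical sketch: run Nova workers independently, each pointed at its
# own RabbitMQ instance and database (flag names assumed from early Nova).

# Run only the volume worker against a dedicated queue and database:
nova-volume --rabbit_host=rabbit-volume.example.com \
            --sql_connection=mysql://nova:secret@db-volume.example.com/nova

# Run the network worker against a different queue entirely:
nova-network --rabbit_host=rabbit-network.example.com \
             --sql_connection=mysql://nova:secret@db-network.example.com/nova
```

Nothing here requires the volume and network code to live in separate
repositories; the separation happens at deployment time through configuration.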

Decoupling of the code has always been a goal, as has providing public,
management, and extension APIs, and we aren't doing so badly.

I don't think we disagree about wanting to run things independently, but for
the moment I have seen no convincing arguments for separating the codebase.





 You said:



 *That suggestion is contradictory: first you say not to separate, then you
 suggest creating separate projects. I am against creating separate projects;
 the development is part of Nova until at least Cactus.*



 This is exactly my suggestion below. Keep Nova monolithic until Cactus,
 then integrate the new services once Cactus is shipped. There is work to be
 done to create the service frameworks, API engines, and extension mechanisms,
 and to port the existing functionality. All of this can be done in parallel
 with the stability work being done in the Nova code base. As far as I know
 there are no major updates coming in either the volume or network
 management code for this milestone.


Where is this parallel work being done if not in a separate project?

--andy





 John



 *From:* Andy Smith [mailto:andys...@gmail.com]
 *Sent:* Friday, January 28, 2011 12:45 PM
 *To:* John Purrier
 *Cc:* Rick Clark; Jay Pipes; Ewan Mellor; Søren Hansen;
 openstack@lists.launchpad.net

 *Subject:* Re: [Openstack] Network Service for L2/L3 Network
 Infrastructure blueprint





 On Fri, Jan 28, 2011 at 10:18 AM, John Purrier j...@openstack.org wrote:

 Some clarification and a suggestion regarding Nova and the two new proposed
 services (Network/Volume).

 To be clear, Nova today contains both volume and network services. We can
 specify, attach, and manage block devices and also specify network related
 items, such as IP assignment and VLAN creation. I have heard there is some
 confusion on this, since we started talking about creating OpenStack
 services around these areas that will be separate from the cloud controller
 (Nova).

 The driving factors to consider creating independent services for VM,
 Images, Network, and Volumes are 1) To allow deployment scenarios that may
 be scoped to a single service, so that we don't drag all of the Nova code in
 if we just want to deploy virtual volumes, and 2) To allow greater
 innovation and community contribution to the individual services.

 Another nice effect of separation of services is that each service can
 scale horizontally per the demands of the deployment, independent of the
 other services.



 This statement is invalid, nova is already broken into services, each of
 which can be dealt with individually and scaled as such, whether the code is
 part of the same repository has little bearing on that. The goals of scaling
 are orthogonal to the location of the code and are much more related to
 separation of concerns in the code, making sure that volume code does not
 rely on compute code for example (which at this point it doesn't
 particularly).




 We have an existing blueprint discussing the Network Service. We have *not*
 published a blueprint discussing the Volume Service, this will be coming
 soon.

 The net is that creating the correct architecture in OpenStack Compute
 (automation and infrastructure) is a good thing as we look to the future
 evolution of the project.

 Here is the suggestion. It is clear from the response on the list that
 refactoring Nova in the Cactus timeframe will be too risky, particularly as
 we are focusing Cactus on Stability, Reliability, and Deployability (along
 with a complete OpenStack API). For Cactus we should leave the network and
 volume services alone in Nova to minimize destabilizing the code base. In
 parallel, we can initiate the Network and Volume Service projects in
 Launchpad and allow the teams