Re: [Openstack] Decoupling of Network and Compute services for the new Network Service design

2011-02-25 Thread Eric Day
Hi Ed,

It sounds like we're all talking about the same thing; we just need
to start a Nova glossary so we're all on the same page about what terms
mean. :)  So, from my last email, kernel == scheduler and
scheduler == best match.

I'm not too concerned about naming of things as long as they are
accurate. I thought "scheduler" was being a bit overloaded, as it
was originally only supposed to be the "best match" functionality you
describe. I think it would be fine to use kernel though, as the same
terms are used in many different contexts in computing, and this is
certainly a different context. Having said that, it doesn't really
matter as long as it works and folks can understand what it's doing
with a brief look. :)

As for the best way to locate the host when routing requests for
existing instances, the most straightforward approach is to keep a lookup
table (probably another SQLite table would be easiest). This table
can stay up to date by registering callbacks with child zones, and
the child needs to expose an API call so the parent zone can say
"give me all changes after time X" for the initial sync (time=0)
or after a long outage (time=last update before going down).
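
A rough sketch of what that parent-side table and the "changes after
time X" sync could look like, using SQLite (all class, method, and
column names here are hypothetical, purely to illustrate the idea):

# Hypothetical sketch only -- none of these names exist in Nova today.
import sqlite3


class InstanceLocationTable(object):
    """Parent-zone lookup table mapping instance IDs to child zone/host."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS instance_locations ("
            "instance_id TEXT PRIMARY KEY, child_zone TEXT, "
            "host TEXT, updated_at REAL)")
        self.last_update = 0.0  # time=0 forces a full initial sync

    def apply_changes(self, changes):
        """Apply callback payloads: (instance_id, child_zone, host, updated_at)."""
        for instance_id, child_zone, host, updated_at in changes:
            self.db.execute(
                "INSERT OR REPLACE INTO instance_locations VALUES (?, ?, ?, ?)",
                (instance_id, child_zone, host, updated_at))
            self.last_update = max(self.last_update, updated_at)
        self.db.commit()

    def resync(self, child_zone_api):
        """After an outage, ask the child for everything newer than our last update."""
        self.apply_changes(child_zone_api.changes_since(self.last_update))

    def locate(self, instance_id):
        """Return (child_zone, host) for an instance, or None if unknown."""
        return self.db.execute(
            "SELECT child_zone, host FROM instance_locations "
            "WHERE instance_id = ?", (instance_id,)).fetchone()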

-Eric

On Fri, Feb 25, 2011 at 08:27:24AM -0600, Ed Leafe wrote:
> On Feb 24, 2011, at 2:02 PM, Eric Day wrote:
> 
> > I agree with Vish, I think the correct approach is 3. I have some
> > ideas on terminology and how to think about this. A "scheduler"
> > should not be its own top-level service. It should instead be a
> > plugin point (more like auth or db). It would plug into a new service
> > called "kernel". Another way to look at it is s/scheduler/kernel/
> > and expand kernel.
> 
> 
>   As I've been reading this thread, it did strike me that the terminology 
> was being used differently by various people. Let me see if I can explain the 
> way we've been using the terms in the development currently underway among 
> the Ozone team.
> 
>   Given an Openstack deployment with several nested zones, most external 
> API requests to interact with a VM will need to be routed to the appropriate 
> compute node. The top-level API will know nothing about the VM, and so some 
> sort of communication must be established to handle resolving these calls. 
> This is what we have been calling the "scheduler", and what you seem to be 
> referring to as the "kernel". Example: a request to pause a VM will have to 
> be routed through the zone structure to find the host for that VM in order to 
> pause it. The method used to efficiently locate the host is currently the 
> focus of much discussion.
> 
>   One other such task (and probably the most involved) will be the 
> creation of a new VM. This will require determining which hosts can 
> accommodate the proposed instance, and a way of weighting or otherwise 
> choosing the host from among all that are available. It will also require 
> additional actions, such as establishing the network configuration, but let's 
> keep this discussion focused. The process of selecting the host to receive 
> the new VM is something we don't have a catchy name for, but we have been 
> referring to "best match", since that's the current term used in the 
> Slicehost codebase. We have assumed that this will be pluggable, with the 
> default being the current random selector, so that the way Rackspace 
> allocates its VMs can be customized to their needs, while still allowing 
> everyone else to create their own selection criteria.
> 
>   I hope that this clarifies some of what we have been talking about. 
> BTW, I understand your choice of the term "kernel", but I would prefer 
> something that might be less confusing, since kernel already has a common 
> meaning in computing.
> 
> 
> -- Ed Leafe

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Decoupling of Network and Compute services for the new Network Service design

2011-02-25 Thread Trey Morris
There can be reconfiguration of the network, just not adding/removing of
vifs. The addition of a new vif would generally only be done if an
additional nic or bridge was added to the host. I figure this to be a rare
occurrence. You can add or remove IPs to/from an instance by configuring
aliases on existing vifs (which the agent will do). Biggest case I can think
of where dhcp is not appropriate: service providers.
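
For illustration, the agent-side alias handling could be as small as
something like this (hypothetical helper names, shelling out to iproute2):

# Hypothetical agent helper: add/remove an address on an existing vif
# without adding or removing the vif itself.
import subprocess


def add_ip_alias(vif_dev, cidr):
    """e.g. add_ip_alias('vnet0', '10.0.0.5/24')"""
    subprocess.check_call(["ip", "addr", "add", cidr, "dev", vif_dev])


def remove_ip_alias(vif_dev, cidr):
    subprocess.check_call(["ip", "addr", "del", cidr, "dev", vif_dev])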

On Thu, Feb 24, 2011 at 6:49 PM, Dan Mihai Dumitriu wrote:

> So in cases where static injection into the image is used, it seems there
> can be no dynamic reconfiguration of the network, i.e. one cannot plug a vNIC
> into a network after the VM is started.
>
> Just so we're all on the same page, in what cases is dhcp/ra not
> appropriate?
>
> Cheers,
> Dan
>
> On Feb 25, 2011 7:11 AM, "Trey Morris"  wrote:
> > definitely not fine to use dhcp in all cases.
> >
> > On Thu, Feb 24, 2011 at 3:28 AM, Dan Mihai Dumitriu  >wrote:
> >
> >> If we can dynamically plug (and presumably unplug) a vNIC into a
> >> vPort, and assign the IP at that time, does that imply that we cannot
> >> use the IP injection into the VM image? Is it fine to use DHCP or RA
> >> in all cases?
> >>
> >>
> >> On Wed, Feb 23, 2011 at 22:29, Ishimoto, Ryu  wrote:
> >> >
> >> > Hi everyone,
> >> > I have been following the discussion regarding the new 'pluggable'
> >> network
> >> > service design, and wanted to drop in my 2 cents ;-)
> >> > Looking at the current implementation of Nova, there seems to be a
> very
> >> > strong coupling between compute and network services. That is, tasks
> >> that
> >> > are done by the network service are executed at the time of VM
> >> > instantiation, making the compute code dependent on the network
> service,
> >> and
> >> > vice versa. This dependency seems undesirable to me as it adds
> >> restrictions
> >> > to implementing 'pluggable' network services, which can vary, with
> many
> >> ways
> >> > to implement them.
> >> > Would anyone be opposed to completely separating out the network
> service
> >> > logic from compute? I don't think it's too difficult to accomplish
> this,
> >> > but to do so, it will require that the network service tasks, such as
> IP
> >> > allocation, be executed by the user prior to instantiating the VM.
> >> > In the new network design(from what I've read up so far), there are
> >> concepts
> >> > of vNICs, and vPorts, where vNICs are network interfaces that are
> >> associated
> >> > with the VMs, and vPorts are logical ports that vNICs are plugged into
> >> for
> >> > network connectivity. If we are to decouple network and compute
> >> services,
> >> > the steps required for FlatManager networking service would look
> >> something
> >> > like:
> >> > 1. Create ports for a network. Each port is associated with an IP
> >> address
> >> > in this particular case, since it's an IP-based network.
> >> > 2. Create a vNIC
> >> > 3. Plug a vNIC into an available vPort. In this case it just means
> >> mapping
> >> > this vNIC to an unused IP address.
> >> > 4. Start a VM with this vNIC. vNIC is already mapped to an IP address,
> >> so
> >> > compute does not have to ask the network service to do any IP
> allocation.
> >> > In this simple example, by removing the request for IP allocation from
> >> > compute, the network service is no longer needed during the VM
> >> > instantiation. While it may require more steps for the network setup
> in
> >> > more complex cases, it would still hold true that, once the vNIC and
> >> vPort
> >> > are mapped, compute service would not require any network service
> during
> >> the
> >> > VM instantiation.
> >> > IF there is still a need for the compute to access the network
> service,
> >> > there is another way. Currently, the setup of the network
> >> > environment(bridge, vlan, etc) is all done by the compute service.
> With
> >> the
> >> > new network model, these tasks should either be separated out into a
> >> > standalone service('network agent') or at least be separated out into
> >> > modules with generic APIs that the network plugin providers can
> >> implement.
> >> > By doing so, and if we can agree on a rule that the compute service
> must
> >> > always go through the network agent to access the network service, we
> can
> >> > still achieve the separation of compute from network services. Network
> >> > agents should have full access to the network service as they are both
> >> > implemented by the same plugin provider. Compute would not be aware of
> >> the
> >> > network agent accessing the network service.
> >> > With this design, the network service is only tied to the network REST
> >> API
> >> > and the network agent, both of which are implemented by the plugin
> >> > providers. This would allow them to implement their network service
> >> without
> >> > worrying about the details of the compute service.
> >> > Please let me know if all this made any sense. :-) Would love to get
> >> some
> >> > feedback.
> >> > Regards,
> 

Re: [Openstack] Decoupling of Network and Compute services for the new Network Service design

2011-02-25 Thread Ed Leafe
On Feb 24, 2011, at 2:02 PM, Eric Day wrote:

> I agree with Vish, I think the correct approach is 3. I have some
> ideas on terminology and how to think about this. A "scheduler"
> should not be its own top-level service. It should instead be a
> plugin point (more like auth or db). It would plug into a new service
> called "kernel". Another way to look at it is s/scheduler/kernel/
> and expand kernel.


As I've been reading this thread, it did strike me that the terminology 
was being used differently by various people. Let me see if I can explain the 
way we've been using the terms in the development currently underway among the 
Ozone team.

Given an Openstack deployment with several nested zones, most external 
API requests to interact with a VM will need to be routed to the appropriate 
compute node. The top-level API will know nothing about the VM, and so some 
sort of communication must be established to handle resolving these calls. This 
is what we have been calling the "scheduler", and what you seem to be referring 
to as the "kernel". Example: a request to pause a VM will have to be routed 
through the zone structure to find the host for that VM in order to pause it. 
The method used to efficiently locate the host is currently the focus of much 
discussion.

One other such task (and probably the most involved) will be the 
creation of a new VM. This will require determining which hosts can accommodate 
the proposed instance, and a way of weighting or otherwise choosing the host 
from among all that are available. It will also require additional actions, 
such as establishing the network configuration, but let's keep this discussion 
focused. The process of selecting the host to receive the new VM is something 
we don't have a catchy name for, but we have been referring to "best match", 
since that's the current term used in the Slicehost codebase. We have assumed 
that this will be pluggable, with the default being the current random 
selector, so that the way Rackspace allocates its VMs can be customized to 
their needs, while still allowing everyone else to create their own selection 
criteria.
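
To make the pluggable part concrete, here is a stripped-down sketch of what
such a "best match" plug point might look like; the class and field names
are invented for illustration and the real interface may well differ:

# Hypothetical sketch of a pluggable host selector with a random default.
import random


class BestMatchDriver(object):
    """Interface a deployment-specific selector would implement."""

    def select_host(self, hosts, instance_spec):
        raise NotImplementedError()


class RandomDriver(BestMatchDriver):
    """Default behaviour: pick any host that can fit the instance."""

    def select_host(self, hosts, instance_spec):
        candidates = [h for h in hosts
                      if h["free_ram_mb"] >= instance_spec["memory_mb"]]
        if not candidates:
            raise RuntimeError("No host can accommodate the requested instance")
        return random.choice(candidates)


class LeastLoadedDriver(BestMatchDriver):
    """Example of a custom policy: weight hosts by free RAM."""

    def select_host(self, hosts, instance_spec):
        candidates = [h for h in hosts
                      if h["free_ram_mb"] >= instance_spec["memory_mb"]]
        if not candidates:
            raise RuntimeError("No host can accommodate the requested instance")
        return max(candidates, key=lambda h: h["free_ram_mb"])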

I hope that this clarifies some of what we have been talking about. 
BTW, I understand your choice of the term "kernel", but I would prefer 
something that might be less confusing, since kernel already has a common 
meaning in computing.


-- Ed Leafe


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Decoupling of Network and Compute services for the new Network Service design

2011-02-25 Thread Thierry Carrez
Eric Day wrote:
> I agree with Vish, I think the correct approach is 3. I have some
> ideas on terminology and how to think about this. A "scheduler"
> should not be its own top-level service. It should instead be a
> plugin point (more like auth or db). It would plug into a new service
> called "kernel". Another way to look at it is s/scheduler/kernel/
> and expand kernel.

I agree (and that was my point above): with a
kernel/supervisor/orchestrator top-level service, you should fold the
scheduler function into it.

That said, if the scheduler ends up being a plugin in the
kernel/supervisor and no longer its own top-level service, it looks to
me like you're advocating approach 2 (the scheduler is rewrapped into some
sort of supervisor) rather than approach 3 (create a compute supervisor
separate from the scheduler)?

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Decoupling of Network and Compute services for the new Network Service design

2011-02-24 Thread Dan Mihai Dumitriu
So in cases where static injection into the image is used, it seems there
can be no dynamic reconfiguration of the network, i.e. one cannot plug a vNIC
into a network after the VM is started.

Just so we're all on the same page, in what cases is dhcp/ra not
appropriate?

Cheers,
Dan

On Feb 25, 2011 7:11 AM, "Trey Morris"  wrote:
> definitely not fine to use dhcp in all cases.
>
> On Thu, Feb 24, 2011 at 3:28 AM, Dan Mihai Dumitriu wrote:
>
>> If we can dynamically plug (and presumably unplug) a vNIC into a
>> vPort, and assign the IP at that time, does that imply that we cannot
>> use the IP injection into the VM image? Is it fine to use DHCP or RA
>> in all cases?
>>
>>
>> On Wed, Feb 23, 2011 at 22:29, Ishimoto, Ryu  wrote:
>> >
>> > Hi everyone,
>> > I have been following the discussion regarding the new 'pluggable'
>> network
>> > service design, and wanted to drop in my 2 cents ;-)
>> > Looking at the current implementation of Nova, there seems to be a very
>> > strong coupling between compute and network services. That is, tasks
>> that
>> > are done by the network service are executed at the time of VM
>> > instantiation, making the compute code dependent on the network
service,
>> and
>> > vice versa. This dependency seems undesirable to me as it adds
>> restrictions
>> > to implementing 'pluggable' network services, which can vary, with many
>> ways
>> > to implement them.
>> > Would anyone be opposed to completely separating out the network
service
>> > logic from compute? I don't think it's too difficult to accomplish
this,
>> > but to do so, it will require that the network service tasks, such as
IP
>> > allocation, be executed by the user prior to instantiating the VM.
>> > In the new network design(from what I've read up so far), there are
>> concepts
>> > of vNICs, and vPorts, where vNICs are network interfaces that are
>> associated
>> > with the VMs, and vPorts are logical ports that vNICs are plugged into
>> for
>> > network connectivity. If we are to decouple network and compute
>> services,
>> > the steps required for FlatManager networking service would look
>> something
>> > like:
>> > 1. Create ports for a network. Each port is associated with an IP
>> address
>> > in this particular case, since it's an IP-based network.
>> > 2. Create a vNIC
> >> > 3. Plug a vNIC into an available vPort. In this case it just means
>> mapping
>> > this vNIC to an unused IP address.
>> > 4. Start a VM with this vNIC. vNIC is already mapped to an IP address,
>> so
>> > compute does not have to ask the network service to do any IP
allocation.
>> > In this simple example, by removing the request for IP allocation from
>> > compute, the network service is no longer needed during the VM
>> > instantiation. While it may require more steps for the network setup in
>> > more complex cases, it would still hold true that, once the vNIC and
>> vPort
>> > are mapped, compute service would not require any network service
during
>> the
>> > VM instantiation.
>> > IF there is still a need for the compute to access the network service,
>> > there is another way. Currently, the setup of the network
>> > environment(bridge, vlan, etc) is all done by the compute service. With
>> the
>> > new network model, these tasks should either be separated out into a
>> > standalone service('network agent') or at least be separated out into
>> > modules with generic APIs that the network plugin providers can
>> implement.
>> > By doing so, and if we can agree on a rule that the compute service
must
>> > always go through the network agent to access the network service, we
can
>> > still achieve the separation of compute from network services. Network
>> > agents should have full access to the network service as they are both
>> > implemented by the same plugin provider. Compute would not be aware of
>> the
>> > network agent accessing the network service.
>> > With this design, the network service is only tied to the network REST
>> API
>> > and the network agent, both of which are implemented by the plugin
>> > providers. This would allow them to implement their network service
>> without
>> > worrying about the details of the compute service.
>> > Please let me know if all this made any sense. :-) Would love to get
>> some
> >> > feedback.
>> > Regards,
>> > Ryu Ishimoto
>> >
>> > ___
>> > Mailing list: https://launchpad.net/~openstack
>> > Post to : openstack@lists.launchpad.net
>> > Unsubscribe : https://launchpad.net/~openstack
>> > More help : https://help.launchpad.net/ListHelp
>> >
>> >
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help : https://help.launchpad.net/ListHelp
>>
>
>

Re: [Openstack] Decoupling of Network and Compute services for the new Network Service design

2011-02-24 Thread Trey Morris
Definitely not fine to use DHCP in all cases.

On Thu, Feb 24, 2011 at 3:28 AM, Dan Mihai Dumitriu wrote:

> If we can dynamically plug (and presumably unplug) a vNIC into a
> vPort, and assign the IP at that time, does that imply that we cannot
> use the IP injection into the VM image?  Is it fine to use DHCP or RA
> in all cases?
>
>
> On Wed, Feb 23, 2011 at 22:29, Ishimoto, Ryu  wrote:
> >
> > Hi everyone,
> > I have been following the discussion regarding the new 'pluggable'
> network
> > service design, and wanted to drop in my 2 cents ;-)
> > Looking at the current implementation of Nova, there seems to be a very
> > strong coupling between compute and network services.  That is, tasks
> that
> > are done by the network service are executed at the time of VM
> > instantiation, making the compute code dependent on the network service,
> and
> > vice versa.  This dependency seems undesirable to me as it adds
> restrictions
> > to implementing 'pluggable' network services, which can vary, with many
> ways
> > to implement them.
> > Would anyone be opposed to completely separating out the network service
> > logic from compute?  I don't think it's too difficult to accomplish this,
> > but to do so, it will require that the network service tasks, such as IP
> > allocation, be executed by the user prior to instantiating the VM.
> > In the new network design(from what I've read up so far), there are
> concepts
> > of vNICs, and vPorts, where vNICs are network interfaces that are
> associated
> > with the VMs, and vPorts are logical ports that vNICs are plugged into
> for
> > network connectivity.  If we are to decouple network and compute
> services,
> > the steps required for FlatManager networking service would look
> something
> > like:
> > 1. Create ports for a network.  Each port is associated with an IP
> address
> > in this particular case, since it's an IP-based network.
> > 2. Create a vNIC
> > 3. Plug a vNIC into an available vPort.  In this case it just means
> mapping
> > this vNIC to an unused IP address.
> > 4. Start a VM with this vNIC.  vNIC is already mapped to an IP address,
> so
> > compute does not have to ask the network service to do any IP allocation.
> > In this simple example, by removing the request for IP allocation from
> > compute, the network service is no longer needed during the VM
> > instantiation.  While it may require more steps for the network setup in
> > more complex cases, it would still hold true that, once the vNIC and
> vPort
> > are mapped, compute service would not require any network service during
> the
> > VM instantiation.
> > IF there is still a need for the compute to access the network service,
> > there is another way.  Currently, the setup of the network
> > environment(bridge, vlan, etc) is all done by the compute service. With
> the
> > new network model, these tasks should either be separated out into a
> > standalone service('network agent') or at least be separated out into
> > modules with generic APIs that the network plugin providers can
> implement.
> >  By doing so, and if we can agree on a rule that the compute service must
> > always go through the network agent to access the network service, we can
> > still achieve the separation of compute from network services.   Network
> > agents should have full access to the network service as they are both
> > implemented by the same plugin provider.  Compute would not be aware of
> the
> > network agent accessing the network service.
> > With this design, the network service is only tied to the network REST
> API
> > and the network agent, both of which are implemented by the plugin
> > providers.  This would allow them to implement their network service
> without
> > worrying about the details of the compute service.
> > Please let me know if all this made any sense. :-)  Would love to get
> some
> > feedback.
> > Regards,
> > Ryu Ishimoto
> >
> > ___
> > Mailing list: https://launchpad.net/~openstack
> > Post to : openstack@lists.launchpad.net
> > Unsubscribe : https://launchpad.net/~openstack
> > More help   : https://help.launchpad.net/ListHelp
> >
> >
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>



Re: [Openstack] Decoupling of Network and Compute services for the new Network Service design

2011-02-24 Thread Eric Day
I agree with Vish, I think the correct approach is 3. I have some
ideas on terminology and how to think about this. A "scheduler"
should not be its own top-level service. It should instead be a
plugin point (more like auth or db). It would plug into a new service
called "kernel". Another way to look at it is s/scheduler/kernel/
and expand kernel.

The kernel will, as you might guess, tie together various resources. This
includes routing, scheduling, and ensuring things get done, just like
an OS kernel manages vm, vfs, hardware devices, process scheduling,
etc. There can be multiple kernel processes/workers (just like there
can be multiple schedulers) for HA. The kernel should define a scheduler API
that schedulers can implement and use (just like today).
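
As a rough sketch of that relationship (every name below is hypothetical),
the kernel would own the coordination and delegate only the placement
decision to whichever scheduler driver is configured:

# Hypothetical sketch: scheduler as a plugin point inside a "kernel" service.


class SchedulerDriver(object):
    """The API the kernel would define and schedulers would implement."""

    def schedule(self, context, instance_spec):
        """Return the host that should receive the new instance."""
        raise NotImplementedError()


class Kernel(object):
    def __init__(self, scheduler_driver, network_api, compute_api):
        self.scheduler = scheduler_driver   # any SchedulerDriver implementation
        self.network_api = network_api
        self.compute_api = compute_api

    def run_instance(self, context, instance_spec):
        # The kernel ties the pieces together; the driver only picks a host.
        host = self.scheduler.schedule(context, instance_spec)
        net_info = self.network_api.allocate_for_instance(context, instance_spec)
        self.compute_api.run_instance_on_host(context, host, instance_spec, net_info)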

My main concern is that 'scheduler' is quickly becoming overloaded and
doing things besides scheduling.

Regardless of naming, we should make sure compute nodes never have the
ability to initiate resource requests directly, for security reasons;
they should always go through the kernel (which verifies the request
is valid). The API servers should not be coordinating either; they
should be a simple shim over the kernel API (which you can think of as
nova/compute.api, nova/network.api, etc. today).

-Eric

On Wed, Feb 23, 2011 at 10:26:55AM -0800, Vishvananda Ishaya wrote:
> Agreed that this is the right way to go.
> 
> We need some sort of supervisor to tell the network to allocate the network 
> before dispatching a message to compute.  I see three possibilities (from 
> easiest to hardest):
> 
> 1. Make the call in /nova/compute/api.py (this code runs on the api host) 
> 2. Make the call in the scheduler (the scheduler then becomes sort of a 
> supervisor to make sure all setup occurs for a vm to launch)
> 3. Create a separate compute supervisor that is responsible for managing the 
> calls to different components
> 
> The easiest seems to be 1, but unfortunately it forces us to wait for the 
> network allocation to finish before returning to the user which i dislike.
> 
> I think ultimately 3 is probably the best solution, but for now I suggest 2 
> as a middle ground between easy and best.
> 
> Vish
> 
> On Feb 23, 2011, at 5:29 AM, Ishimoto, Ryu wrote:
> 
> > 
> > Hi everyone,
> > 
> > I have been following the discussion regarding the new 'pluggable' network 
> > service design, and wanted to drop in my 2 cents ;-)
> > 
> > Looking at the current implementation of Nova, there seems to be a very 
> > strong coupling between compute and network services.  That is, tasks that 
> > are done by the network service are executed at the time of VM 
> > instantiation, making the compute code dependent on the network service, 
> > and vice versa.  This dependency seems undesirable to me as it adds 
> > restrictions to implementing 'pluggable' network services, which can vary, 
> > with many ways to implement them.
> > 
> > Would anyone be opposed to completely separating out the network service 
> > logic from compute?  I don't think it's too difficult to accomplish this, 
> > but to do so, it will require that the network service tasks, such as IP 
> > allocation, be executed by the user prior to instantiating the VM.  
> > 
> > In the new network design(from what I've read up so far), there are 
> > concepts of vNICs, and vPorts, where vNICs are network interfaces that are 
> > associated with the VMs, and vPorts are logical ports that vNICs are 
> > plugged into for network connectivity.  If we are to decouple network and 
> > compute services, the steps required for FlatManager networking service 
> > would look something like:
> > 
> > 1. Create ports for a network.  Each port is associated with an IP address 
> > in this particular case, since it's an IP-based network.
> > 2. Create a vNIC
> > 3. Plug a vNIC into an available vPort.  In this case it just means mapping 
> > this vNIC to an unused IP address.
> > 4. Start a VM with this vNIC.  vNIC is already mapped to an IP address, so 
> > compute does not have to ask the network service to do any IP allocation. 
> > 
> > In this simple example, by removing the request for IP allocation from 
> > compute, the network service is no longer needed during the VM 
> > instantiation.  While it may require more steps for the network setup in 
> > more complex cases, it would still hold true that, once the vNIC and vPort 
> > are mapped, compute service would not require any network service during 
> > the VM instantiation.
> > 
> > IF there is still a need for the compute to access the network service, 
> > there is another way.  Currently, the setup of the network 
> > environment(bridge, vlan, etc) is all done by the compute service. With the 
> > new network model, these tasks should either be separated out into a 
> > standalone service('network agent') or at least be separated out into 
> > modules with generic APIs that the network plugin providers can implement.  
> > By doing so, and if we can agree on a rule that the compute se

Re: [Openstack] Decoupling of Network and Compute services for the new Network Service design

2011-02-24 Thread Monsyne Dragon

On 2/23/11 12:26 PM, Vishvananda Ishaya wrote:

Agreed that this is the right way to go.

We need some sort of supervisor to tell the network to allocate the network 
before dispatching a message to compute.  I see three possibilities (from 
easiest to hardest):

1. Make the call in /nova/compute/api.py (this code runs on the api host)
2. Make the call in the scheduler (the scheduler then becomes sort of a 
supervisor to make sure all setup occurs for a vm to launch)
3. Create a separate compute supervisor that is responsible for managing the 
calls to different components

The easiest seems to be 1, but unfortunately it forces us to wait for the 
network allocation to finish before returning to the user which i dislike.

I think ultimately 3 is probably the best solution, but for now I suggest 2 as 
a middle ground between easy and best.

Actually, thinking about this...
What if we had some concept of a 'tasklist' of some sort?
The scheduler would handle this, looking at the first non-completed task 
on the list, and dispatching it.
Possibly, each task could pop things on/off the list too, and include 
result data for complete and/or failed tasks.


Possibly this could work like:

- Api generates a one task tasklist: 'gimme an instance w/ flavor x,
  requirement y ...'
- Scheduler dispatches this to a compute node.
- Compute node does some prep work (mebbe allocating an instance_id, or
  somesuch), and returns back the tasklist looking kindof like:

    """
    1. gimme an instance w/ flavor x, requirement y ...: [Done, instance_id = 1234]
    2. allocate network for instance1234
    3. build instance1234
    """

- Scheduler looks at next task on list, and dispatches to a network worker.
- Network worker does magic, and returns tasklist:

    """
    1. gimme an instance w/ flavor x, requirement y ...: [Done, instance_id = 1234]
    2. allocate network for instance1234: [Done, network_info=]
    3. build instance1234
    """

- Scheduler looks at next task, dispatches to compute worker.
- Compute worker actually builds instance, with network info as allocated.
** tasklist done.

(This could also allow for retries. A worker could just return the
tasklist with a soft error on that task. The scheduler would see the
same task at the top of the list and would reschedule it, but could use
the info that it failed on host x to avoid sending it there again.)
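
A toy version of that dispatch loop, just to pin the idea down (names
invented; real workers would be message casts rather than direct
function calls):

# Hypothetical sketch of the tasklist dispatch loop described above.


def run_tasklist(tasklist, workers, max_passes=10):
    """tasklist: ordered list of dicts like
    {'op': 'build_instance', 'status': None, 'result': None}."""
    for _ in range(max_passes):
        pending = [t for t in tasklist if t["status"] != "done"]
        if not pending:
            return tasklist                  # ** tasklist done
        task = pending[0]
        worker = workers[task["op"]]         # e.g. a network or compute worker
        worker(task, tasklist)               # may complete the task, append new
                                             # tasks, or set a soft error
        if task["status"] == "error":
            task["status"] = None            # soft error: same task stays on top
                                             # and gets rescheduled next pass
    raise RuntimeError("tasklist did not converge")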



Vish

On Feb 23, 2011, at 5:29 AM, Ishimoto, Ryu wrote:


Hi everyone,

I have been following the discussion regarding the new 'pluggable' network 
service design, and wanted to drop in my 2 cents ;-)

Looking at the current implementation of Nova, there seems to be a very strong 
coupling between compute and network services.  That is, tasks that are done by 
the network service are executed at the time of VM instantiation, making the 
compute code dependent on the network service, and vice versa.  This dependency 
seems undesirable to me as it adds restrictions to implementing 'pluggable' 
network services, which can vary, with many ways to implement them.

Would anyone be opposed to completely separating out the network service logic 
from compute?  I don't think it's too difficult to accomplish this, but to do 
so, it will require that the network service tasks, such as IP allocation, be 
executed by the user prior to instantiating the VM.

In the new network design(from what I've read up so far), there are concepts of 
vNICs, and vPorts, where vNICs are network interfaces that are associated with 
the VMs, and vPorts are logical ports that vNICs are plugged into for network 
connectivity.  If we are to decouple network and compute services, the steps 
required for FlatManager networking service would look something like:

1. Create ports for a network.  Each port is associated with an IP address in 
this particular case, since it's an IP-based network.
2. Create a vNIC
3. Plug a vNIC into an available vPort.  In this case it just means mapping this 
vNIC to an unused IP address.
4. Start a VM with this vNIC.  vNIC is already mapped to an IP address, so 
compute does not have to ask the network service to do any IP allocation.
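
In code form, those four steps might look roughly like the sketch below;
every function name is invented for illustration and no such API exists yet:

# Hypothetical client-side sketch of the decoupled vNIC/vPort boot flow.


def boot_decoupled(network_api, compute_api, network_id, flavor, image):
    # 1. Create a port on the network (the port carries an unused IP here).
    port = network_api.create_port(network_id)

    # 2. Create a vNIC.
    vnic = network_api.create_vnic()

    # 3. Plug the vNIC into the available vPort, i.e. map it to the port's IP.
    network_api.plug(vnic, port)

    # 4. Start the VM with this vNIC; compute never asks the network service
    #    for an IP, because the vNIC is already mapped to one.
    return compute_api.run_instance(image=image, flavor=flavor, vnics=[vnic])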

In this simple example, by removing the request for IP allocation from compute, 
the network service is no longer needed during the VM instantiation.  While it 
may require more steps for the network setup in more complex cases, it would 
still hold true that, once the vNIC and vPort are mapped, compute service would 
not require any network service during the VM instantiation.

IF there is still a need for the compute to access the network service, there 
is another way.  Currently, the setup of the network environment(bridge, vlan, 
etc) is all done by the compute service. With the new network model, these 
tasks should either be separated out into a standalone service('network agent') 
or at least be separated out into modules with generic APIs that the networ

Re: [Openstack] Decoupling of Network and Compute services for the new Network Service design

2011-02-24 Thread Ed Leafe
On Feb 23, 2011, at 12:26 PM, Vishvananda Ishaya wrote:

> We need some sort of supervisor to tell the network to allocate the network 
> before dispatching a message to compute.  I see three possibilities (from 
> easiest to hardest):
> 
> 1. Make the call in /nova/compute/api.py (this code runs on the api host) 
> 2. Make the call in the scheduler (the scheduler then becomes sort of a 
> supervisor to make sure all setup occurs for a vm to launch)
> 3. Create a separate compute supervisor that is responsible for managing the 
> calls to different components
> 
> The easiest seems to be 1, but unfortunately it forces us to wait for the 
> network allocation to finish before returning to the user which i dislike.
> 
> I think ultimately 3 is probably the best solution, but for now I suggest 2 
> as a middle ground between easy and best.


Currently, #2 is the approach being designed and developed, fwiw.


-- Ed Leafe




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Decoupling of Network and Compute services for the new Network Service design

2011-02-24 Thread Diego Parrilla Santamaría
I think we had this conversation some weeks ago. From my perspective,
networking services are normally not considered first-class
citizens of the 'Virtual Datacenter'. What Ishimoto-san describes is a
Virtual Switch. But networking services in day-to-day operations
also include DNS management, load balancers, firewalls, VPNs, netflow and
others. And this is the main reason to decouple all these services from the
Virtual Machine lifecycle: there are a lot of heterogeneous network services,
and some make sense tied to the VM while others make sense tied to the Virtual
Datacenter (let's call it the OpenStack Project concept).

The scheduler should handle the network services tied to the VM, but most of
the network services are tied to a different kind of resource scheduler, the
Virtual Datacenter resources scheduler. This is the orchestrator we are
discussing in this thread.

So before adding new virtual resources I think we need some kind of new
Orchestrator/Resource scheduler that handles dependencies between
resources (a netflow listener needs a virtual Port of a virtual Switch to be
allocated) and pluggable services. What I'm not sure about with this kind of
orchestration component is whether it should implement fixed or dynamic
workflows. Fixed workflows reduce complexity a lot.
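
As a tiny illustration of the fixed-workflow idea (the resource names and
allocator interface below are made up), the orchestrator could simply walk
an ordered list and check each resource's dependencies before allocating it:

# Hypothetical sketch of a fixed workflow with dependency checking.

FIXED_WORKFLOW = [
    ("virtual_switch", []),
    ("virtual_port", ["virtual_switch"]),
    ("netflow_listener", ["virtual_port"]),   # needs a vPort of a vSwitch
]


def run_fixed_workflow(allocators):
    """allocators: dict mapping resource name -> callable returning the resource."""
    created = {}
    for resource, deps in FIXED_WORKFLOW:
        missing = [d for d in deps if d not in created]
        if missing:
            raise RuntimeError("%s depends on %s" % (resource, missing))
        created[resource] = allocators[resource](*[created[d] for d in deps])
    return created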

A long email and my poor English... I hope you understand it!

-
Diego Parrilla
nubeblog.com | nubeb...@nubeblog.com | twitter.com/nubeblog
+34 649 94 43 29




On Wed, Feb 23, 2011 at 9:47 PM, John Purrier  wrote:

> And we are back to the discussion about orchestration... Given the
> flexibility of the OpenStack system and the goals of independently
> horizontally scaling services I think we will need to address this head on.
> #3 is the most difficult, but is also the right answer for the project as
> we
> look forward to adding functionality/services to the mix. This is also
> where
> we can make good use of asynchronous event publication interfaces within
> services to ensure maximum efficiency.
>
> John
>
> -Original Message-
> From: openstack-bounces+john=openstack@lists.launchpad.net
> [mailto:openstack-bounces+john=openstack@lists.launchpad.net] On
> Behalf
> Of Vishvananda Ishaya
> Sent: Wednesday, February 23, 2011 12:27 PM
> To: Ishimoto, Ryu
> Cc: openstack@lists.launchpad.net
> Subject: Re: [Openstack] Decoupling of Network and Compute services for the
> new Network Service design
>
> Agreed that this is the right way to go.
>
> We need some sort of supervisor to tell the network to allocate the network
> before dispatching a message to compute.  I see three possibilities (from
> easiest to hardest):
>
> 1. Make the call in /nova/compute/api.py (this code runs on the api host)
> 2. Make the call in the scheduler (the scheduler then becomes sort of a
> supervisor to make sure all setup occurs for a vm to launch)
> 3. Create a separate compute supervisor that is responsible for managing
> the
> calls to different components
>
> The easiest seems to be 1, but unfortunately it forces us to wait for the
> network allocation to finish before returning to the user which i dislike.
>
> I think ultimately 3 is probably the best solution, but for now I suggest 2
> as a middle ground between easy and best.
>
> Vish
>
> On Feb 23, 2011, at 5:29 AM, Ishimoto, Ryu wrote:
>
> >
> > Hi everyone,
> >
> > I have been following the discussion regarding the new 'pluggable'
> network
> service design, and wanted to drop in my 2 cents ;-)
> >
> > Looking at the current implementation of Nova, there seems to be a very
> strong coupling between compute and network services.  That is, tasks that
> are done by the network service are executed at the time of VM
> instantiation, making the compute code dependent on the network service,
> and
> vice versa.  This dependency seems undesirable to me as it adds
> restrictions
> to implementing 'pluggable' network services, which can vary, with many
> ways
> to implement them.
> >
> > Would anyone be opposed to completely separating out the network service
> logic from compute?  I don't think it's too difficult to accomplish this,
> but to do so, it will require that the network service tasks, such as IP
> allocation, be executed by the user prior to instantiating the VM.
> >
> > In the new network design(from what I've read up so far), there are
> concepts of vNICs, and vPorts, where vNICs are network interfaces that are
> associated with the VMs, and vPorts are logical ports that vNICs are
> plugged
> into for network connectivity.  If we are to decouple network and compute
> services, the steps required for FlatManager networking service would look
> somethin

Re: [Openstack] Decoupling of Network and Compute services for the new Network Service design

2011-02-24 Thread Dan Mihai Dumitriu
If we can dynamically plug (and presumably unplug) a vNIC into a
vPort, and assign the IP at that time, does that imply that we cannot
use the IP injection into the VM image?  Is it fine to use DHCP or RA
in all cases?


On Wed, Feb 23, 2011 at 22:29, Ishimoto, Ryu  wrote:
>
> Hi everyone,
> I have been following the discussion regarding the new 'pluggable' network
> service design, and wanted to drop in my 2 cents ;-)
> Looking at the current implementation of Nova, there seems to be a very
> strong coupling between compute and network services.  That is, tasks that
> are done by the network service are executed at the time of VM
> instantiation, making the compute code dependent on the network service, and
> vice versa.  This dependency seems undesirable to me as it adds restrictions
> to implementing 'pluggable' network services, which can vary, with many ways
> to implement them.
> Would anyone be opposed to completely separating out the network service
> logic from compute?  I don't think it's too difficult to accomplish this,
> but to do so, it will require that the network service tasks, such as IP
> allocation, be executed by the user prior to instantiating the VM.
> In the new network design(from what I've read up so far), there are concepts
> of vNICs, and vPorts, where vNICs are network interfaces that are associated
> with the VMs, and vPorts are logical ports that vNICs are plugged into for
> network connectivity.  If we are to decouple network and compute services,
> the steps required for FlatManager networking service would look something
> like:
> 1. Create ports for a network.  Each port is associated with an IP address
> in this particular case, since it's an IP-based network.
> 2. Create a vNIC
> 3. Plug a vNIC into an available vPort.  In this case it just means mapping
> this vNIC to an unused IP address.
> 4. Start a VM with this vNIC.  vNIC is already mapped to an IP address, so
> compute does not have to ask the network service to do any IP allocation.
> In this simple example, by removing the request for IP allocation from
> compute, the network service is no longer needed during the VM
> instantiation.  While it may require more steps for the network setup in
> more complex cases, it would still hold true that, once the vNIC and vPort
> are mapped, compute service would not require any network service during the
> VM instantiation.
> IF there is still a need for the compute to access the network service,
> there is another way.  Currently, the setup of the network
> environment(bridge, vlan, etc) is all done by the compute service. With the
> new network model, these tasks should either be separated out into a
> standalone service('network agent') or at least be separated out into
> modules with generic APIs that the network plugin providers can implement.
>  By doing so, and if we can agree on a rule that the compute service must
> always go through the network agent to access the network service, we can
> still achieve the separation of compute from network services.   Network
> agents should have full access to the network service as they are both
> implemented by the same plugin provider.  Compute would not be aware of the
> network agent accessing the network service.
> With this design, the network service is only tied to the network REST API
> and the network agent, both of which are implemented by the plugin
> providers.  This would allow them to implement their network service without
> worrying about the details of the compute service.
> Please let me know if all this made any sense. :-)  Would love to get some
> feedback.
> Regards,
> Ryu Ishimoto
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to     : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Decoupling of Network and Compute services for the new Network Service design

2011-02-24 Thread Dan Mihai Dumitriu
I see, the latency of setting up bridges and vlans could be a problem.

How about the second problem, that of not having enough information to
assign the IP?  Is it really necessary to know what physical node the
VM will run on before assigning the IP?  Shouldn't that be decoupled?
For example, if this eventually supports VM migration, then wouldn't
the physical host be irrelevant in that case?  Probably I'm not
understanding the precise use case.

Cheers,
Dan


On Thu, Feb 24, 2011 at 18:21, Vishvananda Ishaya  wrote:
> It could be relatively quick, but if the underlying architecture needs to 
> setup bridges and vlans, I can see this taking a second or more.  I like an 
> api that returns in the hundreds of ms.  A greater concern of #1 is that 
> there isn't always enough information to assign the ip at the compute/api.py 
> layer.  Often this decision only makes sense once we know which host the vm 
> will run on.  It therefore really needs to be in the scheduler or later to 
> have the most flexibility.
>
> Vish
>
> On Feb 24, 2011, at 12:16 AM, Dan Mihai Dumitriu wrote:
>
>> Hi Vish,
>>
>>> We need some sort of supervisor to tell the network to allocate the network 
>>> before dispatching a message to compute.  I see three possibilities (from 
>>> easiest to hardest):
>>>
>>> 1. Make the call in /nova/compute/api.py (this code runs on the api host)
>>> 2. Make the call in the scheduler (the scheduler then becomes sort of a 
>>> supervisor to make sure all setup occurs for a vm to launch)
>>> 3. Create a separate compute supervisor that is responsible for managing 
>>> the calls to different components
>>>
>>> The easiest seems to be 1, but unfortunately it forces us to wait for the 
>>> network allocation to finish before returning to the user which i dislike.
>>
>> What is the problem with waiting for the network allocation?  I would
>> imagine that will be a quick operation.  (though it would depend on
>> the plugin implementation)
>>
>> Cheers,
>> Dan
>
>

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Decoupling of Network and Compute services for the new Network Service design

2011-02-24 Thread Dan Mihai Dumitriu
Hi Vish,

> We need some sort of supervisor to tell the network to allocate the network 
> before dispatching a message to compute.  I see three possibilities (from 
> easiest to hardest):
>
> 1. Make the call in /nova/compute/api.py (this code runs on the api host)
> 2. Make the call in the scheduler (the scheduler then becomes sort of a 
> supervisor to make sure all setup occurs for a vm to launch)
> 3. Create a separate compute supervisor that is responsible for managing the 
> calls to different components
>
> The easiest seems to be 1, but unfortunately it forces us to wait for the 
> network allocation to finish before returning to the user which i dislike.

What is the problem with waiting for the network allocation?  I would
imagine that will be a quick operation.  (though it would depend on
the plugin implementation)

Cheers,
Dan

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Decoupling of Network and Compute services for the new Network Service design

2011-02-23 Thread Thierry Carrez
John Purrier wrote:
> And we are back to the discussion about orchestration... Given the
> flexibility of the OpenStack system and the goals of independently
> horizontally scaling services I think we will need to address this head on.
> #3 is the most difficult, but is also the right answer for the project as we
> look forward to adding functionality/services to the mix. This is also where
> we can make good use of asynchronous event publication interfaces within
> services to ensure maximum efficiency.

I can see the need for a supervisor component that will make sure that
all needed resources/nodes are correctly called (currently only
network+compute, but potentially more with higher-level requests using
extra services).

In the case of #3, could you explain what would be left for the
scheduler to do? Would it just pick the supervisor node, or just the
compute node, or both? Just trying to make sure there is a real benefit
in the added complexity and that #2 is really a worse option.

Regards,

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Decoupling of Network and Compute services for the new Network Service design

2011-02-23 Thread John Purrier
And we are back to the discussion about orchestration... Given the
flexibility of the OpenStack system and the goals of independently
horizontally scaling services I think we will need to address this head on.
#3 is the most difficult, but is also the right answer for the project as we
look forward to adding functionality/services to the mix. This is also where
we can make good use of asynchronous event publication interfaces within
services to ensure maximum efficiency.

John

-Original Message-
From: openstack-bounces+john=openstack@lists.launchpad.net
[mailto:openstack-bounces+john=openstack@lists.launchpad.net] On Behalf
Of Vishvananda Ishaya
Sent: Wednesday, February 23, 2011 12:27 PM
To: Ishimoto, Ryu
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Decoupling of Network and Compute services for the
new Network Service design

Agreed that this is the right way to go.

We need some sort of supervisor to tell the network to allocate the network
before dispatching a message to compute.  I see three possibilities (from
easiest to hardest):

1. Make the call in /nova/compute/api.py (this code runs on the api host) 
2. Make the call in the scheduler (the scheduler then becomes sort of a
supervisor to make sure all setup occurs for a vm to launch)
3. Create a separate compute supervisor that is responsible for managing the
calls to different components

The easiest seems to be 1, but unfortunately it forces us to wait for the
network allocation to finish before returning to the user which i dislike.

I think ultimately 3 is probably the best solution, but for now I suggest 2
as a middle ground between easy and best.

Vish

On Feb 23, 2011, at 5:29 AM, Ishimoto, Ryu wrote:

> 
> Hi everyone,
> 
> I have been following the discussion regarding the new 'pluggable' network
service design, and wanted to drop in my 2 cents ;-)
> 
> Looking at the current implementation of Nova, there seems to be a very
strong coupling between compute and network services.  That is, tasks that
are done by the network service are executed at the time of VM
instantiation, making the compute code dependent on the network service, and
vice versa.  This dependency seems undesirable to me as it adds restrictions
to implementing 'pluggable' network services, which can vary, with many ways
to implement them.
> 
> Would anyone be opposed to completely separating out the network service
logic from compute?  I don't think it's too difficult to accomplish this,
but to do so, it will require that the network service tasks, such as IP
allocation, be executed by the user prior to instantiating the VM.  
> 
> In the new network design(from what I've read up so far), there are
concepts of vNICs, and vPorts, where vNICs are network interfaces that are
associated with the VMs, and vPorts are logical ports that vNICs are plugged
into for network connectivity.  If we are to decouple network and compute
services, the steps required for FlatManager networking service would look
something like:
> 
> 1. Create ports for a network.  Each port is associated with an IP address
in this particular case, since it's an IP-based network.
> 2. Create a vNIC
> 3. Plug a vNIC into an available vPort.  In this case it just means mapping
this vNIC to an unused IP address.
> 4. Start a VM with this vNIC.  vNIC is already mapped to an IP address, so
compute does not have to ask the network service to do any IP allocation. 
> 
> In this simple example, by removing the request for IP allocation from
compute, the network service is no longer needed during the VM
instantiation.  While it may require more steps for the network setup in
more complex cases, it would still hold true that, once the vNIC and vPort
are mapped, compute service would not require any network service during the
VM instantiation.
> 
> IF there is still a need for the compute to access the network service,
there is another way.  Currently, the setup of the network
environment(bridge, vlan, etc) is all done by the compute service. With the
new network model, these tasks should either be separated out into a
standalone service('network agent') or at least be separated out into
modules with generic APIs that the network plugin providers can implement.
By doing so, and if we can agree on a rule that the compute service must
always go through the network agent to access the network service, we can
still achieve the separation of compute from network services.   Network
agents should have full access to the network service as they are both
implemented by the same plugin provider.  Compute would not be aware of the
network agent accessing the network service.
> 
> With this design, the network service is only tied to the network REST API
and the network agent, both of which are implemented by the plugin
providers.  This would allow them to implement their network serv

Re: [Openstack] Decoupling of Network and Compute services for the new Network Service design

2011-02-23 Thread Vishvananda Ishaya
Agreed that this is the right way to go.

We need some sort of supervisor to tell the network to allocate the network 
before dispatching a message to compute.  I see three possibilities (from 
easiest to hardest):

1. Make the call in /nova/compute/api.py (this code runs on the api host) 
2. Make the call in the scheduler (the scheduler then becomes sort of a 
supervisor to make sure all setup occurs for a vm to launch)
3. Create a separate compute supervisor that is responsible for managing the 
calls to different components

The easiest seems to be 1, but unfortunately it forces us to wait for the 
network allocation to finish before returning to the user which i dislike.

I think ultimately 3 is probably the best solution, but for now I suggest 2 as 
a middle ground between easy and best.

Vish

On Feb 23, 2011, at 5:29 AM, Ishimoto, Ryu wrote:

> 
> Hi everyone,
> 
> I have been following the discussion regarding the new 'pluggable' network 
> service design, and wanted to drop in my 2 cents ;-)
> 
> Looking at the current implementation of Nova, there seems to be a very 
> strong coupling between compute and network services.  That is, tasks that 
> are done by the network service are executed at the time of VM instantiation, 
> making the compute code dependent on the network service, and vice versa.  
> This dependency seems undesirable to me as it adds restrictions to 
> implementing 'pluggable' network services, which can vary, with many ways to 
> implement them.
> 
> Would anyone be opposed to completely separating out the network service 
> logic from compute?  I don't think it's too difficult to accomplish this, but 
> to do so, it will require that the network service tasks, such as IP 
> allocation, be executed by the user prior to instantiating the VM.  
> 
> In the new network design(from what I've read up so far), there are concepts 
> of vNICs, and vPorts, where vNICs are network interfaces that are associated 
> with the VMs, and vPorts are logical ports that vNICs are plugged into for 
> network connectivity.  If we are to decouple network and compute services, 
> the steps required for FlatManager networking service would look something 
> like:
> 
> 1. Create ports for a network.  Each port is associated with an IP address in 
> this particular case, since it's an IP-based network.
> 2. Create a vNIC
> 3. Plug a vNIC into an available vPort.  In this case it just means mapping 
> this vNIC to an unused IP address.
> 4. Start a VM with this vNIC.  vNIC is already mapped to an IP address, so 
> compute does not have to ask the network service to do any IP allocation. 
> 
> In this simple example, by removing the request for IP allocation from 
> compute, the network service is no longer needed during the VM instantiation. 
>  While it may require more steps for the network setup in more complex cases, 
> it would still hold true that, once the vNIC and vPort are mapped, compute 
> service would not require any network service during the VM instantiation.
> 
> IF there is still a need for the compute to access the network service, there 
> is another way.  Currently, the setup of the network environment(bridge, 
> vlan, etc) is all done by the compute service. With the new network model, 
> these tasks should either be separated out into a standalone service('network 
> agent') or at least be separated out into modules with generic APIs that the 
> network plugin providers can implement.  By doing so, and if we can agree on 
> a rule that the compute service must always go through the network agent to 
> access the network service, we can still achieve the separation of compute 
> from network services.   Network agents should have full access to the 
> network service as they are both implemented by the same plugin provider.  
> Compute would not be aware of the network agent accessing the network service.
> 
> With this design, the network service is only tied to the network REST API 
> and the network agent, both of which are implemented by the plugin providers. 
>  This would allow them to implement their network service without worrying 
> about the details of the compute service.
> 
> Please let me know if all this made any sense. :-)  Would love to get some 
> feedback.
> 
> Regards,
> Ryu Ishimoto
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Decoupling of Network and Compute services for the new Network Service design

2011-02-23 Thread John Purrier
I agree, this is exactly where we want to take the network services for
OpenStack. The goal should be to decouple Compute from Network, with an eye
toward a project separation post-Cactus (this should have a lot of
discussion at the next design summit). For Cactus we have explicitly kept
the network manager (and the volume manager) inside of Nova in order to
minimize risk to stability for this release. For Diablo I think we need to
identify any of the dependencies and touchpoints that Compute has on Network
and make a clean separation. Ryu Ishimoto has made a good first step; we
need to identify any issues with all the possible network configurations.

 

Following up on the other big networking thread, I would like to see a
project schema that includes the core networking API, network
manager/controller, and plug-in interfaces. Additionally, we should identify
the "sub-projects" that can be optional networking components (such as VPN,
DHCP, etc.).

 

Separate from networking we need to do the same exercise for the volume
manager and block storage systems.

 

Thanks,

 

John

 

From: openstack-bounces+john=openstack@lists.launchpad.net
[mailto:openstack-bounces+john=openstack@lists.launchpad.net] On Behalf
Of Dan Wendlandt
Sent: Wednesday, February 23, 2011 7:49 AM
To: Ishimoto, Ryu
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Decoupling of Network and Compute services for the
new Network Service design

 

I think this is very much in line with what we've been thinking.  To me,
providing a clean and generic programming interface that decouples the
network functionality from the existing nova stack is a first step in
creating a standalone network service.  

 

Also, I am not sure if this is implied by step #3 below, but it seems that
the compute and network service will need to share some identifier so that
the network entity running on the compute node can "recognize" a VM
interface and associate it with a vPort.  For example, each vNIC has an
identifier assigned by the compute service, a call to the network service
associates that vNIC id with a vPort, and when the compute node creates a
device (e.g., tap0), it tells the network plugin on the host the vNIC id for
that device (there are several other possible variations on this theme...).
In your example below this may not be strictly required because all vNICs
get connected to the same network, but in a general model for a network
service this will be required.  

 

dan

On Wed, Feb 23, 2011 at 5:29 AM, Ishimoto, Ryu  wrote:

 

Hi everyone,

 

I have been following the discussion regarding the new 'pluggable' network
service design, and wanted to drop in my 2 cents ;-)

 

Looking at the current implementation of Nova, there seems to be a very
strong coupling between compute and network services.  That is, tasks that
are done by the network service are executed at the time of VM
instantiation, making the compute code dependent on the network service, and
vice versa.  This dependency seems undesirable to me as it adds restrictions
to implementing 'pluggable' network services, which can vary, with many ways
to implement them.

 

Would anyone be opposed to completely separating out the network service
logic from compute?  I don't think it's too difficult to accomplish this,
but to do so, it will require that the network service tasks, such as IP
allocation, be executed by the user prior to instantiating the VM.  

 

In the new network design (from what I've read so far), there are concepts
of vNICs, and vPorts, where vNICs are network interfaces that are associated
with the VMs, and vPorts are logical ports that vNICs are plugged into for
network connectivity.  If we are to decouple network and compute services,
the steps required for FlatManager networking service would look something
like:

 

1. Create ports for a network.  Each port is associated with an IP address
in this particular case, since it's an IP-based network.

2. Create a vNIC

3. Plug a vNIC into an available vPort.  In this case it just means mapping
this vNIC to an unused IP address.

4. Start a VM with this vNIC.  vNIC is already mapped to an IP address, so
compute does not have to ask the network service to do any IP allocation. 

 

In this simple example, by removing the request for IP allocation from
compute, the network service is no longer needed during the VM
instantiation.  While it may require more steps for the network setup in
more complex cases, it would still hold true that, once the vNIC and vPort
are mapped, compute service would not require any network service during the
VM instantiation.

 

If there is still a need for the compute service to access the network service,
there is another way.  Currently, the setup of the network
environment (bridge, vlan, etc) is all done by the compute service. With the
new network model, these tasks should either be separated out into a
standalone service ('network agent') or at least be separated out into
modules with generic APIs that the network plugin providers can implement.

Re: [Openstack] Decoupling of Network and Compute services for the new Network Service design

2011-02-23 Thread Dan Wendlandt
I think this is very much in line with what we've been thinking.  To me,
providing a clean and generic programming interface that decouples the
network functionality from the existing nova stack is a first step in
creating a standalone network service.

Also, I am not sure if this is implied by step #3 below, but it seems that
the compute and network service will need to share some identifier so that
the network entity running on the compute node can "recognize" a VM
interface and associate it with a vPort.  For example, each vNIC has an
identifier assigned by the compute service, a call to the network service
associates that vNIC id with a vPort, and when the compute node creates a
device (e.g., tap0), it tells the network plugin on the host the vNIC id for
that device (there are several other possible variations on this theme...).
 In your example below this may not be strictly required because all vNICs
get connected to the same network, but in a general model for a network
service this will be required.
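
To illustrate the handoff, here is a minimal sketch of that flow.  Every name 
in it (compute_api, network_api, hypervisor, host_plugin and their methods) is 
a hypothetical stand-in; the point is only who passes which identifier to whom.

    # Hypothetical sketch of the shared vNIC identifier flow described above.
    def launch_with_shared_vnic_id(compute_api, network_api, hypervisor,
                                   host_plugin, instance_id, vport_id):
        # Compute assigns the vNIC an identifier it controls.
        vnic_id = compute_api.create_vnic(instance_id)

        # A call to the network service associates that vNIC id with a vPort.
        network_api.attach_vnic_to_port(vnic_id, vport_id)

        # On the compute host, the hypervisor driver creates the device
        # (e.g. "tap0") and tells the local network plugin which vNIC it
        # belongs to, so the plugin can find the matching vPort and wire it up.
        device = hypervisor.create_interface(instance_id)
        host_plugin.plug(device, vnic_id)
        return vnic_id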

dan

On Wed, Feb 23, 2011 at 5:29 AM, Ishimoto, Ryu  wrote:

>
> Hi everyone,
>
> I have been following the discussion regarding the new 'pluggable' network
> service design, and wanted to drop in my 2 cents ;-)
>
> Looking at the current implementation of Nova, there seems to be a very
> strong coupling between compute and network services.  That is, tasks that
> are done by the network service are executed at the time of VM
> instantiation, making the compute code dependent on the network service, and
> vice versa.  This dependency seems undesirable to me as it adds restrictions
> to implementing 'pluggable' network services, which can vary, with many ways
> to implement them.
>
> Would anyone be opposed to completely separating out the network service
> logic from compute?  I don't think it's too difficult to accomplish this,
> but to do so, it will require that the network service tasks, such as IP
> allocation, be executed by the user prior to instantiating the VM.
>
> In the new network design (from what I've read so far), there are
> concepts of vNICs, and vPorts, where vNICs are network interfaces that are
> associated with the VMs, and vPorts are logical ports that vNICs are plugged
> into for network connectivity.  If we are to decouple network and compute
> services, the steps required for FlatManager networking service would look
> something like:
>
> 1. Create ports for a network.  Each port is associated with an IP address
> in this particular case, since it's an IP-based network.
> 2. Create a vNIC
> 3. Plug a vNIC into an available vPort.  In this case it just means mapping
> this vNIC to an unused IP address.
> 4. Start a VM with this vNIC.  vNIC is already mapped to an IP address, so
> compute does not have to ask the network service to do any IP allocation.
>
> In this simple example, by removing the request for IP allocation from
> compute, the network service is no longer needed during the VM
> instantiation.  While it may require more steps for the network setup in
> more complex cases, it would still hold true that, once the vNIC and vPort
> are mapped, compute service would not require any network service during the
> VM instantiation.
>
> If there is still a need for the compute service to access the network service,
> there is another way.  Currently, the setup of the network
> environment (bridge, vlan, etc) is all done by the compute service. With the
> new network model, these tasks should either be separated out into a
> standalone service('network agent') or at least be separated out into
> modules with generic APIs that the network plugin providers can implement.
>  By doing so, and if we can agree on a rule that the compute service must
> always go through the network agent to access the network service, we can
> still achieve the separation of compute from network services.   Network
> agents should have full access to the network service as they are both
> implemented by the same plugin provider.  Compute would not be aware of the
> network agent accessing the network service.
>
> With this design, the network service is only tied to the network REST API
> and the network agent, both of which are implemented by the plugin
> providers.  This would allow them to implement their network service without
> worrying about the details of the compute service.
>
> Please let me know if all this made any sense. :-)  Would love to get some
> feedback.
>
> Regards,
> Ryu Ishimoto
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>


-- 
~~~
Dan Wendlandt
Nicira Networks, Inc.
www.nicira.com | www.openvswitch.org
Sr. Product Manager
cell: 650-906-2650
~~~
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

[Openstack] Decoupling of Network and Compute services for the new Network Service design

2011-02-23 Thread Ishimoto, Ryu
Hi everyone,

I have been following the discussion regarding the new 'pluggable' network
service design, and wanted to drop in my 2 cents ;-)

Looking at the current implementation of Nova, there seems to be a very
strong coupling between compute and network services.  That is, tasks that
are done by the network service are executed at the time of VM
instantiation, making the compute code dependent on the network service, and
vice versa.  This dependency seems undesirable to me as it adds restrictions
to implementing 'pluggable' network services, which can vary, with many ways
to implement them.

Would anyone be opposed to completely separating out the network service
logic from compute?  I don't think it's too difficult to accomplish this,
but to do so, it will require that the network service tasks, such as IP
allocation, be executed by the user prior to instantiating the VM.

In the new network design (from what I've read so far), there are concepts
of vNICs, and vPorts, where vNICs are network interfaces that are associated
with the VMs, and vPorts are logical ports that vNICs are plugged into for
network connectivity.  If we are to decouple network and compute services,
the steps required for FlatManager networking service would look something
like:

1. Create ports for a network.  Each port is associated with an IP address
in this particular case, since it's an IP-based network.
2. Create a vNIC
3. Plug a vNIC into an available vPort.  In this case it just means mapping
this vNIC to an unused IP address.
4. Start a VM with this vNIC.  vNIC is already mapped to an IP address, so
compute does not have to ask the network service to do any IP allocation.
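
As a purely illustrative sketch of what a caller might do for these four steps 
(the net and compute clients and every method name below are made-up stand-ins 
for whatever REST/RPC interface the design ends up with):

    # Hypothetical walk-through of the four steps above for a flat network.
    def provision_instance(net, compute, network_id, image_id):
        # 1. Create ports for the network; in the flat, IP-based case each
        #    port simply carries one address from the subnet.
        for ip in ('10.0.0.2', '10.0.0.3'):
            net.create_port(network_id, ip_address=ip)

        # 2. Create a vNIC for the instance-to-be.
        vnic_id = net.create_vnic()

        # 3. Plug the vNIC into an available vPort, i.e. bind it to an
        #    unused IP address before compute is ever involved.
        port_id = net.find_unused_port(network_id)
        net.plug(vnic_id, port_id)

        # 4. Boot the VM with the pre-wired vNIC; compute never has to ask
        #    the network service for an IP allocation.
        return compute.run_instance(image_id, vnics=[vnic_id])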

In this simple example, by removing the request for IP allocation from
compute, the network service is no longer needed during the VM
instantiation.  While it may require more steps for the network setup in
more complex cases, it would still hold true that, once the vNIC and vPort
are mapped, compute service would not require any network service during the
VM instantiation.

If there is still a need for the compute service to access the network service,
there is another way.  Currently, the setup of the network
environment (bridge, vlan, etc) is all done by the compute service. With the
new network model, these tasks should either be separated out into a
standalone service('network agent') or at least be separated out into
modules with generic APIs that the network plugin providers can implement.
 By doing so, and if we can agree on a rule that the compute service must
always go through the network agent to access the network service, we can
still achieve the separation of compute from network services.   Network
agents should have full access to the network service as they are both
implemented by the same plugin provider.  Compute would not be aware of the
network agent accessing the network service.
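
A minimal sketch of what such a generic agent interface could look like (the 
class and method names here are assumptions, not an agreed API) would be:

    # Hypothetical host-side network agent interface.  Plugin providers
    # implement it; compute only ever calls these methods and never talks
    # to the network service directly.
    class NetworkAgent(object):
        def setup_host(self, host):
            """Prepare host-level plumbing (bridges, VLANs, etc.)."""
            raise NotImplementedError

        def plug_vnic(self, vnic_id, device_name):
            """Wire a local device (e.g. a tap interface) to the vNIC's vPort."""
            raise NotImplementedError

        def unplug_vnic(self, vnic_id):
            """Tear down the host-side wiring for a vNIC."""
            raise NotImplementedError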

With this design, the network service is only tied to the network REST API
and the network agent, both of which are implemented by the plugin
providers.  This would allow them to implement their network service without
worrying about the details of the compute service.

Please let me know if all this made any sense. :-)  Would love to get some
feedback.

Regards,
Ryu Ishimoto
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp