On 26 January 2015 at 15:30, Ian Wells <ijw.ubu...@cack.org.uk> wrote:
> Lots of open questions in here, because I think we need a long conversation
> on the subject.
>
> On 23 January 2015 at 15:51, Kevin Benton <blak...@gmail.com> wrote:
>>
>> It seems like a change to using internal RPC interfaces would be pretty
>> unstable at this point.
>
>
>>
>> Can we start by identifying the shortcomings of the HTTP interface and see
>> if we can address them before making the jump to using an interface which
>> has been internal to Neutron so far?
>
>
> I think the protocol being used is a distraction from the actual
> shortcomings.
>
> Firstly, you'd have to explain to me why HTTP is so much slower than RPC.
> If HTTP is incredibly slow, can it be sped up?  If RPC is moving the data
> around using the same calls, what changes?  Secondly, the problem seems more
> that we make too many roundtrips - which would be the same over RPC - and if
> that's true, perhaps we should be doing bulk operations - which is not
> transport-specific.

There's nothing intrinsic to HTTP that makes it higher latency than
RPC-over-AMQP (in fact, it should be lower latency for established
connections); however we have lots of room to improve the performance
of our stack here. Joe Gordon has some specific numbers. That said,
we're talking what - one API call (plug this vif please), so we've
probably got a realistic time budget in the high tenths of a second
before folk would notice. The tromboning involved in our current use
of HTTP is a concern for me - more moving parts, more ways things can
break, more synchronisation needed. Our coding style is very
synchronous: we're not writing with signals and events, rather
straightforward single-threaded worker code.  I'd like to see us have
some mechanism that fits with that.

E.g. it's a call (not cast) out to Neutron, and Neutron returns when
the VIF(s) are ready to use, at which point Nova brings the VM up. If
the call times out, we error.

Right now we have this mix of synchronous and async code, and it's
causing us to overlook things and have bugs. I'd be equally happy if
we went all in with an async event driven approach, but we should
decide if we're fish or fowl, not pick bits of both and hope reviewers
can remember every little detail.
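To make the synchronous shape concrete, here's a minimal sketch of the
call-with-timeout pattern I mean. All the names here are hypothetical
illustration, not actual Nova/Neutron code:

```python
import queue
import threading


class VifPlugTimeout(Exception):
    """Raised when Neutron doesn't report the VIF ready in time."""


def plug_vif_and_wait(request_binding, timeout=300):
    # request_binding() kicks off port binding on the Neutron side and
    # returns a Queue that receives the result once the VIF is ready.
    done = request_binding()
    try:
        # Block the worker until the VIF is usable; only then does the
        # caller bring the VM up.
        return done.get(timeout=timeout)
    except queue.Empty:
        # The call timed out: error out rather than booting blind.
        raise VifPlugTimeout("VIF not ready after %ss" % timeout)


# Usage sketch: a stand-in Neutron that signals readiness shortly
# after the request, as a real binding would.
def fake_request_binding():
    q = queue.Queue()
    threading.Timer(0.01, lambda: q.put({"vif": "ready"})).start()
    return q


result = plug_vif_and_wait(fake_request_binding, timeout=5)
```

The point is that the worker either gets a usable VIF or a clear
timeout error - there's no half-plugged state left for a reviewer to
reason about.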

> I absolutely do agree that Neutron should be doing more of the work, and
> Nova less, when it comes to port binding.  (And, in fact, I'd like that we
> stopped considering it 'Nova-Neutron' port binding, since in theory another
> service attaching stuff to the network could request a port be bound; it
> just happens at the moment that it's always Nova.)

E.g. Ironic, which does port updates today using Neutron directly.

> One other problem, not yet raised,  is that Nova doesn't express its needs
> when it asks for a port to be bound, and this is actually becoming a problem
> for me right now.  At the moment, Neutron knows, almost psychically, what
> binding type Nova will accept, and hands it over; Nova then deals with
> whatever binding type it receives (optimistically expecting it's one it
> will support, and getting shirty if it isn't).  The problem I'm seeing at
> the moment, and other people have mentioned, is that certain forwarders can
> only bind a vhostuser port to a VM if the VM itself has hugepages enabled.
> They could fall back to another binding type but at the moment that isn't an
> option: Nova doesn't tell Neutron anything about what it supports, so
> there's no data on which to choose.  It should be saying 'I will take these
> binding types in this preference order'.  I think, in fact, that asking
> Neutron for bindings of a certain preference type order, would give us much
> more flexibility - like, for instance, not having to know exactly which
> binding type to deliver to which compute node in multi-hypervisor
> environments, where at the moment the choice is made in Neutron.

+1, OTOH I don't think this is a structural problem - it doesn't
matter what protocol or calling style we use, this is just the
parameters in the call :).
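As an illustration of "just the parameters in the call": the
negotiation Ian describes reduces to picking the first mutually
supported binding type from Nova's preference-ordered list. Names and
types below are made up for the sketch:

```python
def choose_vif_type(nova_preferences, neutron_supported):
    """Pick the first binding type the requester prefers that this
    Neutron backend can actually deliver; None means no common type,
    i.e. the bind request should fail explicitly."""
    for vif_type in nova_preferences:
        if vif_type in neutron_supported:
            return vif_type
    return None


# A VM without hugepages can't take vhostuser, so it ranks ovs first:
print(choose_vif_type(["ovs", "vhostuser"], {"vhostuser", "ovs"}))
# A hugepage-enabled VM prefers vhostuser and gets it:
print(choose_vif_type(["vhostuser", "ovs"], {"vhostuser", "ovs"}))
```

That also covers the multi-hypervisor case: each compute node sends
its own preference list, and Neutron no longer has to guess.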

>> I scanned through the etherpad and I really like Salvatore's idea of
>> adding a service plugin to Neutron that is designed specifically for
>> interacting with Nova. All of the Nova notification interactions can be
>> handled there and we can add new API components designed for Nova's use
>> (e.g. syncing data, etc). Does anyone have any objections to that approach?
>
>
> I think we should be leaning the other way, actually - working out what a
> generic service - think a container management service, or an edge network
> service - would want to ask when it wanted to connect to a virtual network,
> and making an Neutron interface that supports that properly *without* being
> tailored to Nova.  The requirements are similar in all cases, so it's not
> clear that a generic interface would be any more complex.
>
> Notifications on data changes in Neutron to prevent orphaning is another
> example of a repeating pattern.  It's probably the same for any service that
> binds to Neutron, but right now Neutron has Nova-specific code in it.
> Broadening the scope, it's also likely the same in Cinder, and in fact it's
> also pretty similar to the problem you get when you delete a project in
> Keystone and all your resources get orphaned.  Is a Nova-Neutron specific
> solution the right thing to do?

I think your desire and Salvatore's are compatible: an interface that
is excellent for Nova can also be excellent for other users.
Notifications aren't a complete solution to the orphaning issue unless
the notification system is guaranteed non-lossy. Something like Kafka
would be an excellent substrate for such a system, or we could look at
per-service journalling (on either side of the integration point).
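By journalling I mean something like the following toy reconciliation
pass (entirely hypothetical names, just to show the shape): an
append-only record of bindings lets either side detect orphans even
when the delete notification was lost.

```python
def find_orphaned_ports(binding_journal, live_owners):
    """binding_journal maps port_id -> owner_id, written when a port
    is bound. Any port whose recorded owner no longer exists (e.g.
    the VM was deleted but the notification got dropped) is an orphan
    candidate to clean up on the next reconciliation sweep."""
    return sorted(
        port for port, owner in binding_journal.items()
        if owner not in live_owners
    )


journal = {"port-1": "vm-a", "port-2": "vm-b"}
# vm-b was deleted but its delete notification was lost:
print(find_orphaned_ports(journal, live_owners={"vm-a"}))
```

The same sweep works for Cinder volumes or Keystone-project cleanup,
which is why I'd rather see a generic pattern than Nova-specific code
in Neutron.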

-Rob

-- 
Robert Collins <rbtcoll...@hp.com>
Distinguished Technologist
HP Converged Cloud

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev