On Wed, Nov 30, 2011 at 12:13:24PM +0200, Yaniv Kaul wrote:
> On 11/30/2011 11:41 AM, Daniel P. Berrange wrote:
> >On Tue, Nov 29, 2011 at 12:54:44PM -0800, Chris Wright wrote:
> >>* Adam Litke (a...@us.ibm.com) wrote:
> >>>On Tue, Nov 29, 2011 at 11:39:08AM -0800, Chris Wright wrote:
> >>>>* Adam Litke (a...@us.ibm.com) wrote:
> >>>>>Today I am releasing a proof of concept implementation of a REST API for
> >>>>>VDSM. You can find the code on github. My goal is to eventually replace the
> >>>>>xmlrpc interface with a REST API. Once completed, ovirt-engine could switch
> >>>>>to this new API. The major advantages to making this change are: 1) VDSM
> >>>>>will gain a structured API that conceptually, structurally, and functionally
> >>>>>aligns with the ovirt-engine REST API, 2) this new API can be made public,
> >>>>>thus providing an entry point for direct virtualization management at the
> >>>>>node level.
> >>>>Adam, this looks like a nice PoC. I didn't see how API versioning is
> >>>>handled. Any VDSM developers willing to review this work?
> >>>Thanks for taking a look. I am not handling versioning yet. I think we
> >>>can add a version field to the root API object. As for compatibility,
> >>>we'll just have to decide on an API backwards-compat support policy.
> >>>Would this be enough to handle versioning issues? We shouldn't need
> >>>anything like capabilities since the API is discoverable.
> >>Right, that seems sensible.
> >>Did you find cases where RPC to REST resource mapping was difficult?
> >>I could see something like migrate() plus migrateStatus() and
> >>migrateCancel() needing to add some kind of operational state to the
> >>resource. And something like monitorCommand() which has both a possible
> >>side-effect and some freeform response...
> >>>>>ovirt-engine wants to subscribe to asynchronous events. REST APIs do not
> >>>>>typically support async events and instead rely on polling of resources.
> >>>>>I am investigating what options are available for supporting async events
> >>>>>via REST.
> >>>>I think typical is either polling or long polling. If it's a single
> >>>>resource, then perhaps long polling would be fine (won't be a large
> >>>>number of connections tied up if it's only a single resource).
> >>>Not sure if this is what you are referring to, but I was thinking we
> >>>could do a batch-polling mechanism where an API user passes in a list of
> >>>task UUIDs or event URIs. The server would respond with the status of
> >>>these resources in one response. I have some ideas on how we could wire
> >>>up asynchronous events on the server side to reduce the amount of actual
> >>>work that such a batch-polling operation would require.
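A batch-poll exchange like the one Adam describes could be sketched as below. Everything here is illustrative, not part of the PoC: the in-memory task table, the function names, and the JSON payload shape are all assumptions.

```python
import json
import uuid

# Hypothetical in-memory task table; in VDSM the real task tracker
# would back this instead.
TASKS = {}

def start_task(state="running"):
    """Register a fake task and return its UUID string."""
    tid = str(uuid.uuid4())
    TASKS[tid] = {"state": state}
    return tid

def batch_poll(task_ids):
    """Return the status of many tasks in one JSON response body.

    Unknown UUIDs are reported per-entry instead of failing the whole
    request, so one stale ID does not break the batch.
    """
    result = {}
    for tid in task_ids:
        result[tid] = TASKS.get(tid, {"error": "not found"})
    return json.dumps(result)
```

A client would submit its list of UUIDs in a single request and decode one combined reply, rather than issuing one GET per task.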
> >>Oh, I just meant this:
> >>Polling (GET /event + 404 loop)
> >>Long polling (GET + block ... can chew up a thread/connection)
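The plain-polling variant (GET /event + 404 loop) amounts to a retry loop on the client. A minimal sketch, with the HTTP transport injected as a callable so the loop itself is the only thing shown (the callable and its None-for-404 convention are assumptions):

```python
import time

def poll_events(fetch, interval=1.0, max_polls=None, sleep=time.sleep):
    """Plain polling loop: call fetch() until it yields an event.

    `fetch` stands in for an HTTP GET on /event and returns the event
    payload, or None to mimic a 404 "no event yet" reply. Injecting
    `fetch` and `sleep` keeps the sketch testable without a server.
    """
    polls = 0
    while max_polls is None or polls < max_polls:
        event = fetch()
        if event is not None:
            return event
        polls += 1
        sleep(interval)
    return None
```

Each miss costs a full request/response round trip, which is exactly the overhead long polling avoids.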
> >Yes, this is what I was inclined to suggest too. Polling is such a
> >misleading description :-P
> >This isn't really an either/or choice - you could trivially just
> >pass a query parameter to /event to indicate whether you want an
> >immediate 404 reply, or to wait for 'N' seconds, or infinity.
> >There's no reason it needs to consume a thread in VDSM permanently,
> >though I don't think that's a huge problem even if it does keep a
> >permanent thread open, since Linux threading is so lightweight.
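The wait-parameter idea maps naturally onto a condition variable: the handler thread blocks only while a request is actually waiting. A minimal sketch, assuming a `?wait=` query parameter with the semantics Daniel describes (the class and function names are illustrative):

```python
import threading

def parse_wait(params):
    """Map the ?wait= query parameter onto a blocking timeout:
    0 => reply immediately (404 if no event), N > 0 => wait up to N
    seconds, omitted => wait forever (returned as None)."""
    if "wait" not in params:
        return None
    return float(params["wait"])

class EventQueue:
    """Tiny mailbox a long-polling /event handler could block on."""

    def __init__(self):
        self._cond = threading.Condition()
        self._events = []

    def publish(self, event):
        with self._cond:
            self._events.append(event)
            self._cond.notify_all()

    def get(self, timeout):
        """Return the next event, or None (i.e. an HTTP 404) on timeout."""
        with self._cond:
            if timeout is None:
                while not self._events:      # wait forever
                    self._cond.wait()
            elif timeout > 0 and not self._events:
                self._cond.wait(timeout)     # bounded wait
            return self._events.pop(0) if self._events else None
```

With this shape, `GET /event?wait=0` polls, `GET /event?wait=30` long-polls for up to 30 seconds, and omitting the parameter blocks until an event arrives.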
> >In more detail, I was going to suggest:
> > 1. One or more locations for registering/unregistering interest
> > in events in various different resources/objects
> > /some/path/register-event?foo=bar
> > /some/path/unregister-event?foo=bar
> > /other/path/register-event?eek=oooh
> > 2. A common location for receiving all events
> > /event?wait=NNN (NNN = 0 => immediate 404,
> > NNN > 0 => wait time,
> > omitted parameter => infinity)
> > 3. Added bonus points for either
> > - Support persistent keepalive connections so that
> > you can just send further GET /event requests over
> > the same socket FD after receiving each reply
> > - Use HTTP chunked encoding. Each event is sent back
> > as a separate chunk. The server can just carry
> > on sending you chunks forever as new events occur,
> > so the HTTP reply is in effect infinitely long.
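The chunked-encoding option relies on HTTP/1.1 chunk framing: each chunk is a hex length, CRLF, the data, CRLF, and the server simply never sends the terminating zero-length chunk. A sketch of just the framing (the function names are illustrative):

```python
def chunk(payload: bytes) -> bytes:
    """Frame one payload as an HTTP/1.1 chunk:
    hex length, CRLF, data, CRLF."""
    return b"%x\r\n%s\r\n" % (len(payload), payload)

def event_stream(events):
    """Yield each event as its own chunk. Because the zero-length
    terminator is never sent, the reply can stay open and keep
    growing as new events occur."""
    for ev in events:
        yield chunk(ev.encode("utf-8"))
```

A client then parses each chunk as it arrives and treats every chunk as one complete event.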
> - the server would respond with the event status in a
> 100-continue response, thus 'lingering' the response for as much as
> the client needs. Less overhead than chunks, just as problematic
> with crappy proxies that do not properly support either chunking or
> 100-continue (but should not be an issue if it's within a
> controlled environment).
In this hostile world, I think it is reasonable for us to consider HTTPS
to be the only serious option for production deployments, and discount
plain HTTP for anything except ad-hoc dev/test usage. So I say we don't
have to worry about broken proxies here.
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
vdsm-devel mailing list