These all sound perfect to me. I'm hoping our PSHB implementation solves that 
problem. More specifically, I think the publisher worker I linked to earlier 
solves most of what you're referring to, and it works well with the Google 
reference hub. There's a lot more work to be done, but I think it's on target 
with what you're suggesting.

Thanks!

From: George Reese <george.re...@enstratus.com>
Date: Tue, 10 May 2011 12:07:22 -0500
To: Matt Dietz <matt.di...@rackspace.com>
Cc: "openstack@lists.launchpad.net" <openstack@lists.launchpad.net>
Subject: Re: [Openstack] Notifications proposal

I came into the conversation late, and it struck me that this proposal was a 
bit heavier than what I was proposing.

I agree with letting something outside of Nova do the heavy lifting. Much more 
scalable. The base things I would like to see are (a rough sketch follows the list):

a) the minimal amount of information to let a subscriber know that there was a 
change (not the details of the change)
b) only information that is safe to deliver over a public network to an 
untrusted target
c) that the subscriber be able to be a programmatic endpoint (not simply 
email/SMS)
d) the subscriber should not assume anything about the message, including its 
authenticity (it should use its own credentials to verify the truth of the 
message and the details of the change with the provider)
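
For concreteness, a minimal notification along those lines might look something 
like this (purely illustrative; the field names aren't from any spec, and the 
id/resource/timestamp values are placeholders):

    # Illustrative only: a minimal, non-sensitive notification body. It says
    # *that* something changed, not *what* changed -- the subscriber calls
    # back into the provider API with its own credentials for the details.
    notification = {
        'id': 'f3b9c2e4-7a1d-4c55-9b0e-2d6f8a1c3e77',  # globally unique, for dedup
        'resource': 'servers/12',                      # opaque reference, no secrets
        'event': 'changed',                            # minimal signal only
        'timestamp': '2011-05-09T22:00:14Z',           # ISO 8601, UTC
    }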

-George

On May 10, 2011, at 12:01 PM, Matt Dietz wrote:

George,

Unless I'm completely mistaken, I think our proposal satisfies this suggestion. 
What you have here looks like a slight variation on PSHB. Our stuff is coded 
such that the responsibility for any heavy lifting falls outside of Nova. In our 
case, we'll be implementing the PubSub publisher externally; i.e., I don't think 
any of the infrastructure for making PSHB work belongs in Nova. We can then 
follow all of the other rules of PSHB (feed discovery and subscriptions, 
callbacks, etc.).
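
For example, the subscription side is just the standard PSHB handshake against 
the hub -- a rough sketch, assuming a Google-reference-style hub (all URLs below 
are placeholders):

    # Sketch of a subscriber registering with a PSHB hub. The hub.* form
    # parameters come from the PubSubHubbub spec; the URLs are made up.
    import urllib.parse
    import urllib.request

    hub_url = 'https://hub.example.com/'  # hypothetical hub endpoint
    params = urllib.parse.urlencode({
        'hub.mode': 'subscribe',
        'hub.topic': 'https://cloud.example.com/events.atom',     # feed the external publisher exposes
        'hub.callback': 'https://subscriber.example.com/notify',  # where the hub pushes entries
        'hub.verify': 'async',
    }).encode('utf-8')

    # The hub accepts the request and then verifies the callback out of
    # band before it starts delivering entries.
    urllib.request.urlopen(urllib.request.Request(hub_url, data=params))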

Does this make sense?

Matt

From: George Reese <george.re...@enstratus.com>
Date: Mon, 9 May 2011 23:17:29 -0500
To: Jorge Williams <jorge.willi...@rackspace.com>
Cc: Matt Dietz <matt.di...@rackspace.com>, "openstack@lists.launchpad.net" 
<openstack@lists.launchpad.net>
Subject: Re: [Openstack] Notifications proposal

I think notifications need to be kept really simple. I put out a proposal a few 
months ago at:

http://broadcast.oreilly.com/2011/04/proposal-for-cloud-state-notifications.html

Let the subscribers do any heavy lifting. Just provide them enough information 
that they can make the right requests.

-George

On May 9, 2011, at 10:58 PM, Jorge Williams wrote:


On May 9, 2011, at 6:39 PM, Matt Dietz wrote:

Jorge,

   Thanks for the feedback!

   Regarding the message format, we actually don't need the unique id in the 
generic event format, because that's implementation-specific. The external 
publisher I've implemented actually does all of the pubsubhubbub-specific heavy 
lifting for you. The idea behind keeping this system simple at the nova layer 
is to allow people to implement anything they'd like, such as emails or paging.
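
So, for instance, someone could drop in something like this (completely 
hypothetical, not the actual nova interface -- just the shape of consumer the 
generic format allows):

    # Hypothetical consumer of the generic event format that turns
    # notifications into emails; every name here is illustrative.
    import smtplib
    from email.mime.text import MIMEText

    class EmailNotifier(object):
        def __init__(self, smtp_host, recipients):
            self.smtp_host = smtp_host
            self.recipients = recipients

        def notify(self, message):
            # message is the generic dict: publisher_id, timestamp,
            # priority, event_type, payload
            msg = MIMEText(repr(message['payload']))
            msg['Subject'] = '[%s] %s' % (message['priority'], message['event_type'])
            msg['From'] = 'notifications@example.com'   # placeholder
            msg['To'] = ', '.join(self.recipients)
            smtplib.SMTP(self.smtp_host).send_message(msg)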

I guess I'm not seeing the whole picture. So these are internal messages? 
Will they cross service boundaries / zones? I'm sorry I missed the 
conversation at the summit :-) Is there a blueprint I should be reading?


For categories, were you considering this to be a list? Could you give an 
example of an event that would span multiple categories?


From an Atom perspective, I suppose anything a client might want to key in on 
or subscribe to may be a category. So "create" may be a category -- a billing 
layer may key in on all create messages and ignore others. "compute" may also 
be a category -- you can aggregate messages from other services, so it'd be 
nice for messages from compute to have their own category. To my knowledge, 
Atom doesn't have the concept of priority, so "WARN" may also be a category. I 
suppose if these are internal messages, an external publisher can split the 
event_type and priority into individual categories.
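
Roughly, the external publisher could do something like this when it builds 
each Atom entry (hand-rolled XML just to illustrate the idea -- a real 
publisher would use a proper feed library):

    # Sketch: split one internal message into Atom <category> elements so
    # subscribers can key in on the service, the action, or the priority.
    from xml.sax.saxutils import quoteattr

    def atom_categories(message):
        service, _, action = message['event_type'].partition('.')
        terms = (service, action, message['priority'])
        return '\n'.join('<category term=%s/>' % quoteattr(t) for t in terms)

    print(atom_categories({'priority': 'WARN',
                           'event_type': 'compute.create_instance'}))
    # <category term="compute"/>
    # <category term="create_instance"/>
    # <category term="WARN"/>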

Finally, I can make the changes to the timestamp. This was just a hypothetical 
example, anyway.


Okay cool, thanks Matt.



On May 9, 2011, at 6:13 PM, "Jorge Williams" 
<jorge.willi...@rackspace.com> wrote:


On May 9, 2011, at 5:20 PM, Matt Dietz wrote:

Message example:

    { 'publisher_id': 'compute.host1',
      'timestamp': '2011-05-09 22:00:14.621831',
      'priority': 'WARN',
      'event_type': 'compute.create_instance',
      'payload': {'instance_id': 12, ... }}

There was a lot of concern voiced over messages backing up in any of the 
queueing implementations, as well as over the intended priority of one message 
over another. There are a couple of immediately obvious solutions to this. We 
think the simplest is to implement N queues, where N is equal to the number of 
priorities. Afterwards, consuming those queues is implementation-specific and 
dependent on the solution that works best for the user.
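
In other words, something along these lines (purely illustrative -- the queue 
names and the send() call stand in for whatever queueing backend is in use):

    # Sketch of the N-queues idea: one queue per priority, so a backed-up
    # DEBUG/INFO queue can't starve the WARN or ERROR consumers.
    PRIORITIES = ('DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL')

    def queue_for(message):
        priority = message['priority']
        if priority not in PRIORITIES:
            raise ValueError('unknown priority: %r' % priority)
        return 'notifications.%s' % priority.lower()

    def publish(connection, message):
        # 'connection' stands in for the deployment's queue client;
        # consumers then drain each priority queue at their own rate.
        connection.send(queue_for(message), message)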

The current plan for the Rackspace-specific implementation is to use 
PubSubHubBub, with a dedicated worker consuming the notification queues and 
providing the glue necessary to work with a standard Hub implementation. I have 
a very immature worker implementation at https://github.com/Cerberus98/yagi 
if you're interested in checking that out.


Some thoughts:

In order to support PubSubHubBub, you'll also need each message to contain 
a globally unique ID. It would also be nice if you had the concept of 
categories. I realize you kinda get that with the event type 
"compute.create_instance", but there are always going to be messages that 
belong to multiple categories. Also, ISO timestamps with a T, e.g. 
"2011-05-09T22:00:14.621831", are way more interoperable -- and I would include 
the timezone designator Z for UTC, 2011-05-09T22:00:14.621831Z -- otherwise 
some implementations assume the local timezone.
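
Something along these lines, purely as a sketch (the field names just mirror 
your example above):

    # Sketch: stamp each message with a globally unique id and an
    # ISO 8601 UTC timestamp (trailing 'Z') before it goes out.
    import uuid
    from datetime import datetime, timezone

    def stamp(message):
        message['unique_id'] = str(uuid.uuid4())
        message['timestamp'] = datetime.now(timezone.utc).strftime(
            '%Y-%m-%dT%H:%M:%S.%fZ')   # e.g. 2011-05-09T22:00:14.621831Z
        return message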

-jOrGe W.


--
George Reese - Chief Technology Officer, enStratus
e: george.re...@enstratus.com    t: @GeorgeReese    p: +1.207.956.0217    f: +1.612.338.5041
enStratus: Governance for Public, Private, and Hybrid Clouds - @enStratus - 
http://www.enstratus.com
To schedule a meeting with me: http://tungle.me/GeorgeReese








_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to     : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
