Re: [Openstack] Notifications proposal

2011-05-11 Thread Monsyne Dragon

On May 11, 2011, at 10:47 AM, Matt Dietz wrote:

Hey Seshu,

1) Yes, that will be contained within the publisher_id field of the message body
2) We should be able to get customer-related data from the message where it 
makes sense; it would be contained within the payload dictionary. Given that 
not all messages are going to be related to customers, we shouldn't explicitly 
force this in the standard message attributes.
3) The mandatory fields on the nova side of things are: message_id, 
publisher_id, timestamp, priority, event_type, and payload. Payload is a 
dictionary containing the data that Nova is sending a notification for in the 
first place. On the Rackspace side of things, outside of System Usages, no 
message headers have been defined to my knowledge. We're simply designing the 
transport mechanism. Dragon can weigh in more on what usages will look like.

Ya, the usages are pretty simple notifications. They all have the account 
ID/user on them.
By IP address, do you mean the source IP for the API request? That we do not have 
(the usages are often generated by the worker processes long after the API call 
is over).
If you mean the IP of the instance, that can be looked up from the instance ID.

For instance usages (create/delete/resize, etc.), they have the instance ID and 
type (flavor).
For things like adding/removing IPs, they have the IP and the instance ID.

For bandwidth usages, they have the bandwidth used, the period, and the 
instance.
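To make the shapes described above concrete, here is a rough sketch of what those usage payloads might contain. All field names and values below are illustrative assumptions for this thread's discussion, not a published schema:

```python
# Hypothetical usage-notification payloads, following the descriptions above.
# Every field name here is an assumption for illustration only.

instance_usage = {
    'account_id': 'acct-42',          # account ID/user (format assumed)
    'instance_id': 12,
    'instance_type': 'm1.small',      # the flavor
    'event': 'compute.instance.resize',
}

ip_usage = {
    'account_id': 'acct-42',
    'instance_id': 12,
    'ip_address': '10.0.0.5',
    'event': 'network.ip.add',
}

bandwidth_usage = {
    'account_id': 'acct-42',
    'instance_id': 12,
    'bandwidth_used_bytes': 123456789,
    'period_start': '2011-05-01T00:00:00Z',
    'period_end': '2011-05-02T00:00:00Z',
}
```

Each dict carries the account, the instance, and the usage-specific detail (flavor, IP, or bandwidth and period), matching the prose above.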


Seshu Vavilikolanu seshu.vavilikol...@rackspace.com wrote:

I am part of the cloud integration team at Rackspace, Austin. I am working on 
the Analytics project.
I went through the blueprints for notifications and related topics. I have a 
few questions. I am still learning about OpenStack, so please let me 
know if any of these are already documented and available somewhere.

  *   Can we know what service generated a particular notification? e.g.: Auth, etc…
  *   Can we get the service name from the publisher_id or some other field?
  *   Can we get the IP address/customer/account ID from the message? I am interested to know how this is handled when we are using a load balancer.
  *   Are there any mandatory fields in the message? I guess these notifications are generated by Nova using Atom events, right?

--
--Monsyne M. Dragon




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Notifications proposal

2011-05-10 Thread Jorge Williams

On May 10, 2011, at 11:07 AM, Matt Dietz wrote:

Alright, I'll buy it. Simply adding a UUID would be trivial.


Cool.

Regarding categories, I tend to agree with Jay on this. I think it would
be treacherous to try to account for any number of possibilities, and I
also think that we need to keep this as simple as possible.


Okay, fair enough; the external publisher may create categories as needed.

On 5/10/11 10:35 AM, Jay Pipes jaypi...@gmail.com wrote:

On Mon, May 9, 2011 at 11:58 PM, Jorge Williams jorge.willi...@rackspace.com wrote:
On May 9, 2011, at 6:39 PM, Matt Dietz wrote:

Jorge,
  Thanks for the feedback!
Regarding the message format, we actually don't need the unique ID in the 
generic event format, because that's implementation specific. The external 
publisher I've implemented actually does all of the PubSubHubbub-specific 
heavy lifting for you. The idea behind keeping this system simple at the 
nova layer is allowing people to implement anything they'd like, such as 
emails or paging.

I guess I'm not seeing the whole picture. So these are internal messages? 
Will they cross service boundaries / zones? I'm sorry I missed the 
conversation at the summit :-) Is there a blueprint I should be reading?

On this particular point, I agree with Jorge. A unique identifier
should be attached to a message *before* it leaves Nova via the
publisher. Otherwise, subscribers will not be able to distinguish
between different messages if more than one publisher is publishing
the message and tacking on their own unique identifier.

For instance, if a Rabbit publisher and email publisher are both
enabled, and both attach a unique identifier in a different way,
there's no good way to determine two messages are the same.

For categories, were you considering this to be a list? Could you give an 
example of an event that would span multiple categories?

From an Atom perspective, I suppose anything a client might want to key in 
on or subscribe to may be a category. So 'create' may be a category -- a 
billing layer may key in on all create messages and ignore others. 'compute' 
may also be a category -- you can aggregate messages from other services, so 
it'd be nice for messages from compute to have their own category. To my 
knowledge, Atom doesn't have the concept of priority, so 'WARN' may also be a 
category. I suppose if these are internal messages, an external publisher 
can split the event_type and priority into individual categories.

I disagree with this assessment, Jorge, for this reason: attempting to
identify all the possible categories that an organization may wish to
assign to a particular event may be near impossible, and in all
likelihood, different deployers will have different categories for
events.

I think a solution of codifying the event_type in the message to a 
singular set of strings, with a single dotted group notation (like 
'instance.create' or something like that), is the best we can do. The 
subscriber of messages can later act as a translator or aggregator 
based on the business rules in place at the deployer. For example, 
let's say a deployer wanted to aggregate messages with an event_type of 
'instance.create' into two categories, 'instance' and 'create'. A 
custom-written subscriber could either do the aggregation itself, or 
modify the message payload to include these custom deployer-specific 
categories.
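A minimal sketch of such a subscriber-side translation (the function names here are made up for illustration; only the dotted event_type convention comes from the thread):

```python
def derive_categories(event_type):
    """Split a dotted event_type like 'instance.create' into
    deployer-defined categories ['instance', 'create']."""
    return event_type.split('.')

def tag_message(message):
    """Return a copy of a notification with derived categories added to
    the payload. This is a deployer-specific convention applied by a
    custom subscriber, not part of the proposed standard attributes."""
    tagged = dict(message)
    tagged['payload'] = dict(message.get('payload', {}))
    tagged['payload']['categories'] = derive_categories(message['event_type'])
    return tagged
```

The original message is left untouched; the subscriber works on a tagged copy, which matches the idea that categorization happens downstream of Nova.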

Hope that makes sense.

-jay




Re: [Openstack] Notifications proposal

2011-05-10 Thread Matt Dietz
George,

Unless I'm completely mistaken, I think our proposal satisfies this suggestion. 
What you have here looks like a slight variation on PSHB. Our stuff is coded 
such that the responsibility of any heavy lifting falls outside of Nova. In our 
case, we'll be implementing the PubSub publisher externally; i.e., I don't think 
any of the infrastructure for making PSHB work belongs in Nova. We can then 
follow all of the other rules of PSHB (feed discovery, subscriptions, 
callbacks, etc.).

Does this make sense?

Matt

From: George Reese george.re...@enstratus.com
Date: Mon, 9 May 2011 23:17:29 -0500
To: Jorge Williams jorge.willi...@rackspace.com
Cc: Matt Dietz matt.di...@rackspace.com, openstack@lists.launchpad.net
Subject: Re: [Openstack] Notifications proposal

I think notifications need to be kept really simple. I put out a proposal a few 
months ago at:

http://broadcast.oreilly.com/2011/04/proposal-for-cloud-state-notifications.html

Let the subscribers do any heavy lifting. Just provide them enough information 
that they can make the right requests.

-George

On May 9, 2011, at 10:58 PM, Jorge Williams wrote:


On May 9, 2011, at 6:39 PM, Matt Dietz wrote:

Jorge,

   Thanks for the feedback!

   Regarding the message format, we actually don't need the unique ID in the 
generic event format, because that's implementation specific. The external 
publisher I've implemented actually does all of the PubSubHubbub-specific heavy 
lifting for you. The idea behind keeping this system simple at the nova layer 
is allowing people to implement anything they'd like, such as emails or paging.

I guess, I'm not seeing the whole picture.  So these are internal messages? 
Will they cross service boundaries / zones?  I'm sorry I missed the 
conversation at the summit :-) Is there a blueprint I should be reading?


For categories, were you considering this to be a list? Could you give an 
example of an event that would span multiple categories?


From an Atom perspective, I suppose anything a client might want to key in on 
or subscribe to may be a category. So 'create' may be a category -- a billing 
layer may key in on all create messages and ignore others. 'compute' may also 
be a category -- you can aggregate messages from other services, so it'd be 
nice for messages from compute to have their own category. To my knowledge, 
Atom doesn't have the concept of priority, so 'WARN' may also be a category. I 
suppose if these are internal messages, an external publisher can split the 
event_type and priority into individual categories.

Finally, I can make the changes to the timestamp. This was just a hypothetical 
example, anyway.


Okay cool, thanks Matt.



On May 9, 2011, at 6:13 PM, Jorge Williams jorge.willi...@rackspace.com wrote:


On May 9, 2011, at 5:20 PM, Matt Dietz wrote:

Message example:

{ 'publisher_id': 'compute.host1',
  'timestamp': '2011-05-09 22:00:14.621831',
  'priority': 'WARN',
  'event_type': 'compute.create_instance',
  'payload': {'instance_id': 12, ... }}

There was a lot of concern voiced over messages backing up in any of the 
queueing implementations, as well as the intended priority of one message over 
another. There are a couple of immediately obvious solutions to this. We think 
the simplest solution is to implement N queues, where N is equal to the number of 
priorities. Afterwards, consuming those queues is implementation specific and 
dependent on the solution that works best for the user.

The current plan for the Rackspace specific implementation is to use 
PubSubHubBub, with a dedicated worker consuming the notification queues and 
providing the glue necessary to work with a standard Hub implementation. I have 
a very immature worker implementation at https://github.com/Cerberus98/yagi 
if you're interested in checking that out.


Some thoughts:

In order to support PubSubHubbub you'll need each message to also contain 
a globally unique ID. It would also be nice if you had the concept of 
categories. I realize you kinda get that with the event type 
'compute.create_instance', but there are always going to be messages that may 
belong to multiple categories. Also, ISO timestamps with a 'T' 
(2011-05-09T22:00:14.621831) are way more interoperable -- I would also include 
a timezone designator 'Z' for standard time (2011-05-09T22:00:14.621831Z) -- 
otherwise some implementations assume the local timezone.
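Both points (a globally unique ID stamped before the message leaves Nova, and an ISO 8601 UTC timestamp with the 'T' separator and 'Z' designator) can be sketched as follows; the builder function is hypothetical, but the field names match the proposed message format:

```python
import uuid
from datetime import datetime, timezone

def build_message(publisher_id, event_type, priority, payload):
    """Sketch of building a notification with a globally unique ID and
    an ISO 8601 UTC timestamp ('T' separator, 'Z' suffix). The
    message_id field is an assumed name for the unique identifier."""
    return {
        'message_id': str(uuid.uuid4()),
        'publisher_id': publisher_id,
        # e.g. '2011-05-09T22:00:14.621831Z'
        'timestamp': datetime.now(timezone.utc).strftime('%Y-%m-%dT%H:%M:%S.%fZ'),
        'priority': priority,
        'event_type': event_type,
        'payload': payload,
    }
```

Because the ID is attached at build time, every downstream publisher (Rabbit, email, PSHB) sees the same identifier, which is exactly the deduplication property Jay argues for.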

-jOrGe W.


--
George

Re: [Openstack] Notifications proposal

2011-05-10 Thread Matt Dietz
These all sound perfect to me. I'm hoping our PSHB implementation solves that 
problem. More specifically, I think the publisher worker that I linked to earlier 
solves most of what you're referring to, and works well with the Google 
reference hub. There's a lot more work to be done, but I think it's on target 
with what you're suggesting.

Thanks!

From: George Reese george.re...@enstratus.com
Date: Tue, 10 May 2011 12:07:22 -0500
To: Matt Dietz matt.di...@rackspace.com
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Notifications proposal

I came into the conversation late, and it struck me that this proposal was a bit 
heavier than what I was proposing.

I agree with letting something outside of Nova do the heavy lifting. It's much more 
scalable. The base things I would like to see are:

a) the minimal amount of information to let a subscriber know that there was a 
change (not the details of the change)
b) only information that is safe to deliver over a public network to an 
untrusted target
c) that the subscriber be able to be a programmatic endpoint (not simply 
email/SMS)
d) the subscriber should not assume anything about the message, including its 
authenticity (it should use its credentials to verify the truth of the message 
and the details of the change with the provider)

-George


Re: [Openstack] Notifications proposal

2011-05-10 Thread Eric Day
Hi George,

Understood, but burrow can act as both. At the core, the difference
between SQS and SNS is notification workers and a lower default
message TTL. Matt mentioned that Nova will push to RabbitMQ or some
other MQ, and workers pull from the queue to translate into PuSH, email,
SMS, etc. If this intermediate message queue is burrow, clients could
also subscribe directly to the notification queue with their OpenStack
credentials and see messages along with the other workers. It's simply
opening up the data pipe at another level, if that's more convenient
or efficient for the event consumers.

If we're going through the trouble of building a scalable message
queue/notification service for general use, I'm not sure why we
wouldn't use it over maintaining other MQ systems. If we don't want to
use burrow when it's ready, I should probably reevaluate the purpose
of Burrow as this was one of the example use cases. :)

-Eric

On Tue, May 10, 2011 at 02:17:46PM -0500, George Reese wrote:
 This isn't a message queue, it's a push system.
 
 In other words, consumers don't pull info from a queue, the info is pushed 
 out to any number of subscribers as the message is generated.
 
 Amazon SNS vs. SQS, except this isn't a cloud service but a mechanism for 
 notifying interested parties of cloud changes.
 
 -George
 
 On May 10, 2011, at 1:49 PM, Eric Day wrote:
 
  We may also want to put in some kind of version or self-documenting URL
  so it's easier to accommodate message format changes later on.
  
  As for the issue of things getting backed up in the queues for other
  non-PuSH mechanisms (and fanout), burrow has fanout functionality
  that depends on messages to expire (every message is inserted with
  a TTL). This would allow multiple readers to see the same message
  and for it to disappear after say an hour. This allows deployments,
  third party tools, and clients to write workers to act on events from
  the raw queue.
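A toy model of the TTL-based fanout semantics described above (this is not Burrow's actual API; it only illustrates that reads don't consume messages, so multiple readers see the same message until its TTL expires):

```python
import time

class FanoutQueue:
    """Toy model of TTL-based fanout. Reads do not remove messages;
    messages simply disappear once their TTL has elapsed. The class
    and method names are made up, not Burrow's real interface."""

    def __init__(self, clock=time.time):
        self._clock = clock          # injectable for testing
        self._messages = []          # list of (expires_at, message)

    def insert(self, message, ttl_seconds=3600):
        """Insert a message that expires ttl_seconds from now."""
        self._messages.append((self._clock() + ttl_seconds, message))

    def read(self):
        """Return all unexpired messages. Any number of readers can
        call this and see the same messages; nothing is consumed."""
        now = self._clock()
        self._messages = [(e, m) for e, m in self._messages if e > now]
        return [m for _, m in self._messages]
```

With an hour-long default TTL, a billing worker, an email worker, and a third-party client could each read the same event independently, and backlog is bounded by expiry rather than by consumption.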
  
  With burrow, it will also be possible for clients to pull raw messages
  directly from the queue via a REST API in a secure fashion, using
  the same account credentials as other OpenStack services (whatever
  keystone is configured for). So while an email notification will want
  to strip any sensitive information, a direct queue client could see
  more details.
  
  -Eric
  
  On Mon, May 09, 2011 at 10:20:04PM +, Matt Dietz wrote:
Hey guys,
Monsyne Dragon and I are proposing an implementation for
notifications going forward. Currently my branch exists
under https://code.launchpad.net/~cerberus/nova/nova_notifications. You'll
see that it's been proposed for merge, but we're currently refactoring it
around changes proposed at the summit during the notifications discussion,
which you can see at http://etherpad.openstack.org/notifications
At the heart of the above branch is the idea that, because nova is about
compute, we get notifications away from Nova as quickly as possible. As
such, we've implemented a simple modular driver system which merely pushes
messages out. The two sample drivers are for pushing messages into
Rabbit, or doing nothing at all. There's been talk about adding Burrow as
a third possible driver, which I don't think would be an issue.
One of the proposals is to have priority levels for each notification.
What we're proposing is emulating the standard Python logging module and
providing levels like WARN and CRITICAL in the notification.
Additionally, the message format we're proposing will be a JSON dictionary
containing the following attributes:
publisher_id - the source worker_type.host of the message
timestamp - the GMT timestamp the notification was sent at
event_type - the literal type of event (ex. Instance Creation)
priority - patterned after the enumeration of Python logging levels, in
   the set (DEBUG, WARN, INFO, ERROR, CRITICAL)
payload - a Python dictionary of attributes
Message example:
{ 'publisher_id': 'compute.host1',
  'timestamp': '2011-05-09 22:00:14.621831',
  'priority': 'WARN',
  'event_type': 'compute.create_instance',
  'payload': {'instance_id': 12, ... }}
There was a lot of concern voiced over messages backing up in any of the
queueing implementations, as well as the intended priority of one message
over another. There are a couple of immediately obvious solutions to this.
We think the simplest solution is to implement N queues, where N is equal
to the number of priorities. Afterwards, consuming those queues is
implementation specific and dependent on the solution that works best for
the user.
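A sketch of the N-queues idea in a driver. The `notifications.` queue-name prefix and the plain dict standing in for a real message queue are assumptions for illustration; the priority set comes from the proposal above:

```python
# Priority set proposed above, patterned after Python logging levels.
PRIORITIES = ('DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL')

def queue_name(priority):
    """One queue per priority level. The 'notifications.' prefix is an
    assumed naming convention, not part of the proposal itself."""
    if priority not in PRIORITIES:
        raise ValueError('unknown priority: %r' % (priority,))
    return 'notifications.%s' % priority.lower()

def publish(queues, message):
    """Route a message to the queue matching its priority.
    `queues` maps queue name -> list of messages, standing in for a
    real MQ; a consumer can then drain each priority independently."""
    queues.setdefault(queue_name(message['priority']), []).append(message)
```

Since consumers drain each priority queue separately, a backlog of DEBUG messages can never delay delivery of CRITICAL ones, which is the concern the N-queues proposal addresses.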
The current plan for the Rackspace specific implementation is to use
PubSubHubBub, with a dedicated worker consuming the notification queues
and providing the glue necessary to work with a standard Hub
implementation. I have a very immature worker 

Re: [Openstack] Notifications proposal

2011-05-10 Thread Eric Day
For the record, I should also say I think RabbitMQ is awesome and
should be used for deployments where it makes sense. Keeping it
modular and also allowing burrow to be an option will make more sense
for some deployments.

-Eric

On Tue, May 10, 2011 at 07:52:55PM +, Matt Dietz wrote:
 For the record, I like the idea of using Burrow at this level. I certainly
 don't expect everyone to go to the trouble of setting up something like
 PSHB to get their notifications. I can look at adding another driver for
 Burrow in addition to Rabbit so there are plenty of options.
 

Re: [Openstack] Notifications proposal

2011-05-09 Thread Jorge Williams

On May 9, 2011, at 5:20 PM, Matt Dietz wrote:

Message example:

{ 'publisher_id': 'compute.host1',
  'timestamp': '2011-05-09 22:00:14.621831',
  'priority': 'WARN',
  'event_type': 'compute.create_instance',
  'payload': {'instance_id': 12, ... }}

There was a lot of concern voiced over messages backing up in any of the 
queueing implementations, as well as the intended priority of one message over 
another. There are a couple of immediately obvious solutions to this. We think 
the simplest solution is to implement N queues, where N is equal to the number of 
priorities. Afterwards, consuming those queues is implementation specific and 
dependent on the solution that works best for the user.

The current plan for the Rackspace specific implementation is to use 
PubSubHubBub, with a dedicated worker consuming the notification queues and 
providing the glue necessary to work with a standard Hub implementation. I have 
a very immature worker implementation at https://github.com/Cerberus98/yagi if 
you're interested in checking that out.


Some thoughts:

In order to support PubSubHubbub you'll need each message to also contain 
a globally unique ID. It would also be nice if you had the concept of 
categories. I realize you kinda get that with the event type 
'compute.create_instance', but there are always going to be messages that may 
belong to multiple categories. Also, ISO timestamps with a 'T' 
(2011-05-09T22:00:14.621831) are way more interoperable -- I would also include 
a timezone designator 'Z' for standard time (2011-05-09T22:00:14.621831Z) -- 
otherwise some implementations assume the local timezone.

-jOrGe W.


Re: [Openstack] Notifications proposal

2011-05-09 Thread Jorge Williams

On May 9, 2011, at 6:39 PM, Matt Dietz wrote:

Jorge,

   Thanks for the feedback!

   Regarding the message format, we actually don't need the unique ID in the 
generic event format, because that's implementation specific. The external 
publisher I've implemented actually does all of the PubSubHubbub-specific heavy 
lifting for you. The idea behind keeping this system simple at the nova layer 
is allowing people to implement anything they'd like, such as emails or paging.

I guess, I'm not seeing the whole picture.  So these are internal messages? 
Will they cross service boundaries / zones?  I'm sorry I missed the 
conversation at the summit :-) Is there a blueprint I should be reading?


For categories, were you considering this to be a list? Could you give an 
example of an event that would span multiple categories?


From an Atom perspective, I suppose anything a client might want to key in on 
or subscribe to may be a category. So 'create' may be a category -- a billing 
layer may key in on all create messages and ignore others. 'compute' may also 
be a category -- you can aggregate messages from other services, so it'd be 
nice for messages from compute to have their own category. To my knowledge, 
Atom doesn't have the concept of priority, so 'WARN' may also be a category. I 
suppose if these are internal messages, an external publisher can split the 
event_type and priority into individual categories.

Finally, I can make the changes to the timestamp. This was just a hypothetical 
example, anyway.


Okay cool, thanks Matt.



