As per meeting (Participants: Sanjiva, Shankar, Sumedha, Anjana, Miyuru,
Seshika, Suho, Nirmal, Nuwan)

Currently we generate several events per message from our products. For
example, when a message hits APIM, the following events are generated.


   1. 1 from the HTTP level
   2. 1-2 from authentication and authorization logic
   3. 1 from throttling
   4. 1 for ESB-level stats
   5. 2 for the request and response

If APIM is handling 10K TPS, that means DAS is receiving events at about
80K TPS. Although the data bridge that transfers events is fast, writing to
disk (via an RDBMS or HBase) is a problem. We can scale HBase; however, that
leads to a scenario where an APIM deployment needs a very large DAS
deployment.

We decided to figure out a way to collect all the events and send a single
event to DAS. The basic idea is to extend the data publisher library so
that the user can keep adding readings to the library, and it will collect
the readings and send them over as a single event to the server.
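To make the idea concrete, here is a minimal sketch of such an extended publisher. All class and method names are illustrative assumptions, not the actual data bridge / data publisher API: products keep calling addReading(), and flush() combines everything collected so far into one event.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: BatchingPublisher, addReading, and
// sendSingleEvent are hypothetical names, not the real data publisher API.
public class BatchingPublisher {
    private final List<Object[]> readings = new ArrayList<>();

    // Products call this instead of publishing one event per reading.
    public synchronized void addReading(Object... reading) {
        readings.add(reading);
    }

    // Combine all collected readings and send them as a single event.
    // Returns the number of readings folded into that one event.
    public synchronized int flush() {
        if (readings.isEmpty()) {
            return 0;
        }
        int count = readings.size();
        sendSingleEvent(new ArrayList<>(readings)); // one event, not 'count' events
        readings.clear();
        return count;
    }

    // Stand-in for the real transport (e.g. the thrift-based data bridge).
    protected void sendSingleEvent(List<Object[]> batch) {
        // no-op in this sketch
    }

    public static void main(String[] args) {
        BatchingPublisher p = new BatchingPublisher();
        p.addReading("http", 200);
        p.addReading("auth", true);
        p.addReading("throttle", false);
        System.out.println(p.flush()); // three readings collapsed into one event
    }
}
```

With this shape, the 6-8 per-message readings above would reach DAS as one event instead of 6-8.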

However, some flows might terminate in the middle due to failures. There
are two solutions.


   1. Get the product to call a flush from a finally block
   2. Get the library to auto-flush collected readings every few seconds

I feel #2 is simpler.
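Option #2 could look roughly like this. Again, the names are hypothetical, not the real publisher API; the point is that a timer inside the library flushes whatever has been collected, so a flow that dies mid-way still gets its partial readings published.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of the auto-flush idea (option #2). All names are illustrative.
public class AutoFlushingPublisher {
    private final List<Object[]> readings = new ArrayList<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public AutoFlushingPublisher(long flushIntervalMillis) {
        // The periodic flush covers flows that fail before reaching
        // any explicit flush call in a finally block.
        scheduler.scheduleAtFixedRate(this::flush,
                flushIntervalMillis, flushIntervalMillis, TimeUnit.MILLISECONDS);
    }

    public synchronized void addReading(Object... reading) {
        readings.add(reading);
    }

    public synchronized void flush() {
        if (!readings.isEmpty()) {
            // In the real library this would send one combined event to DAS.
            System.out.println("flushed " + readings.size() + " readings as one event");
            readings.clear();
        }
    }

    public void shutdown() {
        scheduler.shutdown();
        flush(); // final flush so nothing is lost on a clean shutdown
    }

    public static void main(String[] args) throws InterruptedException {
        AutoFlushingPublisher p = new AutoFlushingPublisher(100);
        p.addReading("http", 200);
        p.addReading("auth", true);
        Thread.sleep(250); // the timer fires and flushes both readings
        p.shutdown();
    }
}
```

The trade-off versus option #1 is that readings can arrive up to one interval late, and a few-second window of readings is lost on a hard crash.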

Do we have any concerns about going to this model?

Suho, Anjana: we need to think about how to do this with our stream
definitions, as we currently force streams to be defined beforehand.

--Srinath





-- 
============================
Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://home.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
_______________________________________________
Architecture mailing list
[email protected]
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
