Hi,

As we discussed, I will use a database and scheduler-task based approach to
send the push notifications. Operations will be stored in the database with
an additional status column. A scheduler task running on a specific node
will periodically fetch the set of devices with pending operations and push
notifications to those devices.

Initially those operations will be in the SCHEDULED state, and if the push
notification is successful the state will be changed to COMPLETED. I will
update the thread with detailed implementation information as the work
progresses.
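As a rough illustration, the flow could look like the following minimal
sketch (the class, method, and field names here are placeholders I made up
for illustration, not the actual IoT server code; the map stands in for the
operations table and the real task would be driven by the task scheduler):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Operation status values as discussed: SCHEDULED initially,
// COMPLETED once the push notification succeeds.
enum Status { SCHEDULED, COMPLETED }

class PushNotificationTask {
    // Stand-in for the operations table: operationId -> status.
    final Map<Integer, Status> operations = new ConcurrentHashMap<>();
    // Records which operations were pushed, for illustration only.
    final List<Integer> pushed = new CopyOnWriteArrayList<>();

    void addOperation(int operationId) {
        operations.put(operationId, Status.SCHEDULED); // initial state
    }

    // Body of the periodic task: fetch SCHEDULED operations,
    // push to devices, and mark successful ones COMPLETED.
    // Failed pushes stay SCHEDULED for a later retry run.
    void runOnce() {
        operations.forEach((id, status) -> {
            if (status == Status.SCHEDULED && push(id)) {
                operations.put(id, Status.COMPLETED);
            }
        });
    }

    // Placeholder for the actual provider call (FCM/APNS/WNS/MQTT).
    boolean push(int operationId) {
        pushed.add(operationId);
        return true;
    }
}
```

In the real implementation, runOnce() would be invoked periodically on the
designated node only, so no cross-node coordination is needed.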

Thanks.
Waruna

On Thu, Mar 23, 2017 at 2:14 PM, Geeth Munasinghe <[email protected]> wrote:

>
>
> On Thu, Mar 23, 2017 at 12:15 PM, Sinthuja Ragendran <[email protected]>
> wrote:
>
>> Hi Geeth,
>>
>>
>> On Thu, Mar 23, 2017 at 11:42 AM, Geeth Munasinghe <[email protected]>
>> wrote:
>>
>>> Hi all,
>>>
>>> IMO, DB updates and in-memory queues are two different methods by which
>>> we can resolve this issue. Updating the DB supports near-reliable
>>> messaging, but it can be done only from a single server, because we
>>> cannot have multiple servers reading and updating the database without
>>> coordination. Implementing coordination will be a much more complex task.
>>>
>>
>> IMO we need to go with database sharding for this. As these are
>> independent notifications for devices, I don't think we need complex
>> coordination logic for this.
>>
>
> Coordination is about not sending the same push notification to the same
> device from multiple instances. Sharding will add more complexity, and we
> will have to figure out ways to shard the data between multiple DB
> instances. As of now we cannot support 20,000 devices with the current
> implementation, and asking a user to shard the database for 20,000 devices
> is unrealistic. I think that if the user needs DB records updated, then we
> should handle it from a single server, or a single server per device type.
>
>
>>>
>>> And in-memory queues can be run from any worker node, hence supporting a
>>> distributed architecture. This way it will be easy to manage and
>>> implement.
>>>
>>
>> You mean to use a distributed in-memory queue, i.e., a Hazelcast queue,
>> as we are in C4-based components? I think we experienced some issues with
>> that in the past, hence we should avoid using it IMO.
>>
>
> What I meant by distributed is that every worker node can run a task to
> push notifications when in-memory queues are used. When a worker node
> receives an operation payload to send to a large number of devices, that
> worker node itself can send the notifications using an in-memory queue. So
> it does not need a separate task server as the DB update approach does.
>
>
>
>> Thanks,
>> Sinthuja.
>>
>>
>>>
>>> So both approaches have pros and cons. The DB update approach is for
>>> someone who needs reliable push notifications, while in-memory queues
>>> can support a system that does not need reliable messaging. In my
>>> opinion, EMM use cases are the latter: if one message is not delivered
>>> to the device, it will not matter much, because once the device receives
>>> at least one push message later, it will wake up and retrieve all the
>>> pending operations in one go.
>>>
>>> Therefore I think we need to implement both of these approaches and let
>>> the user select whichever way they deem best.
>>>
>>> And apart from that, these two approaches are required to manage the
>>> load only when a device wake-up message is sent as the push
>>> notification. When the whole message is sent as the push notification,
>>> as with MQTT, we do not need either of the above approaches. So AFAICT
>>> most IoT-related use cases will not require this kind of load balancing,
>>> as it can be handled by the push messaging server.
>>>
>>> Thanks
>>> Geeth
>>>
>>> On Thu, Mar 23, 2017 at 9:17 AM, Chathura Ekanayake <[email protected]>
>>> wrote:
>>>
>>>> Without having a separate retry task, can we have a single task running
>>>> in one node to read messages from the DB and send them to devices? Then
>>>> the behavior for sending incoming operation requests and for sending
>>>> messages after a restart will be similar. In addition, we can avoid an
>>>> in-memory store of messages, which could cause scalability issues.
>>>>
>>>> - Chathura
>>>>
>>>> On Wed, Mar 22, 2017 at 10:41 PM, Waruna Jayaweera <[email protected]>
>>>> wrote:
>>>>
>>>>> Hi Chathura,
>>>>>
>>>>> We cannot read from the DB, since in a clustered environment multiple
>>>>> nodes trying to read from the same operation table will cause duplicate
>>>>> push messages to be sent. Anyway, these bulk device requests will come
>>>>> to a given node at one time, so that node can store them in a queue
>>>>> temporarily and send push messages in batches. If the node restarts, we
>>>>> can have a retry task (only in the manager node) to read from the
>>>>> database.
>>>>>
>>>>> Thanks,
>>>>> Waruna
>>>>>
>>>>> On Wed, Mar 22, 2017 at 3:58 PM, Chathura Ekanayake <[email protected]
>>>>> > wrote:
>>>>>
>>>>>> I think option 2 is better. Using a message store will also
>>>>>> eventually access persistent storage (probably with multiple I/O
>>>>>> calls), so the additional DB call in option 2 is not significant. In
>>>>>> addition, option 1 adds an extra hop to the message flow.
>>>>>>
>>>>>> Why do we need a concurrent queue for option 2 (as we anyway store
>>>>>> all operations in the DB)? Is it a cache?
>>>>>>
>>>>>> - Chathura
>>>>>>
>>>>>> On Wed, Mar 22, 2017 at 11:52 AM, Waruna Jayaweera <[email protected]>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> In device-server communication, push notifications are used to
>>>>>>> notify devices to retrieve their device operations. They are also
>>>>>>> used in policy monitoring, where we send a push notification to all
>>>>>>> devices during the monitoring task. There are different push
>>>>>>> notification strategies: APNS for iOS, GCM/FCM for Android, WNS for
>>>>>>> Windows, and MQTT-based push notifications.
>>>>>>>
>>>>>>> *Problem*
>>>>>>> In the current implementation, when you assign a device operation to
>>>>>>> multiple devices, or during policy monitoring, all push notifications
>>>>>>> are sent to all devices at once. This causes a sudden request burst
>>>>>>> when all the devices try to access the server for their device
>>>>>>> operations. We also need to think about the reliability of push
>>>>>>> notifications in case some of them fail. To solve this issue we have
>>>>>>> multiple solutions.
>>>>>>>
>>>>>>> *Solutions*
>>>>>>>
>>>>>>> *1) Use Message store and Message Processors*
>>>>>>>
>>>>>>> We can send each push notification message to the inbuilt Synapse
>>>>>>> message store via a proxy. Then we can have a sampling message
>>>>>>> processor to send those push notifications to the given notification
>>>>>>> provider. In the sampling message processor we can define the number
>>>>>>> of messages per batch and the time interval between batches. The push
>>>>>>> notification logic will be inside a sequence with a custom mediator.
>>>>>>>
>>>>>>> Pros:
>>>>>>> Reliability can be achieved using a JMS message store.
>>>>>>> Minimal code.
>>>>>>>
>>>>>>> Cons:
>>>>>>> The IoT server has to depend on message processors.
>>>>>>>
>>>>>>>
>>>>>>> *2) Java Queue with Scheduler Task*
>>>>>>>
>>>>>>> We can have a Java concurrent queue to store device notifications as
>>>>>>> batch jobs. A scheduler task will read the jobs, process them, and
>>>>>>> send the notifications. Each batch job object will contain a set of
>>>>>>> devices up to the given batch size, and the scheduler task will run
>>>>>>> at a configured delay interval.
>>>>>>>
>>>>>>> Pros:
>>>>>>> Does not need to rely on another feature.
>>>>>>>
>>>>>>> Cons:
>>>>>>> We need to maintain the push status in the database, so there will be
>>>>>>> an additional database call in order to make notification sending
>>>>>>> reliable. Later we can add retry logic for failed notifications based
>>>>>>> on the DB status.
>>>>>>>
>>>>>>> If the device list is smaller than the number of devices per batch,
>>>>>>> those commands will be sent immediately.
>>>>>>> IMO option 2 is better since we do not rely on any other
>>>>>>> implementation.
>>>>>>> I would appreciate any suggestions.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Waruna
>>>>>>>
>>>>>>> --
>>>>>>> Regards,
>>>>>>>
>>>>>>> Waruna Lakshitha Jayaweera
>>>>>>> Senior Software Engineer
>>>>>>> WSO2 Inc; http://wso2.com
>>>>>>> phone: +94713255198
>>>>>>> http://waruapz.blogspot.com/
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>> _______________________________________________
>>>> Architecture mailing list
>>>> [email protected]
>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>
>>>>
>>>
>>>
>>> --
>>> *G. K. S. Munasinghe*
>>> *WSO2, Inc. http://wso2.com <http://wso2.com/> *
>>> *lean.enterprise.middleware.*
>>>
>>> email: [email protected]
>>> phone: (+94) 777911226
>>>
>>>
>>>
>>
>>
>> --
>> *Sinthuja Rajendran*
>> Technical Lead
>> WSO2, Inc.:http://wso2.com
>>
>> Blog: http://sinthu-rajan.blogspot.com/
>> Mobile: +94774273955
>>
>>
>>
>>
>>
>
>
>
>
>


