Jerome,

In these cases you typically define all of the possible paths in your
topology up front, but the routing itself can be dynamic.  One
pattern I often see is a star-like pattern:

Spout -> router
router -> DBBolt
router -> KafkaBolt
router -> ESBolt
router -> streamingJoin -> druidBolt
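A rough sketch of that star wiring with Storm's TopologyBuilder — note that all the spout/bolt class names here (MqttSpout, RouterBolt, etc.) and the stream ids ("db", "kafka", "es", "druid") are hypothetical placeholders, not real classes:

```java
// Wiring sketch only: assumes hypothetical spout/bolt classes and
// illustrative stream ids that the router bolt would declare.
TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("spout", new MqttSpout());
builder.setBolt("router", new RouterBolt()).shuffleGrouping("spout");
// Each downstream bolt subscribes to one named stream from the router.
builder.setBolt("dbBolt", new DbBolt()).shuffleGrouping("router", "db");
builder.setBolt("kafkaBolt", new KafkaWriterBolt()).shuffleGrouping("router", "kafka");
builder.setBolt("esBolt", new EsBolt()).shuffleGrouping("router", "es");
builder.setBolt("streamingJoin", new StreamingJoinBolt()).shuffleGrouping("router", "druid");
builder.setBolt("druidBolt", new DruidBolt()).shuffleGrouping("streamingJoin");
```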

Then in the router you have code that looks at the tuple coming in and
decides which of the downstream paths it should send the message to.  For
one type of message it might need to go to just the DB.  For others you
might want to send it to elasticsearch + druid.

This gives you a lot of flexibility, and Storm's acking can handle it
so long as you make sure to ack the incoming tuple only after you have
emitted and anchored the downstream messages.
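To make the emit-then-ack ordering concrete, here is a plain-Java stand-in for what the router bolt's execute() does (the stream names and message types are made up; the real Storm calls are shown in comments):

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch of the router logic, no Storm dependency.
// In a real bolt, "emit" would be collector.emit(stream, input, values)
// with `input` as the anchor, and "ack" would be collector.ack(input).
class RouterSketch {

    // Decide which downstream streams a message fans out to.
    // Message types and stream names here are illustrative only.
    static List<String> route(String messageType) {
        switch (messageType) {
            case "metric":  return List.of("db");
            case "partner": return List.of("es", "druid");
            default:        return List.of("db", "kafka");
        }
    }

    // Returns the operations in order, to show that the ack of the
    // incoming tuple comes only after every anchored emit.
    static List<String> process(String messageType) {
        List<String> ops = new ArrayList<>();
        for (String stream : route(messageType)) {
            // Storm: collector.emit(stream, input, new Values(...));
            ops.add("emit:" + stream);
        }
        // Only once everything downstream is emitted + anchored:
        // Storm: collector.ack(input);
        ops.add("ack");
        return ops;
    }

    public static void main(String[] args) {
        System.out.println(process("partner"));
    }
}
```

Because each emitted tuple is anchored to the incoming one before the ack, a failure anywhere downstream will replay the original tuple from the spout.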

Thanks,

Bobby

On Thu, May 17, 2018 at 4:44 AM jerome moliere <jer...@javaxpert.com> wrote:

> Hi all,
> I am quite new to Storm, so sorry if my question sounds stupid.
> I am working with connected devices collecting metrics through MQTT.
>
> So I have got a source of data with my MQTT server
> I will use a spout to store into db this data
>
> Depending on the contents of the message, I may need to start some
> workflows, like pushing the message into another queue or db (when working
> with partners' IS).
>
> I am not sure that dynamic topologies are even possible with Storm. If
> possible what is the proper way to do that ?
>
> Thanks for your help
>
> Kind regards
> Jerome
>
