Two solutions:

1. You can group users by some classification and create topics based on
that; then, for each user, the consumer can check whether it's interested
in the topic and consume or reject the messages.
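A minimal sketch of this idea, assuming a fixed number of classification
groups and a hash of the user id to pick the group (the names
`topicFor`, `interestedIn`, and `NUM_GROUPS` are illustrative, not from
any Kafka/Storm API):

```java
import java.util.Set;

public class UserTopicRouter {
    // Hypothetical: number of classification groups / topics.
    static final int NUM_GROUPS = 16;

    // Producer side: derive a topic name from a stable hash of the user id,
    // so each user consistently maps to one of NUM_GROUPS topics.
    static String topicFor(String userId) {
        int group = Math.floorMod(userId.hashCode(), NUM_GROUPS);
        return "user-events-" + group;
    }

    // Consumer side: accept only messages for the users this consumer owns,
    // rejecting everything else in the subscribed topic.
    static boolean interestedIn(String userId, Set<String> ownedUsers) {
        return ownedUsers.contains(userId);
    }

    public static void main(String[] args) {
        System.out.println("user-42 -> " + topicFor("user-42"));
        System.out.println(interestedIn("user-42", Set.of("user-42", "user-7")));
    }
}
```

With 100,000s of users this keeps the topic count bounded at NUM_GROUPS,
at the cost of each consumer filtering out messages it doesn't own.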

2. If each user writes a lot of data, you can use a custom key-based
partitioner so that all of a user's messages land in the same partition.
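The partitioning logic can be sketched in plain Java as follows. Note
this is an illustration of the idea, not Kafka's actual algorithm (the
real DefaultPartitioner hashes key bytes with murmur2), and the class
and method names here are hypothetical:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class KeyPartitionSketch {
    // Map a user key to a partition: same key always yields the same
    // partition, so one user's events stay ordered in a single partition.
    static int partitionFor(String userKey, int numPartitions) {
        byte[] keyBytes = userKey.getBytes(StandardCharsets.UTF_8);
        // Mask off the sign bit so the modulo result is non-negative.
        return (Arrays.hashCode(keyBytes) & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // The same key maps to the same partition on every call.
        System.out.println(partitionFor("user-42", 12));
        System.out.println(partitionFor("user-42", 12));
    }
}
```

In Kafka you would get this behavior simply by producing with the user
id as the record key; a custom Partitioner is only needed if the default
key hashing doesn't suit your distribution.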

For consumption, depending on write volume and velocity, you may want to
consider using a database for viewing events per user.

On Sep 21, 2016 3:01 PM, "Ivan Gozali" <> wrote:

> Hi everyone,
> I'm very new to Storm, and have read various documentation but haven't
> started using it.
> I have a use case where I could potentially have many users producing data
> points that are accumulated in one huge, single Kafka topic/Kinesis stream,
> and I was going to use Storm to "route" per-user mini-streams coming from
> this single huge stream to multiple processors.
> I was wondering how this use case is typically handled. I was going to
> create a topology (where the spout consumes the big stream) for each user's
> mini-stream that is then pushed to some new derived stream in
> Kinesis/Kafka, but this doesn't seem right, since there could be 100,000s,
> if not 1,000,000s of users and I would be creating 1,000,000 topics.
> Thanks in advance for any advice!
> --
> Regards,
> Ivan Gozali
