Thanks for the reply. We are currently deciding between Kafka Streams and
Samza. Which do you think would be more appropriate?
Also, for files over 1 MB, would you increase the default Kafka message size
limit, break the document into chunks, or pass a reference in the message?
Thanks again
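[Editor's note: for reference on the size question above, these are the Kafka settings usually involved in raising the ~1 MB default. This is a sketch only; names are from recent Kafka releases, so check the configuration reference for your version, and keep all three layers (broker, producer, consumer) consistent.]

```properties
# Broker (server.properties): largest record batch the broker will accept
message.max.bytes=5242880

# Per-topic override of the broker setting, if only some topics carry large docs
max.message.bytes=5242880

# Producer: maximum size of a single request; must cover the largest record
max.request.size=5242880

# Consumer: bytes fetched per partition; must be at least the largest message
max.partition.fetch.bytes=5242880
```

For documents that can grow well past a few megabytes, the common alternative is the claim-check pattern: store the document in external storage and pass only a reference through Kafka, which avoids touching broker limits at all.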
On Sun, 28 Apr 2019
Hi Rob,
Yes, your use-case is a good fit. You can use Samza for fault-tolerant
stream processing.
We have document-standardization use-cases at LinkedIn (e.g., member
profiles, articles/blogs) powered by Samza.
Please let us know if you have further questions!
On Sun, Apr 28, 2019 at 7:09 AM
I'm looking at creating a distributed streaming pipeline for processing text
documents (e.g. cleaning, NER, and machine learning). Documents will generally
be under 1 MB and processing will be stateless. I was aiming to feed documents
from various sources and additional data into Kafka to be streamed to th