Hi!

I've been thinking about Nathan Marz lambda architecture with the
components:

1. Kafka as the message bus, the entry point for raw data.
2. Camus to dump data into HDFS (the batch layer).
3. Storm to write data into HBase (the speed layer).

I guess this is the "classical" architecture (the theory). However, thinking
about the Storm-to-HDFS connector from P. Taylor Goetz: is dumping processed
data from Storm into HDFS a good idea in the context of the lambda
architecture? Do you think this could undermine the speed-layer concept, or
is the Storm-to-HDFS connector intended for other use cases?

Many thanks

--
Javi Roman
