This would be awesome in Pharo: https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol. Since we cannot yet call JARs, implementing the wire protocol and API proxies would really be huge from an enterprise Pharo standpoint. If other folks are interested in working on this, I could help out.

Each topic has many partitions, and each partition is replicated for fail-over. Say #pharo has 100 partitions with 2 replicas each; that totals 300 partition copies, with two fail-overs per partition. Incoming #pharo events would be partitioned by some algorithm.
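
For the partitioning step, here is a minimal sketch of the usual key-hash scheme in Pharo. Nothing below is an existing API; KafkaProducer and partitionsOf: are names I'm making up just to show the shape:

KafkaProducer >> partitionFor: aKey in: aTopic
	"Hash the message key modulo the partition count; events without a key go to a random partition."
	| count |
	count := (self partitionsOf: aTopic) size.
	aKey ifNil: [ ^ count atRandom - 1 ].
	^ aKey hash \\ count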

The basic consumer talks to ZooKeeper to get the topology and then talks to the partitions via "fetchers" that put blocks of in-sequence events into a shared queue. The application-level consumer then processes them single-threaded by consuming from that shared queue.
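
In Pharo terms that could look roughly like this. SharedQueue, fork and repeat are standard Pharo; KafkaConsumer, fetchNextBlock and handleEvent: are placeholders I'm assuming:

KafkaConsumer >> startFetchingFrom: partitions
	"One forked fetcher per partition pushes blocks of in-sequence events onto a SharedQueue; a single application-level process drains it, so event handling stays single-threaded."
	| queue |
	queue := SharedQueue new.
	partitions do: [ :partition |
		[ [ partition fetchNextBlock do: [ :event | queue nextPut: event ] ] repeat ] fork ].
	[ [ self handleEvent: queue next ] repeat ] fork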

The basic producer, on the other hand, reads the topology from ZooKeeper and then writes to the various partitions.
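
Sketching the producer side the same way, reusing partitionFor:in: from above (topology, leaderForTopic:partition: and sendProduceRequest:to: are again invented names, not a real API):

KafkaProducer >> send: anEvent key: aKey topic: aTopic
	"Pick the partition, look up its leader broker from the ZooKeeper-derived topology, and send the produce request there."
	| partition broker |
	partition := self partitionFor: aKey in: aTopic.
	broker := self topology leaderForTopic: aTopic partition: partition.
	^ self sendProduceRequest: anEvent to: broker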

regards,
robert
