Security wouldn’t stop zombie processes from writing to Kafka. I had this 
problem with YARN before: the container thought it was killing jobs, but 
they never actually died and in fact continued to write to Kafka.
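
For what it’s worth, the usual way to guard against this class of zombie 
writer is a fencing token: each time a container is (re)started it is handed 
a strictly increasing epoch, every changelog record carries that epoch, and 
the read/restore path drops anything tagged with a stale epoch. Below is a 
minimal, self-contained Java sketch of that idea -- it is not how Samza or 
Kafka handle this today, and all the names in it (ChangelogEntry, 
ChangelogValidator, writerEpoch) are made up for illustration.

import java.util.HashMap;
import java.util.Map;

public class ZombieFencingDemo {

    /** A changelog entry tagged with the epoch of the writer that produced it. */
    record ChangelogEntry(String key, String value, long writerEpoch) {}

    /** Tracks the highest epoch seen per partition and rejects stale writers. */
    static class ChangelogValidator {
        private final Map<Integer, Long> latestEpoch = new HashMap<>();

        boolean accept(int partition, ChangelogEntry e) {
            long current = latestEpoch.getOrDefault(partition, -1L);
            if (e.writerEpoch() < current) {
                return false; // zombie writer: its epoch is out of date
            }
            latestEpoch.put(partition, e.writerEpoch());
            return true;
        }
    }

    public static void main(String[] args) {
        ChangelogValidator validator = new ChangelogValidator();

        // Epoch 1: the original container writes normally.
        System.out.println(validator.accept(0, new ChangelogEntry("k", "v1", 1)));    // true

        // The container is restarted and handed epoch 2.
        System.out.println(validator.accept(0, new ChangelogEntry("k", "v2", 2)));    // true

        // The zombie, still holding epoch 1, keeps writing; its entry is rejected.
        System.out.println(validator.accept(0, new ChangelogEntry("k", "stale", 1))); // false
    }
}

The catch, of course, is that the check has to happen somewhere the zombie 
cannot bypass, which is exactly the coordination cost Jagadish mentions below.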


> On Feb 10, 2016, at 4:23 PM, Jagadish Venkatraman <jagadish1...@gmail.com> 
> wrote:
> 
> Hi John
> 
> Currently there is no authorization on who writes to Kafka. There is a
> Kafka security proposal that the Kafka community is working on:
> https://cwiki.apache.org/confluence/display/KAFKA/Security
> 
> Building this into Samza may entail expensive coordination (to prevent
> other jobs from writing). Since jobs are usually run in a trusted
> environment, I've not seen people request this use case. Even if we did
> build this into Samza, nothing stops people from writing to that Kafka
> topic by bypassing Samza completely (through the Kafka producer or an
> external library).
> 
> I'd expect Kafka to build support for authorization, principals, roles,
> etc. in the future, and Samza can leverage it once that's done.
> 
> Thoughts?
> 
> On Wednesday, February 10, 2016, John Dennison <dennison.j...@gmail.com>
> wrote:
> 
>> Greetings,
>> 
>> I have a general design question I did not see addressed in the docs.
>> Basically, how does Samza guarantee a single writer for each changelog
>> partition? Given the strong ordering assumption for these changelogs, how
>> do you protect against zombie processes writing to the changelog with
>> out-of-date values?
>> 
>> Thanks,
>> 
>> John
>> 
