Yeah, I agree.
I looked at CruiseControl and it seems to be using the same kind of model (as a 
producer).

The auth model we’re planning has a set of apps that each self-sign a token,
so we need a list of keys and identities we trust, and that list changes over time.
Kinda sounds like a log ;), so our plan was to push changes to a special topic, 
and the authentication logic would maintain a cache it references when 
validating tokens.
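
To make that concrete, here’s a rough sketch of the validation side, assuming the cache maps app id -> public key and a token carries an app id, a payload, and a base64 signature (the field names and the SHA256withRSA choice are placeholders for illustration, not our real format):

import java.nio.charset.StandardCharsets;
import java.security.PublicKey;
import java.security.Signature;
import java.util.Base64;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: validate a self-signed token against the in-memory
// cache of trusted keys. The token fields and algorithm are assumptions.
public final class TokenValidator {
    // appId -> trusted public key, kept up to date from the special topic
    private final Map<String, PublicKey> trustedKeys = new ConcurrentHashMap<>();

    public boolean isValid(String appId, String payload, String base64Signature) throws Exception {
        PublicKey key = trustedKeys.get(appId);
        if (key == null) {
            return false; // unknown or no-longer-trusted app
        }
        Signature verifier = Signature.getInstance("SHA256withRSA"); // algorithm is an assumption
        verifier.initVerify(key);
        verifier.update(payload.getBytes(StandardCharsets.UTF_8));
        return verifier.verify(Base64.getDecoder().decode(base64Signature));
    }

    // Called by whatever keeps the cache in sync with the topic;
    // a null key means the app is no longer trusted.
    public void updateKey(String appId, PublicKey key) {
        if (key == null) trustedKeys.remove(appId); else trustedKeys.put(appId, key);
    }
}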

It felt weird to make a Kafka consumer from inside the Kafka process (even if
it is on a side thread or something) that connects back out to the cluster,
very probably to a different broker, and especially from an authentication
module.
It sounded like it’d be easier to have one partition with a replication factor
of “all the brokers”, and then each broker would just build the cache from the
log file it already has on disk.
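
Roughly, the idea was something like the sketch below, using Kafka’s FileRecords class to parse a segment. The path and topic name are made up, and it ignores compaction, aborted transactions, and segments still being written, which is part of why it isn’t trivial:

import java.io.File;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.record.FileRecords;
import org.apache.kafka.common.record.Record;
import org.apache.kafka.common.record.RecordBatch;

// Rough sketch of the "read the segment off disk" idea, assuming a single
// partition replicated to every broker. Path and topic name are hypothetical.
public final class SegmentCacheLoader {
    public static void main(String[] args) throws Exception {
        File segment = new File("/kafka/data/_auth_keys-0/00000000000000000000.log"); // hypothetical path
        try (FileRecords records = FileRecords.open(segment)) {
            for (RecordBatch batch : records.batches()) {
                for (Record record : batch) {
                    String appId = record.hasKey() ? utf8(record.key()) : null;
                    String publicKeyPem = record.hasValue() ? utf8(record.value()) : null;
                    System.out.println(appId + " -> " + (publicKeyPem == null ? "<removed>" : "updated"));
                }
            }
        }
    }

    private static String utf8(ByteBuffer buf) {
        byte[] bytes = new byte[buf.remaining()];
        buf.duplicate().get(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }
}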

But it sounds like that’s not trivial, so it’s probably for the best to do the
right thing with consumers, even if that means all of the brokers are
consuming from broker 4, including broker 4 itself.
Still feels weird, but I think it’s the most correct approach.
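
Something like this sketch is what I mean by doing the right thing with consumers: a side thread in the callback handler tails a hypothetical "_auth_keys" topic into an in-memory map, and token validation reads that map. The bootstrap address is hard-coded here just for illustration; really it would be derived from the broker config passed to configure():

import java.time.Duration;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;
import javax.security.auth.callback.Callback;
import javax.security.auth.login.AppConfigurationEntry;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.security.auth.AuthenticateCallbackHandler;
import org.apache.kafka.common.serialization.StringDeserializer;

// Minimal sketch of the consumer-backed cache inside the callback handler.
// Topic name, group id, and bootstrap address are assumptions.
public class TrustedKeyCallbackHandler implements AuthenticateCallbackHandler {
    private final Map<String, String> trustedKeys = new ConcurrentHashMap<>();
    private volatile boolean running = true;

    @Override
    public void configure(Map<String, ?> configs, String saslMechanism,
                          List<AppConfigurationEntry> jaasConfigEntries) {
        Thread poller = new Thread(() -> {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumption: derive from configs in practice
            props.put("group.id", "broker-auth-cache");       // hypothetical group
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());
            props.put("auto.offset.reset", "earliest");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("_auth_keys")); // hypothetical topic name
                while (running) {
                    for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                        if (record.value() == null) trustedKeys.remove(record.key());
                        else trustedKeys.put(record.key(), record.value());
                    }
                }
            }
        }, "auth-key-cache-poller");
        poller.setDaemon(true);
        poller.start();
    }

    @Override
    public void handle(Callback[] callbacks) {
        // Token validation against trustedKeys would happen here.
    }

    @Override
    public void close() {
        running = false;
    }
}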

> On Mar 8, 2019, at 16:26, Gwen Shapira <g...@confluent.io> wrote:
> 
> Since you have the Kafka configuration, you can open your own connection to
> ZK.
> You also have the advertised listeners from the same file, if you want to
> connect back to the Kafka cluster to check things.
> I'd use that if possible for your use case; accessing the log files
> directly seems a bit risky to me.
> 
> 
> I'd love to hear what your authentication workflow looks like where you
> need to actually read data from disk.
> 
> On Fri, Mar 8, 2019 at 1:20 PM Christopher Vollick
> <christopher.voll...@shopify.com.invalid> wrote:
> 
>> Hello! I'm experimenting with an implementation of
>> AuthenticateCallbackHandler in an external JAR I’m loading, and I'd like to
>> use some of the methods / properties from ReplicaManager (or KafkaServer
>> which has a ReplicaManager), but I don't see anything that's passed to me
>> or any singletons that will give me access to those objects from my class.
>> 
>> I figure that’s probably intentional, but I wanted to ask just in case I’m
>> missing a hook I don’t know about.
>> Specifically, I was looking to either get access to the logs on disk
>> (ReplicaManager’s fetchMessages method) so I could read some things, or
>> alternatively the ZooKeeper connection.
>> 
>> Since I’m in a JAR, any solution which involves changing the core Kafka
>> code-base isn’t something I’m interested in.
>> Thanks!
> 
> 
> 
> -- 
> *Gwen Shapira*
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter <https://twitter.com/ConfluentInc> | blog
> <http://www.confluent.io/blog>
