Github user koeninger commented on the pull request:

    https://github.com/apache/spark/pull/3798#issuecomment-72783141
  
    Yeah, and more importantly it's so that defaults for things like connection
    timeouts match what Kafka provides.
    
    It's possible to assign a fake zookeeper.connect and have it pass
    verification; that's what the existing code does.
    
    Unfortunately, ConsumerConfig has a private constructor, so subclassing it
    so that the broker list passes verification without that warning may
    prove tricky.  Worst case, I'll re-implement a config that uses the
    Kafka defaults.
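
    To illustrate the workaround being discussed, here is a minimal sketch
    (`withFakeZookeeper` is a hypothetical helper for illustration, not the
    actual Spark code): pre-populate the properties with placeholder values
    for the keys ConsumerConfig verifies, so construction succeeds while
    everything else still falls back to Kafka's own defaults.

```scala
import java.util.Properties

// Hypothetical helper (assumption, not Spark's implementation): fill in
// placeholder values for the keys that kafka.consumer.ConsumerConfig
// verifies, so a config built from these properties passes the check while
// still picking up Kafka's defaults for everything else (e.g. timeouts).
def withFakeZookeeper(kafkaParams: Map[String, String]): Properties = {
  val props = new Properties()
  kafkaParams.foreach { case (k, v) => props.put(k, v) }
  // ConsumerConfig insists on zookeeper.connect (and group.id) being set;
  // empty placeholders satisfy verification for a broker-list-only setup.
  if (!props.containsKey("zookeeper.connect")) props.put("zookeeper.connect", "")
  if (!props.containsKey("group.id")) props.put("group.id", "")
  props
}

// Usage sketch (requires the Kafka 0.8 jar on the classpath):
// val config = new kafka.consumer.ConsumerConfig(
//   withFakeZookeeper(Map("metadata.broker.list" -> "host1:9092,host2:9092")))
```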
    
    On Tue, Feb 3, 2015 at 9:05 PM, Tathagata Das <[email protected]>
    wrote:
    
    > I see. ConsumerConfig is really necessary only for the high-level consumer,
    > but you are using it to configure stuff in the low-level consumer as well.
    > That is so that you don't have to introduce parameter strings to configure
    > them yourself.
    >
    > Is it possible to assign a fake but verified zookeeper.connect?
    >
    > —
    > Reply to this email directly or view it on GitHub
    > <https://github.com/apache/spark/pull/3798#issuecomment-72782434>.
    >



