Github user srowen commented on the pull request:

    https://github.com/apache/spark/pull/1804#issuecomment-51606301
  
    @tdas also 'porting' my comment -- the problem is that right now, without 
the dependency declared in `core`, there is no way to control which version of 
ZK goes into Spark core. You get what Curator happens to depend on. Core 
already depends on ZK indirectly.
    
    However, I see no reason that Spark cares about ZK per se. Vendors can 
control the ZK version downstream if it matters. So it's probably less 
confusing to just remove `zookeeper.version` too. Right now it's overridden in 
the MapR profile, as if it does something, when it doesn't.
    
    The explicit dependency in Kafka tests can directly specify a version of, 
say, 3.4.5 for its own purpose. That fully contains the ZK dependency issue and 
seems entirely correct to me. Updated PR coming...
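
    As a sketch of the idea above (the coordinates and scope are illustrative,
    not taken from the actual PR), the Kafka module could pin ZK 3.4.5 directly
    in its own POM instead of inheriting whatever Curator pulls in transitively:

```xml
<!-- Hypothetical sketch: declare ZooKeeper explicitly in the Kafka module's
     pom.xml so its version is controlled there, rather than resolved
     transitively through Curator. -->
<dependency>
  <groupId>org.apache.zookeeper</groupId>
  <artifactId>zookeeper</artifactId>
  <version>3.4.5</version>
  <scope>test</scope>
</dependency>
```

    With the dependency declared directly, Maven's "nearest definition wins"
    resolution makes this version take precedence over any transitive one,
    which is what contains the ZK issue to the Kafka tests.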


