GitHub user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/9893#issuecomment-174663486
Getting this installed cluster-wide is going to be non-trivial: our CI
process has separate Ivy caches for every build workspace, so the installation
of the JDBC driver JAR isn't a one-time process but needs to be automated.
Right now we can simply wipe out a build workspace whenever its caches get
corrupted, since everything will be reconstituted by re-fetching from
Maven Central.
As a result, I'd like to hold off on merging this until we get that driver
JAR published to a public Maven repository or have some other means to allow
SBT and Maven to automatically obtain it. One alternative approach would be to
just check the DB2 JDBC JAR itself into the Spark repository; this isn't ideal,
but AFAIK it's technically okay for us to have binary artifacts in our repo as
long as they're only used for testing. I'm not sure whether IBM's licensing terms
would permit this, though.
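
For concreteness, here's a rough sketch of what either route might look like
on the SBT side. The Maven coordinates, version string, and JAR path below are
all hypothetical placeholders, since the driver isn't actually published to any
public repository yet:

```scala
// Option 1: if IBM publishes the driver to a public Maven repository, the
// build could declare it as an ordinary test-scoped dependency. The
// group/artifact coordinates and version here are placeholders, not real ones.
libraryDependencies += "com.ibm.db2" % "db2jcc4" % "x.y.z" % "test"

// Option 2: if we instead check the JAR into the Spark repo (licensing
// permitting), SBT can pick it up as an unmanaged test dependency. The
// directory and file name here are hypothetical.
unmanagedJars in Test += baseDirectory.value / "external-jars" / "db2jcc4.jar"
```

Either way the JAR would resolve automatically on a fresh workspace, which is
the property our CI setup needs.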