Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/307#issuecomment-39429880
Ah, I should have looked at this more closely. The dependency only brings in
some annotations that were later standardized. We do not use them, so we do
not need any copy of them, including Guava's. It could be omitted, since it
does not enable running Findbugs anyway.
I have the formula for adding a Findbugs plugin to the build. It's not
hard. As I say in the JIRA, it would not do much good in a Scala-based project.
(IntelliJ does have good Scala static analysis. I'd like to take a crack at
fixing all the stuff it has found. These are big, PR-busting changes,
unfortunately, so I've not suggested many. It only gets harder later, so it
seems worth doing soon if at all, but things are so busy!)
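(The "formula" isn't spelled out in this comment; as a rough sketch of what wiring Findbugs into a Maven build usually looks like, the standard `findbugs-maven-plugin` from `org.codehaus.mojo` can be bound to the build. The version and configuration values below are illustrative, not taken from the PR.)

```xml
<!-- Illustrative sketch only: registers the FindBugs Maven plugin so
     `mvn findbugs:check` (or the verify phase) runs the analysis.
     The version shown is an example from that era, not from the PR. -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>findbugs-maven-plugin</artifactId>
  <version>2.5.3</version>
  <configuration>
    <effort>Max</effort>
    <failOnError>false</failOnError>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```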
Updating Guava is good per se; it's a bit perilous when you get near Hadoop,
since even current versions of Hadoop use Guava 11.0.2, and when executing
code within a Hadoop classloader you'll end up linking against its old version
on the parent classpath. I think we've had this discussion before -- it is
somewhat less of an issue for Spark, but I confess I haven't thought through
whether it is actually a non-issue. The project is already on 14, hmm.
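(The classloader point can be illustrated with plain JDK code -- a sketch, not Spark's actual setup. Java classloaders delegate parent-first by default, so any class the parent classpath can resolve wins, even if a child loader carries a newer copy of the same library.)

```java
import java.net.URL;
import java.net.URLClassLoader;

public class ParentFirstDemo {
    public static void main(String[] args) throws Exception {
        // A child loader always asks its parent first. So if the parent
        // (e.g. a Hadoop container's classpath) already holds Guava 11,
        // a newer Guava jar visible only to the child loader is never
        // consulted for classes the parent can resolve.
        URLClassLoader child =
            new URLClassLoader(new URL[0], ParentFirstDemo.class.getClassLoader());

        Class<?> c = child.loadClass("java.lang.String");
        // Resolved by the bootstrap loader (reported as null), not the child.
        System.out.println(c.getClassLoader());  // prints "null"
    }
}
```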