Github user jongyoul commented on the pull request:
https://github.com/apache/incubator-zeppelin/pull/31#issuecomment-91746744
@RamVenkatesh It looks like there is a difference between Spark and vanilla
Hadoop. Look at
https://github.com/apache/spark/blob/master/pom.xml#L1620-L1631: Spark's
hadoop-2.4 profile sets jackson.version to 1.9.13. As I've told you, I've
tested Jackson 1.8.8 and 1.9.13 for parsing JSON with Spark 1.2.0, 1.2.1,
and 1.3.0 on Hadoop 2.3, 2.4, 2.5, 2.3.0-cdh5.0.1, 2.5.0-cdh5.3.0, and
2.5.0-cdh5.3.1 at production level. It works fine with Hive and Hive on
Spark. In fact, the exact Jackson version matters less than keeping the same
version across jackson-core-asl, jackson-mapper-asl, jackson-xc, and
jackson-jaxrs. You can find these in
https://github.com/apache/spark/blob/master/pom.xml#L934-L955. Finally, I
think pinning the Jackson libraries to 1.9.13 is enough to support
hadoop-2.6. Concerning a (secure) Hadoop cluster, I haven't tested it, but
it should be OK because I believe Spark is already tested against it. For
now, Zeppelin actually uses Hadoop through Spark. If you find a serious
problem while using Hadoop, feel free to talk to me. I know you patched
HiveInterpreter. Is there any problem using HiveInterpreter with this
Jackson version?
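
For what it's worth, the "same version across all four artifacts" point above could be expressed in a Maven dependencyManagement block like the sketch below. This is only an illustration, not Zeppelin's actual pom: it assumes a `jackson.version` property (the same property name Spark's pom uses) and relies on the `org.codehaus.jackson` groupId that the Jackson 1.x artifacts are published under.

```xml
<!-- Sketch: pin all four Jackson 1.x artifacts to one version so no
     transitive dependency (e.g. from Hadoop) can pull in a mismatched one. -->
<properties>
  <jackson.version>1.9.13</jackson.version>
</properties>

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.codehaus.jackson</groupId>
      <artifactId>jackson-core-asl</artifactId>
      <version>${jackson.version}</version>
    </dependency>
    <dependency>
      <groupId>org.codehaus.jackson</groupId>
      <artifactId>jackson-mapper-asl</artifactId>
      <version>${jackson.version}</version>
    </dependency>
    <dependency>
      <groupId>org.codehaus.jackson</groupId>
      <artifactId>jackson-xc</artifactId>
      <version>${jackson.version}</version>
    </dependency>
    <dependency>
      <groupId>org.codehaus.jackson</groupId>
      <artifactId>jackson-jaxrs</artifactId>
      <version>${jackson.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

With dependencyManagement in place, `mvn dependency:tree` should show a single consistent version for all four artifacts regardless of which Hadoop profile is active.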