[ https://issues.apache.org/jira/browse/SPARK-1884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-1884.
------------------------------
    Resolution: Won't Fix

This appears to be a protobuf version mismatch, which suggests Shark is being 
used with an unsupported version of Hadoop. Since Shark is deprecated and 
unlikely to add support for other Hadoop versions, and since there is a 
reasonably clear workaround for anyone who needs one, I am resolving this as 
Won't Fix.
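
For reference, the VerifyError in the trace below is the classic symptom of 
this mismatch: Hadoop 2.4.0's generated HDFS protocol classes are produced by 
protoc 2.5.0, whose generated code overrides getUnknownFields(), a method that 
is final in the protobuf 2.4.x runtime on Shark's classpath (likely via the 
bundled Hive 0.11 jars). A minimal diagnostic sketch to confirm which jar 
supplies the protobuf runtime (the object name here is illustrative, not part 
of Shark):

    // Prints the jar that com.google.protobuf.GeneratedMessage is loaded
    // from. If it resolves to a protobuf-java 2.4.x jar (e.g. under Shark's
    // lib_managed directory), Hadoop 2.4.0's protoc-2.5.0-generated classes
    // will fail verification exactly as in the stack trace below.
    object ProtobufCheck {
      def main(args: Array[String]): Unit = {
        val cls = Class.forName("com.google.protobuf.GeneratedMessage")
        // getCodeSource can be null for classes on the bootstrap classpath
        val src = Option(cls.getProtectionDomain.getCodeSource)
        println(src.map(_.getLocation.toString).getOrElse("<bootstrap>"))
      }
    }

The workaround, for anyone who cares to pursue it, would be along the lines of 
putting protobuf-java 2.5.0 ahead of the 2.4.x jar on Shark's classpath, or 
rebuilding Shark against a protobuf 2.5.0 dependency.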

> Shark failed to start
> ---------------------
>
>                 Key: SPARK-1884
>                 URL: https://issues.apache.org/jira/browse/SPARK-1884
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 0.9.1
>         Environment: ubuntu 14.04, spark 0.9.1, hive 0.13.0, hadoop 2.4.0 
> (stand alone), scala 2.11.0
>            Reporter: Wei Cui
>            Priority: Blocker
>
> Hadoop, Hive, and Spark all work fine.
> When starting Shark, it fails with the following messages:
> Starting the Shark Command Line Client
> 14/05/19 16:47:21 INFO Configuration.deprecation: mapred.input.dir.recursive 
> is deprecated. Instead, use 
> mapreduce.input.fileinputformat.input.dir.recursive
> 14/05/19 16:47:21 INFO Configuration.deprecation: mapred.max.split.size is 
> deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
> 14/05/19 16:47:21 INFO Configuration.deprecation: mapred.min.split.size is 
> deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
> 14/05/19 16:47:21 INFO Configuration.deprecation: 
> mapred.min.split.size.per.rack is deprecated. Instead, use 
> mapreduce.input.fileinputformat.split.minsize.per.rack
> 14/05/19 16:47:21 INFO Configuration.deprecation: 
> mapred.min.split.size.per.node is deprecated. Instead, use 
> mapreduce.input.fileinputformat.split.minsize.per.node
> 14/05/19 16:47:21 INFO Configuration.deprecation: mapred.reduce.tasks is 
> deprecated. Instead, use mapreduce.job.reduces
> 14/05/19 16:47:21 INFO Configuration.deprecation: 
> mapred.reduce.tasks.speculative.execution is deprecated. Instead, use 
> mapreduce.reduce.speculative
> 14/05/19 16:47:22 WARN conf.Configuration: 
> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@48c724c:an attempt to 
> override final parameter: mapreduce.job.end-notification.max.retry.interval;  
> Ignoring.
> 14/05/19 16:47:22 WARN conf.Configuration: 
> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@48c724c:an attempt to 
> override final parameter: mapreduce.cluster.local.dir;  Ignoring.
> 14/05/19 16:47:22 WARN conf.Configuration: 
> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@48c724c:an attempt to 
> override final parameter: mapreduce.job.end-notification.max.attempts;  
> Ignoring.
> 14/05/19 16:47:22 WARN conf.Configuration: 
> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@48c724c:an attempt to 
> override final parameter: mapreduce.cluster.temp.dir;  Ignoring.
> Logging initialized using configuration in 
> jar:file:/usr/local/shark/lib_managed/jars/edu.berkeley.cs.shark/hive-common/hive-common-0.11.0-shark-0.9.1.jar!/hive-log4j.properties
> Hive history 
> file=/tmp/root/hive_job_log_root_14857@ubuntu_201405191647_897494215.txt
> 6.004: [GC 279616K->18440K(1013632K), 0.0438980 secs]
> 6.445: [Full GC 59125K->7949K(1013632K), 0.0685160 secs]
> Reloading cached RDDs from previous Shark sessions... (use -skipRddReload 
> flag to skip reloading)
> 7.535: [Full GC 104136K->13059K(1013632K), 0.0885820 secs]
> 8.459: [Full GC 61237K->18031K(1013632K), 0.0820400 secs]
> 8.662: [Full GC 29832K->8958K(1013632K), 0.0869700 secs]
> 8.751: [Full GC 13433K->8998K(1013632K), 0.0856520 secs]
> 10.435: [Full GC 72246K->14140K(1013632K), 0.1797530 secs]
> Exception in thread "main" org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.RuntimeException: Unable to instantiate 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient
>       at 
> org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1072)
>       at shark.memstore2.TableRecovery$.reloadRdds(TableRecovery.scala:49)
>       at shark.SharkCliDriver.<init>(SharkCliDriver.scala:283)
>       at shark.SharkCliDriver$.main(SharkCliDriver.scala:162)
>       at shark.SharkCliDriver.main(SharkCliDriver.scala)
> Caused by: java.lang.RuntimeException: Unable to instantiate 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient
>       at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1139)
>       at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:51)
>       at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:61)
>       at 
> org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2288)
>       at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2299)
>       at 
> org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1070)
>       ... 4 more
> Caused by: java.lang.reflect.InvocationTargetException
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>       at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>       at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>       at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>       at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1137)
>       ... 9 more
> Caused by: java.lang.VerifyError: class 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$SetOwnerRequestProto
>  overrides final method 
> getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
>       at java.lang.ClassLoader.defineClass1(Native Method)
>       at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
>       at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>       at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
>       at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>       at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>       at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>       at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>       at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>       at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>       at java.lang.Class.getDeclaredMethods0(Native Method)
>       at java.lang.Class.privateGetDeclaredMethods(Class.java:2531)
>       at java.lang.Class.privateGetPublicMethods(Class.java:2651)
>       at java.lang.Class.privateGetPublicMethods(Class.java:2661)
>       at java.lang.Class.getMethods(Class.java:1467)
>       at sun.misc.ProxyGenerator.generateClassFile(ProxyGenerator.java:426)
>       at sun.misc.ProxyGenerator.generateProxyClass(ProxyGenerator.java:323)
>       at java.lang.reflect.Proxy.getProxyClass0(Proxy.java:636)
>       at java.lang.reflect.Proxy.newProxyInstance(Proxy.java:722)
>       at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getProxy(ProtobufRpcEngine.java:92)
>       at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:537)
>       at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNNProxyWithClientProtocol(NameNodeProxies.java:334)
>       at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:241)
>       at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:141)
>       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:576)
>       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:521)
>       at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:146)
>       at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2397)
>       at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
>       at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
>       at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:167)
>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:352)
>       at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>       at org.apache.hadoop.hive.metastore.Warehouse.getFs(Warehouse.java:105)
>       at 
> org.apache.hadoop.hive.metastore.Warehouse.getDnsPath(Warehouse.java:137)
>       at 
> org.apache.hadoop.hive.metastore.Warehouse.getWhRoot(Warehouse.java:152)
>       at 
> org.apache.hadoop.hive.metastore.Warehouse.getDefaultDatabasePath(Warehouse.java:170)
>       at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:420)
>       at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
>       at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
>       at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:285)
>       at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:54)
>       at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:59)
>       at 
> org.apache.hadoop.hive.metastore.HiveMetaStore.newHMSHandler(HiveMetaStore.java:4102)
>       at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:121)
>       ... 14 more


