Sorry for the confusion. That change took effect, but another issue has arisen:

19/06/11 10:06:24 INFO spark.MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 0 to 172.10.3.161:56895
19/06/11 10:06:24 INFO spark.MapOutputTrackerMaster: Size of output statuses for shuffle 0 is 158 bytes
19/06/11 10:06:30 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 1.0 (TID 2, data1.test, executor 5): java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/util/Counter
        at org.apache.hadoop.metrics2.lib.MutableHistogram.<init>(MutableHistogram.java:42)
        at org.apache.hadoop.metrics2.lib.MutableRangeHistogram.<init>(MutableRangeHistogram.java:41)
        at org.apache.hadoop.metrics2.lib.MutableTimeHistogram.<init>(MutableTimeHistogram.java:42)
        at org.apache.hadoop.metrics2.lib.MutableTimeHistogram.<init>(MutableTimeHistogram.java:38)
        at org.apache.hadoop.metrics2.lib.DynamicMetricsRegistry.newTimeHistogram(DynamicMetricsRegistry.java:262)
        at org.apache.hadoop.hbase.io.MetricsIOSourceImpl.<init>(MetricsIOSourceImpl.java:49)
        at org.apache.hadoop.hbase.io.MetricsIOSourceImpl.<init>(MetricsIOSourceImpl.java:36)
        at org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceFactoryImpl.createIO(MetricsRegionServerSourceFactoryImpl.java:89)
        at org.apache.hadoop.hbase.io.MetricsIO.<init>(MetricsIO.java:32)
        at org.apache.hadoop.hbase.io.hfile.HFile.<clinit>(HFile.java:192)
        at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2$1.getNewWriter(HFileOutputFormat2.java:247)
        at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2$1.write(HFileOutputFormat2.java:194)
        at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2$1.write(HFileOutputFormat2.java:152)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply$mcV$sp(PairRDDFunctions.scala:1125)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply(PairRDDFunctions.scala:1123)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply(PairRDDFunctions.scala:1123)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1353)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1131)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1102)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.util.Counter
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 25 more
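
For reference: org.apache.hadoop.hbase.util.Counter ships in hbase-common of HBase 1.x (it was removed in HBase 2.0), and the hbase-hadoop2-compat jar references it from MutableHistogram, so a NoClassDefFoundError here usually means the executor classpath is missing, or mixing versions of, the hbase-common jar that provides the class. A rough check (a sketch; the HDP paths mirror the cp commands quoted below, adjust to your layout):

# List every jar in the HBase lib and Spark jars directories that actually
# contains the missing class.
for jar in /usr/hdp/2.6.5.0-292/hbase/lib/*.jar /usr/hdp/2.6.5.0-292/spark2/jars/*.jar; do
  if unzip -l "$jar" 2>/dev/null | grep -q 'org/apache/hadoop/hbase/util/Counter.class'; then
    echo "found in: $jar"
  fi
done

If nothing under spark2/jars matches, making the matching hbase-common jar visible to Spark alongside the compat jars would be the usual next step.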

------------------ Original Message ------------------
From: <[email protected]>;
Date: June 11, 2019 (Tuesday) 10:03;
To: "user"<[email protected]>;

Subject: Re: spark build cube, has a problem

nichunen:

First of all, thank you for your reply.

I tried it this way, but it did not take effect:
Hadoop cluster (five machines):
cp /usr/hdp/2.6.5.0-292/hive2/lib/{hbase-hadoop2-compat-1.1.2.2.6.5.0-292-tests.jar,hbase-hadoop2-compat-1.1.2.2.6.5.0-292.jar} /usr/hdp/2.6.5.0-292/spark2/jars/
cp /usr/hdp/2.6.5.0-292/hbase/lib/{hbase-hadoop-compat-1.1.2.2.6.5.0-292.jar,hbase-hadoop-compat.jar} /usr/hdp/2.6.5.0-292/spark2/jars/

Kylin machine:
cp /usr/hdp/2.6.5.0-292/hive2/lib/{hbase-hadoop2-compat-1.1.2.2.6.5.0-292-tests.jar,hbase-hadoop2-compat-1.1.2.2.6.5.0-292.jar} /usr/local/apache-kylin-2.5.2-bin-hbase1x/spark/jars/
cp /usr/hdp/2.6.5.0-292/hbase/lib/{hbase-hadoop-compat-1.1.2.2.6.5.0-292.jar,hbase-hadoop-compat.jar} /usr/local/apache-kylin-2.5.2-bin-hbase1x/spark/jars/

Is there any other way?
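
One thing not tried yet (a sketch, not a verified fix: Kylin's kylin.engine.spark-conf.* passthrough and Spark's extraClassPath settings are standard, but the exact paths below are assumptions based on this thread) is to put the HBase lib directory on the Spark driver and executor classpath instead of copying jars around:

# Append to Kylin's config; every kylin.engine.spark-conf.* entry is passed
# through to spark-submit for the cube build.
cat >> /usr/local/apache-kylin-2.5.2-bin-hbase1x/conf/kylin.properties <<'EOF'
kylin.engine.spark-conf.spark.driver.extraClassPath=/usr/hdp/2.6.5.0-292/hbase/lib/*
kylin.engine.spark-conf.spark.executor.extraClassPath=/usr/hdp/2.6.5.0-292/hbase/lib/*
EOF

This avoids mixing copied jars into spark2/jars, which can itself cause version conflicts.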


------------------ Original Message ------------------
From: "nichunen"<[email protected]>;
Date: June 7, 2019 (Friday) 11:01;
To: "[email protected]"<[email protected]>;

Subject: Re: spark build cube, has a problem

Hi,

I think it's a known issue with some versions of Hadoop. Copying hbase-hadoop2-compat-*.jar and hbase-hadoop-compat-*.jar in your env to $SPARK_HOME/jars/ may fix this.
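
After the copy, it is worth confirming that both compat jars are actually visible to Spark, and that only one version of each is present (a quick check; $SPARK_HOME stands for whichever Spark installation runs the cube build):

ls $SPARK_HOME/jars/ | grep -E 'hbase-hadoop2?-compat'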
Best regards,

Ni Chunen / George

On 05/31/2019 15:18, <[email protected]> wrote:

When I use Spark (HDP 2.6.5) to build a cube, a problem arises:
What should I do?


19/05/31 14:27:49 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 1.0 (TID 2, data1.test, executor 10): java.lang.ExceptionInInitializerError
        at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2$1.getNewWriter(HFileOutputFormat2.java:247)
        at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2$1.write(HFileOutputFormat2.java:194)
        at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2$1.write(HFileOutputFormat2.java:152)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply$mcV$sp(PairRDDFunctions.scala:1125)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply(PairRDDFunctions.scala:1123)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply(PairRDDFunctions.scala:1123)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1353)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1131)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1102)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: Could not create  interface org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceFactory Is the hadoop compatibility jar on the classpath?
        at org.apache.hadoop.hbase.CompatibilitySingletonFactory.getInstance(CompatibilitySingletonFactory.java:73)
        at org.apache.hadoop.hbase.io.MetricsIO.<init>(MetricsIO.java:31)
        at org.apache.hadoop.hbase.io.hfile.HFile.<clinit>(HFile.java:192)
        ... 15 more
Caused by: java.util.NoSuchElementException
        at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:365)
        at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
        at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
        at org.apache.hadoop.hbase.CompatibilitySingletonFactory.getInstance(CompatibilitySingletonFactory.java:59)
        ... 17 more
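
For context on this trace: CompatibilitySingletonFactory locates MetricsRegionServerSourceFactory through java.util.ServiceLoader, and both the implementation and its META-INF/services descriptor live in the hbase-hadoop2-compat jar, which is why a missing or mismatched compat jar surfaces as java.util.NoSuchElementException. One way to confirm the descriptor exists in the jars you copied (a sketch; adjust the directory to wherever the jars landed):

# Print the ServiceLoader descriptor from each copied compat jar.
for jar in /usr/hdp/2.6.5.0-292/spark2/jars/hbase-hadoop2-compat*.jar; do
  echo "== $jar"
  unzip -p "$jar" META-INF/services/org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceFactory
done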
