[jira] [Created] (HIVE-12888) TestSparkNegativeCliDriver does not run in Spark mode[Spark Branch]

2016-01-19 Thread Chengxiang Li (JIRA)
Chengxiang Li created HIVE-12888:


 Summary: TestSparkNegativeCliDriver does not run in Spark 
mode[Spark Branch]
 Key: HIVE-12888
 URL: https://issues.apache.org/jira/browse/HIVE-12888
 Project: Hive
  Issue Type: Bug
  Components: Spark
Affects Versions: 1.2.1
Reporter: Chengxiang Li
Assignee: Chengxiang Li


During testing, I found that TestSparkNegativeCliDriver actually runs in MR mode; it 
should be fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-12515) Clean the SparkCounters-related code after removing counter-based stats collection[Spark Branch]

2015-11-24 Thread Chengxiang Li (JIRA)
Chengxiang Li created HIVE-12515:


 Summary: Clean the SparkCounters-related code after removing counter-based stats collection[Spark Branch]
 Key: HIVE-12515
 URL: https://issues.apache.org/jira/browse/HIVE-12515
 Project: Hive
  Issue Type: Improvement
  Components: Spark
Reporter: Chengxiang Li
Assignee: Xuefu Zhang






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-11267) Combine equivalent leaf works in SparkWork[Spark Branch]

2015-07-15 Thread Chengxiang Li (JIRA)
Chengxiang Li created HIVE-11267:


 Summary: Combine equivalent leaf works in SparkWork[Spark Branch]
 Key: HIVE-11267
 URL: https://issues.apache.org/jira/browse/HIVE-11267
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
Priority: Minor


A SparkWork may contain multiple leaf works, as in a self-union query. If the 
subqueries are identical, we may combine them, execute just once, and then 
fetch twice in FetchTask.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-11204) Research on recent failed qtests[Spark Branch]

2015-07-08 Thread Chengxiang Li (JIRA)
Chengxiang Li created HIVE-11204:


 Summary: Research on recent failed qtests[Spark Branch]
 Key: HIVE-11204
 URL: https://issues.apache.org/jira/browse/HIVE-11204
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Affects Versions: spark-branch
Reporter: Chengxiang Li
Priority: Minor


Found some strange qtest failures in the HIVE-11053 Hive QA run. Since it's pretty 
clear that the failed qtests are not related to the HIVE-11053 patch, let's 
reproduce and investigate them here.
Failed tests:
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby_bigdata
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby_resolution
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby_sort_1_23
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_join_literals
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_mapreduce1
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_skewjoinopt2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_smb_mapjoin_15
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_union_remove_19
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_union_remove_4
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_union_remove_8
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_union_view



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-11082) Support multi edge between nodes in SparkPlan[Spark Branch]

2015-06-23 Thread Chengxiang Li (JIRA)
Chengxiang Li created HIVE-11082:


 Summary: Support multi edge between nodes in SparkPlan[Spark 
Branch]
 Key: HIVE-11082
 URL: https://issues.apache.org/jira/browse/HIVE-11082
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li


While working on the dynamic RDD caching optimization, we found that SparkPlan::connect 
throws an exception when we try to combine two works with the same child. Supporting 
multiple edges between nodes in SparkPlan would enable dynamic RDD caching in more use 
cases, such as self-join and self-union.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-11053) Add more tests for HIVE-10844[Spark Branch]

2015-06-19 Thread Chengxiang Li (JIRA)
Chengxiang Li created HIVE-11053:


 Summary: Add more tests for HIVE-10844[Spark Branch]
 Key: HIVE-11053
 URL: https://issues.apache.org/jira/browse/HIVE-11053
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Priority: Minor


Add some test cases for self-union, self-join, CTEs, and repeated sub-queries to 
verify the combining of equivalent works in HIVE-10844.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-10844) Combine equivalent Works for HoS[Spark Branch]

2015-05-27 Thread Chengxiang Li (JIRA)
Chengxiang Li created HIVE-10844:


 Summary: Combine equivalent Works for HoS[Spark Branch]
 Key: HIVE-10844
 URL: https://issues.apache.org/jira/browse/HIVE-10844
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li


Some Hive queries (like [TPCDS 
Q39|https://github.com/hortonworks/hive-testbench/blob/hive14/sample-queries-tpcds/query39.sql])
 may share the same subquery, which is translated into separate but equivalent 
Works in SparkWork. Combining these equivalent Works into a single one would 
help the query benefit from the subsequent dynamic RDD caching optimization.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-10550) Dynamic RDD caching optimization for HoS.[Spark Branch]

2015-04-30 Thread Chengxiang Li (JIRA)
Chengxiang Li created HIVE-10550:


 Summary: Dynamic RDD caching optimization for HoS.[Spark Branch]
 Key: HIVE-10550
 URL: https://issues.apache.org/jira/browse/HIVE-10550
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li


A Hive query may scan the same table multiple times, as in a self-join or 
self-union, or even share the same subquery; [TPC-DS 
Q39|https://github.com/hortonworks/hive-testbench/blob/hive14/sample-queries-tpcds/query39.sql]
 is an example. Spark supports caching RDD data: it keeps the computed RDD data in 
memory and serves it from memory directly the next time it is needed, which avoids 
the computation cost of that RDD (and all the cost of its dependencies) at the price 
of higher memory usage. By analyzing the query context, we should be able to 
determine which parts of the query can be shared, so that the generated Spark job 
can reuse the cached RDD.
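As a hedged illustration of the idea (plain Spark Java API with a hypothetical input path, not Hive's actual implementation), caching lets two downstream computations share one scan:
{code:java}
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.storage.StorageLevel;

public class RddCacheSketch {
  public static void main(String[] args) {
    JavaSparkContext sc =
        new JavaSparkContext(new SparkConf().setAppName("rdd-cache-sketch"));
    // A shared intermediate RDD, analogous to a subquery scanned by two branches.
    JavaRDD<String> shared = sc.textFile(args[0]).filter(line -> !line.isEmpty());
    shared.persist(StorageLevel.MEMORY_ONLY()); // computed once, then served from memory
    long total = shared.count();               // first action materializes the cache
    long distinct = shared.distinct().count(); // second action reuses cached partitions
    System.out.println(total + " rows, " + distinct + " distinct");
    sc.stop();
  }
}
{code}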



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-10235) Loop optimization for SIMD in ColumnDivideColumn.txt

2015-04-07 Thread Chengxiang Li (JIRA)
Chengxiang Li created HIVE-10235:


 Summary: Loop optimization for SIMD in ColumnDivideColumn.txt
 Key: HIVE-10235
 URL: https://issues.apache.org/jira/browse/HIVE-10235
 Project: Hive
  Issue Type: Sub-task
  Components: Vectorization
Affects Versions: 1.1.0
Reporter: Chengxiang Li
Assignee: Chengxiang Li
Priority: Minor


Found two loops that could be optimized into packed (SIMD) instructions during 
execution.
1. hasDivBy0 depends on the result of the previous iteration, which prevents the 
loop from being vectorized.
{code:java}
for(int i = 0; i != n; i++) {
  OperandType2 denom = vector2[i];
  outputVector[i] = vector1[0] OperatorSymbol denom;
  hasDivBy0 = hasDivBy0 || (denom == 0);
}
{code}
2. Same as HIVE-10180, the vector2\[0\] reference prevents the JVM from optimizing 
the loop into packed instructions.
{code:java}
for(int i = 0; i != n; i++) {
  outputVector[i] = vector1[i] OperatorSymbol vector2[0];
}
{code}
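A hedged sketch of the fix for the second loop (the same hoisting approach as HIVE-10180, keeping the template placeholders used above):
{code:java}
// Hoisting the loop-invariant vector2[0] into a local lets the JVM emit
// packed instructions for the loop body.
OperandType2 scalar = vector2[0];
for (int i = 0; i != n; i++) {
  outputVector[i] = vector1[i] OperatorSymbol scalar;
}
{code}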



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-10238) Loop optimization for SIMD in IfExprColumnColumn.txt

2015-04-07 Thread Chengxiang Li (JIRA)
Chengxiang Li created HIVE-10238:


 Summary: Loop optimization for SIMD in IfExprColumnColumn.txt
 Key: HIVE-10238
 URL: https://issues.apache.org/jira/browse/HIVE-10238
 Project: Hive
  Issue Type: Sub-task
  Components: Vectorization
Affects Versions: 1.1.0
Reporter: Chengxiang Li
Assignee: Jitendra Nath Pandey
Priority: Minor


The ?: operator in the following loop cannot be vectorized; we may transform 
it into a mathematical expression.
{code:java}
for(int j = 0; j != n; j++) {
  int i = sel[j];
  outputVector[i] = (vector1[i] == 1 ? vector2[i] : vector3[i]);
  outputIsNull[i] = (vector1[i] == 1 ?
  arg2ColVector.isNull[i] : arg3ColVector.isNull[i]);
}
{code} 
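One possible mathematical rewrite for the value assignment (a hedged sketch, assuming long columns where vector1[i] is always 0 or 1; the isNull assignment would need a similar treatment):
{code:java}
// Branch-free select: flag = 1 picks vector2[i], flag = 0 picks vector3[i].
for (int j = 0; j != n; j++) {
  int i = sel[j];
  long flag = vector1[i]; // assumed to be exactly 0 or 1
  outputVector[i] = flag * vector2[i] + (1 - flag) * vector3[i];
}
{code}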



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-10180) Loop optimization in ColumnArithmeticColumn.txt

2015-04-01 Thread Chengxiang Li (JIRA)
Chengxiang Li created HIVE-10180:


 Summary: Loop optimization in ColumnArithmeticColumn.txt
 Key: HIVE-10180
 URL: https://issues.apache.org/jira/browse/HIVE-10180
 Project: Hive
  Issue Type: Sub-task
Reporter: Chengxiang Li
Assignee: Chengxiang Li
Priority: Minor


The JVM is quite strict about the shape of code it will execute with SIMD 
instructions. Take a loop in DoubleColAddDoubleColumn.java, for example:
{code:java}
for (int i = 0; i != n; i++) {
  outputVector[i] = vector1[0] + vector2[i];
}
{code}
The vector1[0] reference prevents the JVM from executing this part of the code with 
vectorized instructions; we need to assign vector1[0] to a variable outside the 
loop and use that variable inside the loop.
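A minimal sketch of that fix:
{code:java}
// Hoist the invariant element so the loop body contains no indexed constant load.
double scalar = vector1[0];
for (int i = 0; i != n; i++) {
  outputVector[i] = scalar + vector2[i];
}
{code}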



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-10179) Optimization for SIMD instructions in Hive

2015-04-01 Thread Chengxiang Li (JIRA)
Chengxiang Li created HIVE-10179:


 Summary: Optimization for SIMD instructions in Hive
 Key: HIVE-10179
 URL: https://issues.apache.org/jira/browse/HIVE-10179
 Project: Hive
  Issue Type: Improvement
Reporter: Chengxiang Li
Assignee: Chengxiang Li


[SIMD|http://en.wikipedia.org/wiki/SIMD] instructions are available in most 
current CPUs, such as Intel's SSE2, SSE3, SSE4.x, AVX, and AVX2, and Hive would 
perform better if we vectorized its mathematical-manipulation code. This umbrella 
JIRA may contain, but is not limited to, subtasks like:
# Code shape adaptation: the current JVM is quite strict about the shape of code 
it can transform into SIMD instructions during execution.
# A new implementation of the mathematical-manipulation parts of Hive, designed 
to be optimized for SIMD instructions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-10052) HiveInputFormat implementations getsplits may lead to memory leak.[Spark Branch]

2015-03-22 Thread Chengxiang Li (JIRA)
Chengxiang Li created HIVE-10052:


 Summary: HiveInputFormat implementations getsplits may lead to 
memory leak.[Spark Branch]
 Key: HIVE-10052
 URL: https://issues.apache.org/jira/browse/HIVE-10052
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li


HiveInputFormat::init caches MapWork/ReduceWork in a ThreadLocal map. We need to 
clear that cache after getSplits returns from HiveInputFormat (or its 
implementations), or simply not cache MapWork/ReduceWork during generation.
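A hedged sketch of the leak pattern and the proposed clearing (class and member names here are illustrative, not the actual Hive code):
{code:java}
import java.util.HashMap;
import java.util.Map;

public class SplitCacheSketch {
  // A per-thread cache like this keeps its values reachable for the whole
  // lifetime of the thread unless it is cleared explicitly.
  private static final ThreadLocal<Map<String, Object>> WORK_CACHE =
      ThreadLocal.withInitial(HashMap::new);

  public Object[] getSplits(String path) {
    try {
      WORK_CACHE.get().put(path, loadWork(path)); // init() populates the cache
      return computeSplits(path);
    } finally {
      WORK_CACHE.get().clear(); // proposed fix: drop cached work after use
    }
  }

  private Object loadWork(String path) { return new Object(); }
  private Object[] computeSplits(String path) { return new Object[0]; }
}
{code}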



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-10006) RSC has a memory leak while executing multiple queries.[Spark Branch]

2015-03-18 Thread Chengxiang Li (JIRA)
Chengxiang Li created HIVE-10006:


 Summary: RSC has a memory leak while executing multiple queries.[Spark Branch]
 Key: HIVE-10006
 URL: https://issues.apache.org/jira/browse/HIVE-10006
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Affects Versions: 1.1.0
Reporter: Chengxiang Li
Assignee: Chengxiang Li
Priority: Critical


While executing queries with RSC, the MapWork/ReduceWork count increases over 
time, eventually leading to an OOM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9425) External Function Jar files are not available for Driver when running with yarn-cluster mode [Spark Branch]

2015-02-04 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9425:

Assignee: Rui Li  (was: Chengxiang Li)

 External Function Jar files are not available for Driver when running with 
 yarn-cluster mode [Spark Branch]
 ---

 Key: HIVE-9425
 URL: https://issues.apache.org/jira/browse/HIVE-9425
 Project: Hive
  Issue Type: Sub-task
  Components: spark-branch
Reporter: Xiaomin Zhang
Assignee: Rui Li

 15/01/20 00:27:31 INFO cluster.YarnClusterScheduler: 
 YarnClusterScheduler.postStartHook done
 15/01/20 00:27:31 ERROR spark.SparkContext: Error adding jar 
 (java.io.FileNotFoundException: hive-exec-0.15.0-SNAPSHOT.jar (No such file 
 or directory)), was the --addJars option used?
 15/01/20 00:27:31 ERROR spark.SparkContext: Error adding jar 
 (java.io.FileNotFoundException: opennlp-maxent-3.0.3.jar (No such file or 
 directory)), was the --addJars option used?
 15/01/20 00:27:31 ERROR spark.SparkContext: Error adding jar 
 (java.io.FileNotFoundException: bigbenchqueriesmr.jar (No such file or 
 directory)), was the --addJars option used?
 15/01/20 00:27:31 ERROR spark.SparkContext: Error adding jar 
 (java.io.FileNotFoundException: opennlp-tools-1.5.3.jar (No such file or 
 directory)), was the --addJars option used?
 15/01/20 00:27:31 ERROR spark.SparkContext: Error adding jar 
 (java.io.FileNotFoundException: jcl-over-slf4j-1.7.5.jar (No such file or 
 directory)), was the --addJars option used?
 15/01/20 00:27:31 INFO client.RemoteDriver: Received job request 
 fef081b0-5408-4804-9531-d131fdd628e6
 15/01/20 00:27:31 INFO Configuration.deprecation: mapred.max.split.size is 
 deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
 15/01/20 00:27:31 INFO Configuration.deprecation: mapred.min.split.size is 
 deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
 15/01/20 00:27:31 INFO client.RemoteDriver: Failed to run job 
 fef081b0-5408-4804-9531-d131fdd628e6
 org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: de.bankmark.bigbench.queries.q10.SentimentUDF
 Serialization trace:
 genericUDTF (org.apache.hadoop.hive.ql.plan.UDTFDesc)
 conf (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 invertedWorkGraph (org.apache.hadoop.hive.ql.plan.SparkWork)
   at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
   at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
 It seems the additional jar files are not uploaded to the DistributedCache, so 
 the Driver cannot access them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9410) ClassNotFoundException occurs during hive query case execution with UDF defined [Spark Branch]

2015-02-03 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14304382#comment-14304382
 ] 

Chengxiang Li commented on HIVE-9410:
-

Not really. As you can see from the patch, I stored the added jar paths in a list 
in JobContextImpl, and I add those jar paths to the current thread's context class 
loader each time a JobStatusJob executes. Since JobContextImpl is a singleton for 
the RemoteDriver service, threads serving later requests can get the jar paths as 
well.
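A hedged sketch of that flow (the helper and accessor names below are illustrative, not the actual patch):
{code:java}
import java.net.URL;
import java.net.URLClassLoader;
import java.util.List;

public class JarLoaderSketch {
  // Called before each job runs, e.g. installJobClassLoader(jobContext.getAddedJars()),
  // where jobContext stands in for the singleton JobContextImpl.
  static void installJobClassLoader(List<URL> addedJars) {
    ClassLoader parent = Thread.currentThread().getContextClassLoader();
    URLClassLoader withJars =
        new URLClassLoader(addedJars.toArray(new URL[0]), parent);
    Thread.currentThread().setContextClassLoader(withJars);
    // Classes from the added jars now resolve on this thread while the job runs.
  }
}
{code}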

 ClassNotFoundException occurs during hive query case execution with UDF 
 defined [Spark Branch]
 --

 Key: HIVE-9410
 URL: https://issues.apache.org/jira/browse/HIVE-9410
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
 Environment: CentOS 6.5
 JDK1.7
Reporter: Xin Hao
Assignee: Chengxiang Li
 Fix For: spark-branch, 1.1.0

 Attachments: HIVE-9410.1-spark.patch, HIVE-9410.2-spark.patch, 
 HIVE-9410.3-spark.patch, HIVE-9410.4-spark.patch, HIVE-9410.4-spark.patch


 We have a Hive query case with a UDF defined (i.e. BigBench cases Q10, Q18, 
 etc.). It passes in default Hive (on MR) mode but fails in Hive on Spark mode 
 (both Standalone and Yarn-Client). 
 Although we use 'add jar .jar;' to add the UDF jar explicitly, the issue 
 still exists. 
 BTW, if we put the UDF jar into the $HIVE_HOME/lib dir, the case passes.
 The detailed error message is below (NOTE: 
 de.bankmark.bigbench.queries.q10.SentimentUDF is the UDF contained in the 
 jar bigbenchqueriesmr.jar, and we have added a command like 'add jar 
 /location/to/bigbenchqueriesmr.jar;' into the .sql explicitly):
 {code}
 INFO  [pool-1-thread-1]: client.RemoteDriver (RemoteDriver.java:call(316)) - 
 Failed to run job 8dd120cb-1a4d-4d1c-ba31-61eac648c27d
 org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: de.bankmark.bigbench.queries.q10.SentimentUDF
 Serialization trace:
 genericUDTF (org.apache.hadoop.hive.ql.plan.UDTFDesc)
 conf (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.MapJoinOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.FilterOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 right (org.apache.commons.lang3.tuple.ImmutablePair)
 edgeProperties (org.apache.hadoop.hive.ql.plan.SparkWork)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 ...
 Caused by: java.lang.ClassNotFoundException: 
 de.bankmark.bigbench.queries.q10.SentimentUDF
 at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
 at 

[jira] [Created] (HIVE-9540) Enable infer_bucket_sort_dyn_part.q for TestMiniSparkOnYarnCliDriver test. [Spark Branch]

2015-02-01 Thread Chengxiang Li (JIRA)
Chengxiang Li created HIVE-9540:
---

 Summary: Enable infer_bucket_sort_dyn_part.q for 
TestMiniSparkOnYarnCliDriver test. [Spark Branch]
 Key: HIVE-9540
 URL: https://issues.apache.org/jira/browse/HIVE-9540
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li


The output of infer_bucket_sort_dyn_part.q changes on the TestMiniSparkOnYarnCliDriver 
test; we should figure out why and try to re-enable it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9211) Research on build mini HoS cluster on YARN for unit test[Spark Branch]

2015-02-01 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9211:

Attachment: HIVE-9211.6-spark.patch

[~xuefuz], the output of infer_bucket_sort_dyn_part.q changes during the test, 
so I removed it from miniSparkOnYarn.query.files and created HIVE-9540 to 
track it.

 Research on build mini HoS cluster on YARN for unit test[Spark Branch]
 --

 Key: HIVE-9211
 URL: https://issues.apache.org/jira/browse/HIVE-9211
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5
 Attachments: HIVE-9211.1-spark.patch, HIVE-9211.2-spark.patch, 
 HIVE-9211.2-spark.patch, HIVE-9211.3-spark.patch, HIVE-9211.4-spark.patch, 
 HIVE-9211.5-spark.patch, HIVE-9211.6-spark.patch


 HoS on YARN is a common use case in production environments; we'd better enable 
 unit tests for this case. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9542) SparkSessionImpl calculates wrong cores number in TestSparkCliDriver [Spark Branch]

2015-02-01 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9542:

Summary: SparkSessionImpl calculates wrong cores number in 
TestSparkCliDriver [Spark Branch]  (was: SparkSessionImpl calculates wrong 
number of cores in TestSparkCliDriver [Spark Branch])

 SparkSessionImpl calculates wrong cores number in TestSparkCliDriver [Spark 
 Branch]
 --

 Key: HIVE-9542
 URL: https://issues.apache.org/jira/browse/HIVE-9542
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li

 TestSparkCliDriver launches a local Spark cluster with [2,2,1024], which means 2 
 executors with 2 cores each; HoS gets the core number as 2 instead 
 of 4.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9211) Research on build mini HoS cluster on YARN for unit test[Spark Branch]

2015-02-01 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9211:

Attachment: HIVE-9211.7-spark.patch

TestSparkCliDriver launches a local Spark cluster with \[2,2,1024\], which means 2 
executors with 2 cores each, but HoS uses the spark.executor.cores value to 
calculate the total core count, so TestSparkCliDriver sets the reduce partition 
number to 2 instead of 4. The current core-count calculation logic reaches into 
Spark internals and is easy to break; we may handle it in a better way after 
SPARK-5080 is resolved. groupby2.q and join1.q fail during EXPLAIN queries for 
this reason, and HIVE-9542 has been created for the issue.

ql_rewrite_gbtoidx_cbo_2.q failed on TestMinimrCliDriver because I previously 
added a result-order tag to the qfile but did not update the TestMinimrCliDriver 
output.

Judging from the log file, the encryption_join_with_different_encryption_keys.q 
failure should not be related to this patch.

 Research on build mini HoS cluster on YARN for unit test[Spark Branch]
 --

 Key: HIVE-9211
 URL: https://issues.apache.org/jira/browse/HIVE-9211
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5
 Attachments: HIVE-9211.1-spark.patch, HIVE-9211.2-spark.patch, 
 HIVE-9211.2-spark.patch, HIVE-9211.3-spark.patch, HIVE-9211.4-spark.patch, 
 HIVE-9211.5-spark.patch, HIVE-9211.6-spark.patch, HIVE-9211.7-spark.patch


 HoS on YARN is a common use case in production environments; we'd better enable 
 unit tests for this case. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9542) SparkSessionImpl calculates wrong number of cores in TestSparkCliDriver [Spark Branch]

2015-02-01 Thread Chengxiang Li (JIRA)
Chengxiang Li created HIVE-9542:
---

 Summary: SparkSessionImpl calculates wrong number of cores 
in TestSparkCliDriver [Spark Branch]
 Key: HIVE-9542
 URL: https://issues.apache.org/jira/browse/HIVE-9542
 Project: Hive
  Issue Type: Sub-task
Reporter: Chengxiang Li


TestSparkCliDriver launches a local Spark cluster with [2,2,1024], which means 2 
executors with 2 cores each; HoS gets the core number as 2 instead 
of 4.
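For clarity, the intended arithmetic (a sketch; the names are illustrative):
{code:java}
int numExecutors = 2;     // from the local-cluster spec [2,2,1024]
int coresPerExecutor = 2; // what spark.executor.cores reports
int totalCores = numExecutors * coresPerExecutor; // 4: the value HoS should use
{code}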



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9211) Research on build mini HoS cluster on YARN for unit test[Spark Branch]

2015-01-30 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9211:

Attachment: HIVE-9211.5-spark.patch

 Research on build mini HoS cluster on YARN for unit test[Spark Branch]
 --

 Key: HIVE-9211
 URL: https://issues.apache.org/jira/browse/HIVE-9211
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5
 Attachments: HIVE-9211.1-spark.patch, HIVE-9211.2-spark.patch, 
 HIVE-9211.2-spark.patch, HIVE-9211.3-spark.patch, HIVE-9211.4-spark.patch, 
 HIVE-9211.5-spark.patch


 HoS on YARN is a common use case in production environments; we'd better enable 
 unit tests for this case. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9211) Research on build mini HoS cluster on YARN for unit test[Spark Branch]

2015-01-30 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14298642#comment-14298642
 ] 

Chengxiang Li commented on HIVE-9211:
-

I built Spark v1.2.0 with -Dhadoop.version=2.6.0 locally and removed the embedded 
Hadoop packages; it works.
Besides, why do we remove the Hadoop packages from the Spark assembly jar? To 
avoid potential Hadoop conflicts?

 Research on build mini HoS cluster on YARN for unit test[Spark Branch]
 --

 Key: HIVE-9211
 URL: https://issues.apache.org/jira/browse/HIVE-9211
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5
 Attachments: HIVE-9211.1-spark.patch, HIVE-9211.2-spark.patch, 
 HIVE-9211.2-spark.patch, HIVE-9211.3-spark.patch, HIVE-9211.4-spark.patch, 
 HIVE-9211.5-spark.patch


 HoS on YARN is a common use case in production environments; we'd better enable 
 unit tests for this case. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9449) Push YARN configuration to Spark while deploying Spark on YARN[Spark Branch]

2015-01-30 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9449:

Attachment: HiveonSparkconfiguration.pdf

 Push YARN configuration to Spark while deploying Spark on YARN[Spark Branch]
 

 Key: HIVE-9449
 URL: https://issues.apache.org/jira/browse/HIVE-9449
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
 Fix For: spark-branch

 Attachments: HIVE-9449.1-spark.patch, HIVE-9449.1-spark.patch, 
 HIVE-9449.2-spark.patch, HiveonSparkconfiguration.pdf


 We currently push only the Spark configuration and RSC configuration to Spark 
 when launching the Spark cluster; for Spark on YARN mode, Spark needs extra YARN 
 configuration to launch the cluster. Besides this, to support dynamic 
 configuration setting for the RSC/YARN configuration, we need to recreate the 
 SparkSession whenever the RSC/YARN configuration is updated, as those settings 
 may influence the Spark cluster deployment.
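A hedged sketch of the configuration push described above (a hypothetical helper, not the actual patch; Hadoop's Configuration, which HiveConf extends, is Iterable over its entries):
{code:java}
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.hive.conf.HiveConf;

public class ConfPushSketch {
  // Collect spark.* and yarn.* properties for the map handed to the cluster launcher.
  static Map<String, String> collectLaunchConf(HiveConf hiveConf) {
    Map<String, String> launchConf = new HashMap<>();
    for (Map.Entry<String, String> e : hiveConf) {
      String name = e.getKey();
      if (name.startsWith("spark.") || name.startsWith("yarn.")) {
        launchConf.put(name, e.getValue());
      }
    }
    return launchConf;
  }
}
{code}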



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9211) Research on build mini HoS cluster on YARN for unit test[Spark Branch]

2015-01-30 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14298426#comment-14298426
 ] 

Chengxiang Li commented on HIVE-9211:
-

Hi [~brocknoland], the missing class is from the commons-collections jar; I've left 
the exception stack trace at the end. The Spark assembly from the current Spark 
tarball does not include the commons-collections jar. I built Spark v1.2.0 in my 
own environment, and it does include the commons-collections jar.
{noformat}
Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/commons/collections/map/UnmodifiableMap
at 
org.apache.hadoop.conf.Configuration$DeprecationContext.<init>(Configuration.java:398)
at org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:438)
at 
org.apache.spark.deploy.yarn.YarnSparkHadoopUtil.newConfiguration(YarnSparkHadoopUtil.scala:57)
at 
org.apache.spark.deploy.SparkHadoopUtil.<init>(SparkHadoopUtil.scala:42)
at 
org.apache.spark.deploy.yarn.YarnSparkHadoopUtil.<init>(YarnSparkHadoopUtil.scala:45)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
at java.lang.Class.newInstance0(Class.java:374)
at java.lang.Class.newInstance(Class.java:327)
at 
org.apache.spark.deploy.SparkHadoopUtil$.liftedTree1$1(SparkHadoopUtil.scala:196)
at 
org.apache.spark.deploy.SparkHadoopUtil$.<init>(SparkHadoopUtil.scala:194)
at 
org.apache.spark.deploy.SparkHadoopUtil$.<clinit>(SparkHadoopUtil.scala)
at 
org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:115)
at 
org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:161)
at 
org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
Caused by: java.lang.ClassNotFoundException: 
org.apache.commons.collections.map.UnmodifiableMap
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang
{noformat}

 Research on build mini HoS cluster on YARN for unit test[Spark Branch]
 --

 Key: HIVE-9211
 URL: https://issues.apache.org/jira/browse/HIVE-9211
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5
 Attachments: HIVE-9211.1-spark.patch, HIVE-9211.2-spark.patch, 
 HIVE-9211.2-spark.patch, HIVE-9211.3-spark.patch, HIVE-9211.4-spark.patch


 HoS on YARN is a common use case in production environments; we'd better enable 
 unit tests for this case. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9211) Research on build mini HoS cluster on YARN for unit test[Spark Branch]

2015-01-29 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9211:

Attachment: HIVE-9211.4-spark.patch

[~brocknoland], what code base is our current Spark installation built from? I 
ran into an inconsistent jar dependency issue in testing, and updating the Spark 
installation to the latest Spark branch-1.2 code fixed it. The Hive spark branch 
now depends on Hadoop 2.6.0 for hadoop2, so we may need to build Spark 
consistently with it.

 Research on build mini HoS cluster on YARN for unit test[Spark Branch]
 --

 Key: HIVE-9211
 URL: https://issues.apache.org/jira/browse/HIVE-9211
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5
 Attachments: HIVE-9211.1-spark.patch, HIVE-9211.2-spark.patch, 
 HIVE-9211.2-spark.patch, HIVE-9211.3-spark.patch, HIVE-9211.4-spark.patch


 HoS on YARN is a common use case in production environments; we'd better enable 
 unit tests for this case. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9487) Make Remote Spark Context secure [Spark Branch]

2015-01-28 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14296395#comment-14296395
 ] 

Chengxiang Li commented on HIVE-9487:
-

+1, the patch looks good to me.

 Make Remote Spark Context secure [Spark Branch]
 ---

 Key: HIVE-9487
 URL: https://issues.apache.org/jira/browse/HIVE-9487
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Marcelo Vanzin
Assignee: Marcelo Vanzin
 Attachments: HIVE-9487.1-spark.patch


 The RSC currently uses an ad-hoc, insecure authentication mechanism. We 
 should instead use a proper auth mechanism and add encryption to the mix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9211) Research on build mini HoS cluster on YARN for unit test[Spark Branch]

2015-01-27 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9211:

Attachment: HIVE-9211.3-spark.patch

 Research on build mini HoS cluster on YARN for unit test[Spark Branch]
 --

 Key: HIVE-9211
 URL: https://issues.apache.org/jira/browse/HIVE-9211
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5
 Attachments: HIVE-9211.1-spark.patch, HIVE-9211.2-spark.patch, 
 HIVE-9211.2-spark.patch, HIVE-9211.3-spark.patch


 HoS on YARN is a common use case in production environments; we'd better enable 
 unit tests for this case. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9211) Research on build mini HoS cluster on YARN for unit test[Spark Branch]

2015-01-27 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293613#comment-14293613
 ] 

Chengxiang Li commented on HIVE-9211:
-

No log files were found in the container log directory, which is quite strange; 
this needs further research.

 Research on build mini HoS cluster on YARN for unit test[Spark Branch]
 --

 Key: HIVE-9211
 URL: https://issues.apache.org/jira/browse/HIVE-9211
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5
 Attachments: HIVE-9211.1-spark.patch, HIVE-9211.2-spark.patch, 
 HIVE-9211.2-spark.patch, HIVE-9211.3-spark.patch


 HoS on YARN is a common use case in production environments; we'd better enable 
 unit tests for this case. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9211) Research on build mini HoS cluster on YARN for unit test[Spark Branch]

2015-01-26 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9211:

Attachment: HIVE-9211.2-spark.patch

 Research on build mini HoS cluster on YARN for unit test[Spark Branch]
 --

 Key: HIVE-9211
 URL: https://issues.apache.org/jira/browse/HIVE-9211
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5
 Attachments: HIVE-9211.1-spark.patch, HIVE-9211.2-spark.patch


 HoS on YARN is a common use case in production environments; we'd better enable 
 unit tests for this case. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9211) Research on build mini HoS cluster on YARN for unit test[Spark Branch]

2015-01-26 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9211:

Attachment: HIVE-9211.2-spark.patch

 Research on build mini HoS cluster on YARN for unit test[Spark Branch]
 --

 Key: HIVE-9211
 URL: https://issues.apache.org/jira/browse/HIVE-9211
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5
 Attachments: HIVE-9211.1-spark.patch, HIVE-9211.2-spark.patch


 HoS on YARN is a common use case in production environments; we'd better enable 
 unit tests for this case. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9211) Research on build mini HoS cluster on YARN for unit test[Spark Branch]

2015-01-26 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9211:

Attachment: (was: HIVE-9211.2-spark.patch)

 Research on build mini HoS cluster on YARN for unit test[Spark Branch]
 --

 Key: HIVE-9211
 URL: https://issues.apache.org/jira/browse/HIVE-9211
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5
 Attachments: HIVE-9211.1-spark.patch, HIVE-9211.2-spark.patch


 HoS on YARN is a common use case in production environments; we'd better enable 
 unit tests for this case. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9211) Research on build mini HoS cluster on YARN for unit test[Spark Branch]

2015-01-26 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293057#comment-14293057
 ] 

Chengxiang Li commented on HIVE-9211:
-

I work on Linux. 

 Research on build mini HoS cluster on YARN for unit test[Spark Branch]
 --

 Key: HIVE-9211
 URL: https://issues.apache.org/jira/browse/HIVE-9211
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5
 Attachments: HIVE-9211.1-spark.patch, HIVE-9211.2-spark.patch


 HoS on YARN is a common use case in production environments; we'd better enable 
 unit tests for this case. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-9425) External Function Jar files are not available for Driver when running with yarn-cluster mode [Spark Branch]

2015-01-26 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li reassigned HIVE-9425:
---

Assignee: Chengxiang Li

 External Function Jar files are not available for Driver when running with 
 yarn-cluster mode [Spark Branch]
 ---

 Key: HIVE-9425
 URL: https://issues.apache.org/jira/browse/HIVE-9425
 Project: Hive
  Issue Type: Sub-task
  Components: spark-branch
Reporter: Xiaomin Zhang
Assignee: Chengxiang Li

 15/01/20 00:27:31 INFO cluster.YarnClusterScheduler: 
 YarnClusterScheduler.postStartHook done
 15/01/20 00:27:31 ERROR spark.SparkContext: Error adding jar 
 (java.io.FileNotFoundException: hive-exec-0.15.0-SNAPSHOT.jar (No such file 
 or directory)), was the --addJars option used?
 15/01/20 00:27:31 ERROR spark.SparkContext: Error adding jar 
 (java.io.FileNotFoundException: opennlp-maxent-3.0.3.jar (No such file or 
 directory)), was the --addJars option used?
 15/01/20 00:27:31 ERROR spark.SparkContext: Error adding jar 
 (java.io.FileNotFoundException: bigbenchqueriesmr.jar (No such file or 
 directory)), was the --addJars option used?
 15/01/20 00:27:31 ERROR spark.SparkContext: Error adding jar 
 (java.io.FileNotFoundException: opennlp-tools-1.5.3.jar (No such file or 
 directory)), was the --addJars option used?
 15/01/20 00:27:31 ERROR spark.SparkContext: Error adding jar 
 (java.io.FileNotFoundException: jcl-over-slf4j-1.7.5.jar (No such file or 
 directory)), was the --addJars option used?
 15/01/20 00:27:31 INFO client.RemoteDriver: Received job request 
 fef081b0-5408-4804-9531-d131fdd628e6
 15/01/20 00:27:31 INFO Configuration.deprecation: mapred.max.split.size is 
 deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
 15/01/20 00:27:31 INFO Configuration.deprecation: mapred.min.split.size is 
 deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
 15/01/20 00:27:31 INFO client.RemoteDriver: Failed to run job 
 fef081b0-5408-4804-9531-d131fdd628e6
 org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: de.bankmark.bigbench.queries.q10.SentimentUDF
 Serialization trace:
 genericUDTF (org.apache.hadoop.hive.ql.plan.UDTFDesc)
 conf (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 invertedWorkGraph (org.apache.hadoop.hive.ql.plan.SparkWork)
   at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
   at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
 It seems the additional jar files are not uploaded to the DistributedCache, so 
 the Driver cannot access them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9211) Research on build mini HoS cluster on YARN for unit test[Spark Branch]

2015-01-26 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9211:

Status: Patch Available  (was: Open)

 Research on build mini HoS cluster on YARN for unit test[Spark Branch]
 --

 Key: HIVE-9211
 URL: https://issues.apache.org/jira/browse/HIVE-9211
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5
 Attachments: HIVE-9211.1-spark.patch, HIVE-9211.2-spark.patch


 HoS on YARN is a common use case in production environments; we'd better enable 
 unit tests for this case. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9211) Research on build mini HoS cluster on YARN for unit test[Spark Branch]

2015-01-26 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293022#comment-14293022
 ] 

Chengxiang Li commented on HIVE-9211:
-

From hive.log, it seems some error happens in the YARN container, but I can't 
reproduce it on my own machine. The container log is located at 
\{HIVE_HOME\}/itests/qtest-spark/target/sparkOnYarn/SparkOnYarn-logDir-nm-\*\_\*/application\_\*/container\_\*.
 [~xuefuz], is there any chance these container logs can be accessed through the 
HTTP service, like hive.log?

 Research on build mini HoS cluster on YARN for unit test[Spark Branch]
 --

 Key: HIVE-9211
 URL: https://issues.apache.org/jira/browse/HIVE-9211
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5
 Attachments: HIVE-9211.1-spark.patch, HIVE-9211.2-spark.patch


 HoS on YARN is a common use case in production environments; we'd better enable 
 unit tests for this case. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9211) Research on build mini HoS cluster on YARN for unit test[Spark Branch]

2015-01-26 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293070#comment-14293070
 ] 

Chengxiang Li commented on HIVE-9211:
-

Great, thanks, [~brocknoland].

 Research on build mini HoS cluster on YARN for unit test[Spark Branch]
 --

 Key: HIVE-9211
 URL: https://issues.apache.org/jira/browse/HIVE-9211
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5
 Attachments: HIVE-9211.1-spark.patch, HIVE-9211.2-spark.patch, 
 HIVE-9211.2-spark.patch


 HoS on YARN is a common use case in production environments; we'd better enable 
 unit tests for this case. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9449) Push YARN configuration to Spark while deploying Spark on YARN[Spark Branch]

2015-01-25 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9449:

Attachment: HIVE-9449.2-spark.patch

 Push YARN configuration to Spark while deploying Spark on YARN[Spark Branch]
 

 Key: HIVE-9449
 URL: https://issues.apache.org/jira/browse/HIVE-9449
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
 Attachments: HIVE-9449.1-spark.patch, HIVE-9449.1-spark.patch, 
 HIVE-9449.2-spark.patch


 We currently push only the Spark configuration and RSC configuration to Spark 
 when launching the Spark cluster; for Spark on YARN mode, Spark needs extra YARN 
 configuration to launch the cluster. Besides this, to support dynamic 
 configuration setting for the RSC/YARN configuration, we need to recreate the 
 SparkSession whenever the RSC/YARN configuration is updated, as those settings 
 may influence the Spark cluster deployment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9211) Research on build mini HoS cluster on YARN for unit test[Spark Branch]

2015-01-25 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9211:

Attachment: HIVE-9211.1-spark.patch

 Research on build mini HoS cluster on YARN for unit test[Spark Branch]
 --

 Key: HIVE-9211
 URL: https://issues.apache.org/jira/browse/HIVE-9211
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5
 Attachments: HIVE-9211.1-spark.patch


 HoS on YARN is a common use case in production environments; we'd better enable 
 unit tests for this case. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9370) SparkJobMonitor timeout as sortByKey would launch extra Spark job before the original job gets submitted [Spark Branch]

2015-01-22 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9370:

Attachment: HIVE-9370.1-spark.patch

 SparkJobMonitor timeout as sortByKey would launch extra Spark job before 
 the original job gets submitted [Spark Branch]
 --

 Key: HIVE-9370
 URL: https://issues.apache.org/jira/browse/HIVE-9370
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: yuyun.chen
Assignee: Chengxiang Li
 Attachments: HIVE-9370.1-spark.patch


 Enabled Hive on Spark and ran BigBench Query 8, then got the following 
 exception:
 2015-01-14 11:43:46,057 INFO  [main]: impl.RemoteSparkJobStatus 
 (RemoteSparkJobStatus.java:getSparkJobInfo(143)) - Job hasn't been submitted 
 after 30s. Aborting it.
 2015-01-14 11:43:46,061 INFO  [main]: impl.RemoteSparkJobStatus 
 (RemoteSparkJobStatus.java:getSparkJobInfo(143)) - Job hasn't been submitted 
 after 30s. Aborting it.
 2015-01-14 11:43:46,061 ERROR [main]: status.SparkJobMonitor 
 (SessionState.java:printError(839)) - Status: Failed
 2015-01-14 11:43:46,062 INFO  [main]: log.PerfLogger 
 (PerfLogger.java:PerfLogEnd(148)) - /PERFLOG method=SparkRunJob 
 start=1421206996052 end=1421207026062 duration=30010 
 from=org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) - 15/01/14 11:43:46 INFO RemoteDriver: Failed 
 to run job 0a9a7782-0e0b-4561-8468-959a6d8df0a3
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) - java.lang.InterruptedException
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at java.lang.Object.wait(Native 
 Method)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 java.lang.Object.wait(Object.java:503)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.scheduler.JobWaiter.awaitResult(JobWaiter.scala:73)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:514)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1282)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1300)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1314)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1328)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.rdd.RDD.collect(RDD.scala:780)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.RangePartitioner$.sketch(Partitioner.scala:262)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.RangePartitioner.<init>(Partitioner.scala:124)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.rdd.OrderedRDDFunctions.sortByKey(OrderedRDDFunctions.scala:63)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:894)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:864)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.hadoop.hive.ql.exec.spark.SortByShuffler.shuffle(SortByShuffler.java:48)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.hadoop.hive.ql.exec.spark.ShuffleTran.transform(ShuffleTran.java:45)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 

[jira] [Updated] (HIVE-9370) SparkJobMonitor timeout as sortByKey would launch extra Spark job before the original job gets submitted [Spark Branch]

2015-01-22 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9370:

Status: Patch Available  (was: Open)

 SparkJobMonitor timeout as sortByKey would launch extra Spark job before 
 the original job gets submitted [Spark Branch]
 --

 Key: HIVE-9370
 URL: https://issues.apache.org/jira/browse/HIVE-9370
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: yuyun.chen
Assignee: Chengxiang Li
 Attachments: HIVE-9370.1-spark.patch


 Enabled Hive on Spark and ran BigBench Query 8, then got the following 
 exception:
 2015-01-14 11:43:46,057 INFO  [main]: impl.RemoteSparkJobStatus 
 (RemoteSparkJobStatus.java:getSparkJobInfo(143)) - Job hasn't been submitted 
 after 30s. Aborting it.
 2015-01-14 11:43:46,061 INFO  [main]: impl.RemoteSparkJobStatus 
 (RemoteSparkJobStatus.java:getSparkJobInfo(143)) - Job hasn't been submitted 
 after 30s. Aborting it.
 2015-01-14 11:43:46,061 ERROR [main]: status.SparkJobMonitor 
 (SessionState.java:printError(839)) - Status: Failed
 2015-01-14 11:43:46,062 INFO  [main]: log.PerfLogger 
 (PerfLogger.java:PerfLogEnd(148)) - /PERFLOG method=SparkRunJob 
 start=1421206996052 end=1421207026062 duration=30010 
 from=org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) - 15/01/14 11:43:46 INFO RemoteDriver: Failed 
 to run job 0a9a7782-0e0b-4561-8468-959a6d8df0a3
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) - java.lang.InterruptedException
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at java.lang.Object.wait(Native 
 Method)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 java.lang.Object.wait(Object.java:503)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.scheduler.JobWaiter.awaitResult(JobWaiter.scala:73)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:514)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1282)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1300)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1314)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1328)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.rdd.RDD.collect(RDD.scala:780)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.RangePartitioner$.sketch(Partitioner.scala:262)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.RangePartitioner.<init>(Partitioner.scala:124)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.rdd.OrderedRDDFunctions.sortByKey(OrderedRDDFunctions.scala:63)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:894)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:864)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.hadoop.hive.ql.exec.spark.SortByShuffler.shuffle(SortByShuffler.java:48)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.hadoop.hive.ql.exec.spark.ShuffleTran.transform(ShuffleTran.java:45)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 

[jira] [Updated] (HIVE-9410) ClassNotFoundException occurs during hive query case execution with UDF defined [Spark Branch]

2015-01-22 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9410:

Attachment: HIVE-9410.3-spark.patch

 ClassNotFoundException occurs during hive query case execution with UDF 
 defined [Spark Branch]
 --

 Key: HIVE-9410
 URL: https://issues.apache.org/jira/browse/HIVE-9410
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
 Environment: CentOS 6.5
 JDK1.7
Reporter: Xin Hao
Assignee: Chengxiang Li
 Attachments: HIVE-9410.1-spark.patch, HIVE-9410.2-spark.patch, 
 HIVE-9410.3-spark.patch


 We have a hive query case with a UDF defined (i.e. BigBench case Q10, Q18 
 etc.). It passes in default Hive (on MR) mode, while it fails in Hive 
 On Spark mode (both Standalone and Yarn-Client). 
 Although we use 'add jar .jar;' to add the UDF jar explicitly, the issue 
 still exists. 
 BTW, if we put the UDF jar into the $HIVE_HOME/lib dir, the case passes.
 The detailed error message is below (NOTE: 
 de.bankmark.bigbench.queries.q10.SentimentUDF is the UDF contained in the 
 jar bigbenchqueriesmr.jar, and we added a command like 'add jar 
 /location/to/bigbenchqueriesmr.jar;' into the .sql explicitly)
 INFO  [pool-1-thread-1]: client.RemoteDriver (RemoteDriver.java:call(316)) - 
 Failed to run job 8dd120cb-1a4d-4d1c-ba31-61eac648c27d
 org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: de.bankmark.bigbench.queries.q10.SentimentUDF
 Serialization trace:
 genericUDTF (org.apache.hadoop.hive.ql.plan.UDTFDesc)
 conf (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.MapJoinOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.FilterOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 right (org.apache.commons.lang3.tuple.ImmutablePair)
 edgeProperties (org.apache.hadoop.hive.ql.plan.SparkWork)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 ...
 Caused by: java.lang.ClassNotFoundException: 
 de.bankmark.bigbench.queries.q10.SentimentUDF
 at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
 at java.lang.Class.forName0(Native Method)
 at java.lang.Class.forName(Class.java:270)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:136)
 ... 55 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9410) ClassNotFoundException occurs during hive query case execution with UDF defined [Spark Branch]

2015-01-22 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14287161#comment-14287161
 ] 

Chengxiang Li commented on HIVE-9410:
-

[~xuefuz], all contrib-related qtests are launched with TestContribCliDriver, so 
we cannot enable these qtests in TestSparkCliDriver directly. I'm not sure how 
to do it yet, and it should be beyond this JIRA's scope; I think we may create 
another JIRA to track it.

 ClassNotFoundException occurs during hive query case execution with UDF 
 defined [Spark Branch]
 --

 Key: HIVE-9410
 URL: https://issues.apache.org/jira/browse/HIVE-9410
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
 Environment: CentOS 6.5
 JDK1.7
Reporter: Xin Hao
Assignee: Chengxiang Li
 Attachments: HIVE-9410.1-spark.patch, HIVE-9410.2-spark.patch, 
 HIVE-9410.3-spark.patch


 We have a hive query case with a UDF defined (i.e. BigBench case Q10, Q18 
 etc.). It passes in default Hive (on MR) mode, while it fails in Hive 
 On Spark mode (both Standalone and Yarn-Client). 
 Although we use 'add jar .jar;' to add the UDF jar explicitly, the issue 
 still exists. 
 BTW, if we put the UDF jar into the $HIVE_HOME/lib dir, the case passes.
 The detailed error message is below (NOTE: 
 de.bankmark.bigbench.queries.q10.SentimentUDF is the UDF contained in the 
 jar bigbenchqueriesmr.jar, and we added a command like 'add jar 
 /location/to/bigbenchqueriesmr.jar;' into the .sql explicitly)
 INFO  [pool-1-thread-1]: client.RemoteDriver (RemoteDriver.java:call(316)) - 
 Failed to run job 8dd120cb-1a4d-4d1c-ba31-61eac648c27d
 org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: de.bankmark.bigbench.queries.q10.SentimentUDF
 Serialization trace:
 genericUDTF (org.apache.hadoop.hive.ql.plan.UDTFDesc)
 conf (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.MapJoinOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.FilterOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 right (org.apache.commons.lang3.tuple.ImmutablePair)
 edgeProperties (org.apache.hadoop.hive.ql.plan.SparkWork)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 ...
 Caused by: java.lang.ClassNotFoundException: 
 de.bankmark.bigbench.queries.q10.SentimentUDF
 at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
 at java.lang.Class.forName0(Native Method)
 at java.lang.Class.forName(Class.java:270)
 at 
 

[jira] [Commented] (HIVE-9370) SparkJobMonitor timeout as sortByKey would launch extra Spark job before original job get submitted [Spark Branch]

2015-01-22 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14288684#comment-14288684
 ] 

Chengxiang Li commented on HIVE-9370:
-

RSC has a timeout at the netty level, so if the remote spark context does not 
respond at the netty level, we would get the exception. One open question is 
that the SparkSession is still alive: the user could still submit queries, but 
they would fail to execute as the RPC channel is already closed, so the user 
needs to restart the Hive CLI or use a tricky way to create a new remote spark 
context, like updating the spark configuration.
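
A minimal sketch of the netty-level timeout described above, assuming Netty 4's
ReadTimeoutHandler; the class name RpcChannelInitializer and the 60-second
window are illustrative, not the actual RSC pipeline code:

    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.handler.timeout.ReadTimeoutHandler;

    public class RpcChannelInitializer extends ChannelInitializer<SocketChannel> {
      @Override
      protected void initChannel(SocketChannel ch) {
        // If the remote driver sends nothing for 60 seconds, Netty fires a
        // ReadTimeoutException and closes the channel; later queries would
        // then fail with a closed RPC channel even though the SparkSession
        // object on the Hive side still looks alive.
        ch.pipeline().addLast(new ReadTimeoutHandler(60));
        // ... codec and message handlers would be added after this ...
      }
    }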

 SparkJobMonitor timeout as sortByKey would launch extra Spark job before 
 original job get submitted [Spark Branch]
 --

 Key: HIVE-9370
 URL: https://issues.apache.org/jira/browse/HIVE-9370
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: yuyun.chen
Assignee: Chengxiang Li
 Fix For: spark-branch

 Attachments: HIVE-9370.1-spark.patch


 Enabled Hive on Spark and ran BigBench Query 8, then got the following 
 exception:
 2015-01-14 11:43:46,057 INFO  [main]: impl.RemoteSparkJobStatus 
 (RemoteSparkJobStatus.java:getSparkJobInfo(143)) - Job hasn't been submitted 
 after 30s. Aborting it.
 2015-01-14 11:43:46,061 INFO  [main]: impl.RemoteSparkJobStatus 
 (RemoteSparkJobStatus.java:getSparkJobInfo(143)) - Job hasn't been submitted 
 after 30s. Aborting it.
 2015-01-14 11:43:46,061 ERROR [main]: status.SparkJobMonitor 
 (SessionState.java:printError(839)) - Status: Failed
 2015-01-14 11:43:46,062 INFO  [main]: log.PerfLogger 
 (PerfLogger.java:PerfLogEnd(148)) - /PERFLOG method=SparkRunJob 
 start=1421206996052 end=1421207026062 duration=30010 
 from=org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) - 15/01/14 11:43:46 INFO RemoteDriver: Failed 
 to run job 0a9a7782-0e0b-4561-8468-959a6d8df0a3
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) - java.lang.InterruptedException
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at java.lang.Object.wait(Native 
 Method)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 java.lang.Object.wait(Object.java:503)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.scheduler.JobWaiter.awaitResult(JobWaiter.scala:73)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:514)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1282)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1300)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1314)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1328)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.rdd.RDD.collect(RDD.scala:780)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.RangePartitioner$.sketch(Partitioner.scala:262)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.RangePartitioner.<init>(Partitioner.scala:124)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.rdd.OrderedRDDFunctions.sortByKey(OrderedRDDFunctions.scala:63)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:894)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:864)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 

[jira] [Commented] (HIVE-9410) ClassNotFoundException occurs during hive query case execution with UDF defined [Spark Branch]

2015-01-22 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14288797#comment-14288797
 ] 

Chengxiang Li commented on HIVE-9410:
-

Yes, Spark would address this issue more properly; I've created SPARK-5377 for 
this. About the unit test, udf_example_add.q is not suitable to verify this 
issue, as Hive does not need to load the UDF class during SparkWork 
serialization; I would try to enable some UDTF unit tests for this.
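
For illustration, a minimal sketch of how Kryo resolves classes through a class
loader; this is hypothetical glue code, not the HIVE-9410 patch itself.
Kryo.setClassLoader is a real Kryo API (Hive ships it under the shaded
org.apache.hive.com.esotericsoftware package), while the jar path and the class
name KryoLoaderSketch are made up:

    import java.net.URL;
    import java.net.URLClassLoader;
    import com.esotericsoftware.kryo.Kryo;

    public class KryoLoaderSketch {
      public static void main(String[] args) throws Exception {
        // Kryo resolves class names found in the stream through the class
        // loader set on the Kryo instance. If the jar added via 'add jar'
        // is not visible to that loader, readName() throws the
        // ClassNotFoundException shown in the trace above.
        URLClassLoader udfLoader = new URLClassLoader(
            new URL[] { new URL("file:///location/to/bigbenchqueriesmr.jar") },
            Thread.currentThread().getContextClassLoader());
        Kryo kryo = new Kryo();
        kryo.setClassLoader(udfLoader); // make the UDF class resolvable
      }
    }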

 ClassNotFoundException occurs during hive query case execution with UDF 
 defined [Spark Branch]
 --

 Key: HIVE-9410
 URL: https://issues.apache.org/jira/browse/HIVE-9410
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
 Environment: CentOS 6.5
 JDK1.7
Reporter: Xin Hao
Assignee: Chengxiang Li
 Attachments: HIVE-9410.1-spark.patch, HIVE-9410.2-spark.patch, 
 HIVE-9410.3-spark.patch


 We have a hive query case with a UDF defined (i.e. BigBench case Q10, Q18 
 etc.). It passes in default Hive (on MR) mode, while it fails in Hive 
 On Spark mode (both Standalone and Yarn-Client). 
 Although we use 'add jar .jar;' to add the UDF jar explicitly, the issue 
 still exists. 
 BTW, if we put the UDF jar into the $HIVE_HOME/lib dir, the case passes.
 The detailed error message is below (NOTE: 
 de.bankmark.bigbench.queries.q10.SentimentUDF is the UDF contained in the 
 jar bigbenchqueriesmr.jar, and we added a command like 'add jar 
 /location/to/bigbenchqueriesmr.jar;' into the .sql explicitly)
 INFO  [pool-1-thread-1]: client.RemoteDriver (RemoteDriver.java:call(316)) - 
 Failed to run job 8dd120cb-1a4d-4d1c-ba31-61eac648c27d
 org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: de.bankmark.bigbench.queries.q10.SentimentUDF
 Serialization trace:
 genericUDTF (org.apache.hadoop.hive.ql.plan.UDTFDesc)
 conf (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.MapJoinOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.FilterOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 right (org.apache.commons.lang3.tuple.ImmutablePair)
 edgeProperties (org.apache.hadoop.hive.ql.plan.SparkWork)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 ...
 Caused by: java.lang.ClassNotFoundException: 
 de.bankmark.bigbench.queries.q10.SentimentUDF
 at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
 at java.lang.Class.forName0(Native Method)
 at java.lang.Class.forName(Class.java:270)
 at 
 

[jira] [Commented] (HIVE-9410) ClassNotFoundException occurs during hive query case execution with UDF defined [Spark Branch]

2015-01-22 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14288848#comment-14288848
 ] 

Chengxiang Li commented on HIVE-9410:
-

As ser/deser between the Hive driver and the remote spark context is beyond 
Spark, we still need this fix even after SPARK-5377 is resolved.

 ClassNotFoundException occurs during hive query case execution with UDF 
 defined [Spark Branch]
 --

 Key: HIVE-9410
 URL: https://issues.apache.org/jira/browse/HIVE-9410
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
 Environment: CentOS 6.5
 JDK1.7
Reporter: Xin Hao
Assignee: Chengxiang Li
 Attachments: HIVE-9410.1-spark.patch, HIVE-9410.2-spark.patch, 
 HIVE-9410.3-spark.patch


 We have a hive query case with a UDF defined (i.e. BigBench case Q10, Q18 
 etc.). It passes in default Hive (on MR) mode, while it fails in Hive 
 On Spark mode (both Standalone and Yarn-Client). 
 Although we use 'add jar .jar;' to add the UDF jar explicitly, the issue 
 still exists. 
 BTW, if we put the UDF jar into the $HIVE_HOME/lib dir, the case passes.
 The detailed error message is below (NOTE: 
 de.bankmark.bigbench.queries.q10.SentimentUDF is the UDF contained in the 
 jar bigbenchqueriesmr.jar, and we added a command like 'add jar 
 /location/to/bigbenchqueriesmr.jar;' into the .sql explicitly)
 INFO  [pool-1-thread-1]: client.RemoteDriver (RemoteDriver.java:call(316)) - 
 Failed to run job 8dd120cb-1a4d-4d1c-ba31-61eac648c27d
 org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: de.bankmark.bigbench.queries.q10.SentimentUDF
 Serialization trace:
 genericUDTF (org.apache.hadoop.hive.ql.plan.UDTFDesc)
 conf (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.MapJoinOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.FilterOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 right (org.apache.commons.lang3.tuple.ImmutablePair)
 edgeProperties (org.apache.hadoop.hive.ql.plan.SparkWork)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 ...
 Caused by: java.lang.ClassNotFoundException: 
 de.bankmark.bigbench.queries.q10.SentimentUDF
 at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
 at java.lang.Class.forName0(Native Method)
 at java.lang.Class.forName(Class.java:270)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:136)
 ... 55 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9410) ClassNotFoundException occurs during hive query case execution with UDF defined [Spark Branch]

2015-01-22 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9410:

Attachment: (was: HIVE-9410.4-spark.patch)

 ClassNotFoundException occurs during hive query case execution with UDF 
 defined [Spark Branch]
 --

 Key: HIVE-9410
 URL: https://issues.apache.org/jira/browse/HIVE-9410
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
 Environment: CentOS 6.5
 JDK1.7
Reporter: Xin Hao
Assignee: Chengxiang Li
 Attachments: HIVE-9410.1-spark.patch, HIVE-9410.2-spark.patch, 
 HIVE-9410.3-spark.patch


 We have a hive query case with a UDF defined (i.e. BigBench case Q10, Q18 
 etc.). It passes in default Hive (on MR) mode, while it fails in Hive 
 On Spark mode (both Standalone and Yarn-Client). 
 Although we use 'add jar .jar;' to add the UDF jar explicitly, the issue 
 still exists. 
 BTW, if we put the UDF jar into the $HIVE_HOME/lib dir, the case passes.
 The detailed error message is below (NOTE: 
 de.bankmark.bigbench.queries.q10.SentimentUDF is the UDF contained in the 
 jar bigbenchqueriesmr.jar, and we added a command like 'add jar 
 /location/to/bigbenchqueriesmr.jar;' into the .sql explicitly)
 INFO  [pool-1-thread-1]: client.RemoteDriver (RemoteDriver.java:call(316)) - 
 Failed to run job 8dd120cb-1a4d-4d1c-ba31-61eac648c27d
 org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: de.bankmark.bigbench.queries.q10.SentimentUDF
 Serialization trace:
 genericUDTF (org.apache.hadoop.hive.ql.plan.UDTFDesc)
 conf (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.MapJoinOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.FilterOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 right (org.apache.commons.lang3.tuple.ImmutablePair)
 edgeProperties (org.apache.hadoop.hive.ql.plan.SparkWork)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 ...
 Caused by: java.lang.ClassNotFoundException: 
 de.bankmark.bigbench.queries.q10.SentimentUDF
 at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
 at java.lang.Class.forName0(Native Method)
 at java.lang.Class.forName(Class.java:270)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:136)
 ... 55 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9410) ClassNotFoundException occurs during hive query case execution with UDF defined [Spark Branch]

2015-01-22 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9410:

Attachment: HIVE-9410.4-spark.patch

 ClassNotFoundException occurs during hive query case execution with UDF 
 defined [Spark Branch]
 --

 Key: HIVE-9410
 URL: https://issues.apache.org/jira/browse/HIVE-9410
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
 Environment: CentOS 6.5
 JDK1.7
Reporter: Xin Hao
Assignee: Chengxiang Li
 Attachments: HIVE-9410.1-spark.patch, HIVE-9410.2-spark.patch, 
 HIVE-9410.3-spark.patch, HIVE-9410.4-spark.patch


 We have a hive query case with a UDF defined (i.e. BigBench case Q10, Q18 
 etc.). It passes in default Hive (on MR) mode, while it fails in Hive 
 On Spark mode (both Standalone and Yarn-Client). 
 Although we use 'add jar .jar;' to add the UDF jar explicitly, the issue 
 still exists. 
 BTW, if we put the UDF jar into the $HIVE_HOME/lib dir, the case passes.
 The detailed error message is below (NOTE: 
 de.bankmark.bigbench.queries.q10.SentimentUDF is the UDF contained in the 
 jar bigbenchqueriesmr.jar, and we added a command like 'add jar 
 /location/to/bigbenchqueriesmr.jar;' into the .sql explicitly)
 INFO  [pool-1-thread-1]: client.RemoteDriver (RemoteDriver.java:call(316)) - 
 Failed to run job 8dd120cb-1a4d-4d1c-ba31-61eac648c27d
 org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: de.bankmark.bigbench.queries.q10.SentimentUDF
 Serialization trace:
 genericUDTF (org.apache.hadoop.hive.ql.plan.UDTFDesc)
 conf (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.MapJoinOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.FilterOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 right (org.apache.commons.lang3.tuple.ImmutablePair)
 edgeProperties (org.apache.hadoop.hive.ql.plan.SparkWork)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 ...
 Caused by: java.lang.ClassNotFoundException: 
 de.bankmark.bigbench.queries.q10.SentimentUDF
 at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
 at java.lang.Class.forName0(Native Method)
 at java.lang.Class.forName(Class.java:270)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:136)
 ... 55 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9449) Push YARN configuration to Spark while deploying Spark on YARN[Spark Branch]

2015-01-22 Thread Chengxiang Li (JIRA)
Chengxiang Li created HIVE-9449:
---

 Summary: Push YARN configuration to Spark while deploying Spark on 
YARN[Spark Branch]
 Key: HIVE-9449
 URL: https://issues.apache.org/jira/browse/HIVE-9449
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li


We only push Spark configuration and RSC configuration to Spark while launching 
the Spark cluster now; for Spark on YARN mode, Spark needs extra YARN 
configuration to launch the Spark cluster. Besides this, to support dynamic 
configuration setting for the RSC configuration/YARN configuration, we need to 
recreate the SparkSession whenever the RSC configuration/YARN configuration 
updates, as they may influence the Spark cluster deployment as well.
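
A hedged sketch of the idea, under the assumption that forwarding works through
Spark's real spark.hadoop.* property prefix; the class and method names below
are invented for illustration:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.hadoop.conf.Configuration;

    public class YarnConfPushSketch {
      // Copy yarn.* entries from the Hadoop/Hive configuration into the
      // property map used to launch the remote Spark cluster. Spark applies
      // any key prefixed with "spark.hadoop." to its own Hadoop Configuration.
      static Map<String, String> pushYarnConf(Configuration hiveConf) {
        Map<String, String> sparkConf = new HashMap<>();
        for (Map.Entry<String, String> e : hiveConf) {
          if (e.getKey().startsWith("yarn.")) {
            sparkConf.put("spark.hadoop." + e.getKey(), e.getValue());
          }
        }
        return sparkConf;
      }
    }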



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9449) Push YARN configuration to Spark while deploying Spark on YARN[Spark Branch]

2015-01-22 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9449:

Attachment: HIVE-9449.1-spark.patch

 Push YARN configuration to Spark while deploying Spark on YARN[Spark Branch]
 

 Key: HIVE-9449
 URL: https://issues.apache.org/jira/browse/HIVE-9449
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
 Attachments: HIVE-9449.1-spark.patch


 We only push Spark configuration and RSC configuration to Spark while 
 launching the Spark cluster now; for Spark on YARN mode, Spark needs extra 
 YARN configuration to launch the Spark cluster. Besides this, to support 
 dynamic configuration setting for the RSC configuration/YARN configuration, 
 we need to recreate the SparkSession whenever the RSC configuration/YARN 
 configuration updates, as they may influence the Spark cluster deployment as 
 well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9449) Push YARN configuration to Spark while deploying Spark on YARN[Spark Branch]

2015-01-22 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9449:

Status: Patch Available  (was: Open)

 Push YARN configuration to Spark while deploying Spark on YARN[Spark Branch]
 

 Key: HIVE-9449
 URL: https://issues.apache.org/jira/browse/HIVE-9449
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
 Attachments: HIVE-9449.1-spark.patch


 We only push Spark configuration and RSC configuration to Spark while 
 launching the Spark cluster now; for Spark on YARN mode, Spark needs extra 
 YARN configuration to launch the Spark cluster. Besides this, to support 
 dynamic configuration setting for the RSC configuration/YARN configuration, 
 we need to recreate the SparkSession whenever the RSC configuration/YARN 
 configuration updates, as they may influence the Spark cluster deployment as 
 well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9410) ClassNotFoundException occurs during hive query case execution with UDF defined [Spark Branch]

2015-01-22 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9410:

Attachment: HIVE-9410.4-spark.patch

 ClassNotFoundException occurs during hive query case execution with UDF 
 defined [Spark Branch]
 --

 Key: HIVE-9410
 URL: https://issues.apache.org/jira/browse/HIVE-9410
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
 Environment: CentOS 6.5
 JDK1.7
Reporter: Xin Hao
Assignee: Chengxiang Li
 Attachments: HIVE-9410.1-spark.patch, HIVE-9410.2-spark.patch, 
 HIVE-9410.3-spark.patch, HIVE-9410.4-spark.patch


 We have a hive query case with a UDF defined (i.e. BigBench case Q10, Q18 
 etc.). It passes in default Hive (on MR) mode, while it fails in Hive 
 On Spark mode (both Standalone and Yarn-Client). 
 Although we use 'add jar .jar;' to add the UDF jar explicitly, the issue 
 still exists. 
 BTW, if we put the UDF jar into the $HIVE_HOME/lib dir, the case passes.
 The detailed error message is below (NOTE: 
 de.bankmark.bigbench.queries.q10.SentimentUDF is the UDF contained in the 
 jar bigbenchqueriesmr.jar, and we added a command like 'add jar 
 /location/to/bigbenchqueriesmr.jar;' into the .sql explicitly)
 INFO  [pool-1-thread-1]: client.RemoteDriver (RemoteDriver.java:call(316)) - 
 Failed to run job 8dd120cb-1a4d-4d1c-ba31-61eac648c27d
 org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: de.bankmark.bigbench.queries.q10.SentimentUDF
 Serialization trace:
 genericUDTF (org.apache.hadoop.hive.ql.plan.UDTFDesc)
 conf (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.MapJoinOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.FilterOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 right (org.apache.commons.lang3.tuple.ImmutablePair)
 edgeProperties (org.apache.hadoop.hive.ql.plan.SparkWork)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 ...
 Caused by: java.lang.ClassNotFoundException: 
 de.bankmark.bigbench.queries.q10.SentimentUDF
 at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
 at java.lang.Class.forName0(Native Method)
 at java.lang.Class.forName(Class.java:270)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:136)
 ... 55 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9410) ClassNotFoundException occurs during hive query case execution with UDF defined [Spark Branch]

2015-01-21 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286742#comment-14286742
 ] 

Chengxiang Li commented on HIVE-9410:
-

The udf_example_add.q query has a LIMIT 1, so it would be executed locally 
instead of starting a Spark job; I would take a look at how to enhance the 
qtest to cover this issue. For the patch, yes, the change in 
AddJarJob/JobStatusJob is in Hive driver side code, but the jobs are actually 
executed on the RemoteDriver side. I would check with Hao Xin offline to see 
why it does not fix the problem.
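
A minimal sketch of how a qtest harness could force such a query through the
Spark engine; hive.execution.engine and hive.fetch.task.conversion are real
Hive settings, while the class around them is illustrative:

    import org.apache.hadoop.hive.conf.HiveConf;

    public class ForceSparkJobSketch {
      public static void main(String[] args) {
        HiveConf conf = new HiveConf();
        conf.set("hive.execution.engine", "spark");
        // "none" disables the fetch-task optimization, so even a small
        // LIMIT 1 query compiles into a Spark job and actually exercises
        // UDF class loading on the remote side.
        conf.set("hive.fetch.task.conversion", "none");
      }
    }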

 ClassNotFoundException occurs during hive query case execution with UDF 
 defined [Spark Branch]
 --

 Key: HIVE-9410
 URL: https://issues.apache.org/jira/browse/HIVE-9410
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
 Environment: CentOS 6.5
 JDK1.7
Reporter: Xin Hao
Assignee: Chengxiang Li
 Attachments: HIVE-9410.1-spark.patch


 We have a hive query case with a UDF defined (i.e. BigBench case Q10, Q18 
 etc.). It passes in default Hive (on MR) mode, while it fails in Hive 
 On Spark mode (both Standalone and Yarn-Client). 
 Although we use 'add jar .jar;' to add the UDF jar explicitly, the issue 
 still exists. 
 BTW, if we put the UDF jar into the $HIVE_HOME/lib dir, the case passes.
 The detailed error message is below (NOTE: 
 de.bankmark.bigbench.queries.q10.SentimentUDF is the UDF contained in the 
 jar bigbenchqueriesmr.jar, and we added a command like 'add jar 
 /location/to/bigbenchqueriesmr.jar;' into the .sql explicitly)
 INFO  [pool-1-thread-1]: client.RemoteDriver (RemoteDriver.java:call(316)) - 
 Failed to run job 8dd120cb-1a4d-4d1c-ba31-61eac648c27d
 org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: de.bankmark.bigbench.queries.q10.SentimentUDF
 Serialization trace:
 genericUDTF (org.apache.hadoop.hive.ql.plan.UDTFDesc)
 conf (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.MapJoinOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.FilterOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 right (org.apache.commons.lang3.tuple.ImmutablePair)
 edgeProperties (org.apache.hadoop.hive.ql.plan.SparkWork)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 ...
 Caused by: java.lang.ClassNotFoundException: 
 de.bankmark.bigbench.queries.q10.SentimentUDF
 at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
 at java.lang.Class.forName0(Native Method)
 at java.lang.Class.forName(Class.java:270)
 at 
 

[jira] [Updated] (HIVE-9410) ClassNotFoundException occurs during hive query case execution with UDF defined [Spark Branch]

2015-01-21 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9410:

Attachment: HIVE-9410.2-spark.patch

[~xhao1], I updated the patch, and it passed in my BigBench environment on 
Q10; please help to verify as well.

 ClassNotFoundException occurs during hive query case execution with UDF 
 defined [Spark Branch]
 --

 Key: HIVE-9410
 URL: https://issues.apache.org/jira/browse/HIVE-9410
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
 Environment: CentOS 6.5
 JDK1.7
Reporter: Xin Hao
Assignee: Chengxiang Li
 Attachments: HIVE-9410.1-spark.patch, HIVE-9410.2-spark.patch


 We have a hive query case with a UDF defined (i.e. BigBench case Q10, Q18 
 etc.). It passes in default Hive (on MR) mode, while it fails in Hive 
 On Spark mode (both Standalone and Yarn-Client). 
 Although we use 'add jar .jar;' to add the UDF jar explicitly, the issue 
 still exists. 
 BTW, if we put the UDF jar into the $HIVE_HOME/lib dir, the case passes.
 The detailed error message is below (NOTE: 
 de.bankmark.bigbench.queries.q10.SentimentUDF is the UDF contained in the 
 jar bigbenchqueriesmr.jar, and we added a command like 'add jar 
 /location/to/bigbenchqueriesmr.jar;' into the .sql explicitly)
 INFO  [pool-1-thread-1]: client.RemoteDriver (RemoteDriver.java:call(316)) - 
 Failed to run job 8dd120cb-1a4d-4d1c-ba31-61eac648c27d
 org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: de.bankmark.bigbench.queries.q10.SentimentUDF
 Serialization trace:
 genericUDTF (org.apache.hadoop.hive.ql.plan.UDTFDesc)
 conf (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.MapJoinOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.FilterOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 right (org.apache.commons.lang3.tuple.ImmutablePair)
 edgeProperties (org.apache.hadoop.hive.ql.plan.SparkWork)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 ...
 Caused by: java.lang.ClassNotFoundException: 
 de.bankmark.bigbench.queries.q10.SentimentUDF
 at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
 at java.lang.Class.forName0(Native Method)
 at java.lang.Class.forName(Class.java:270)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:136)
 ... 55 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9410) ClassNotFoundException occurs during hive query case execution with UDF defined [Spark Branch]

2015-01-21 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286920#comment-14286920
 ] 

Chengxiang Li commented on HIVE-9410:
-

By the way, the enhanced qtest is not ready yet.

 ClassNotFoundException occurs during hive query case execution with UDF 
 defined [Spark Branch]
 --

 Key: HIVE-9410
 URL: https://issues.apache.org/jira/browse/HIVE-9410
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
 Environment: CentOS 6.5
 JDK1.7
Reporter: Xin Hao
Assignee: Chengxiang Li
 Attachments: HIVE-9410.1-spark.patch, HIVE-9410.2-spark.patch


 We have a hive query case with a UDF defined (i.e. BigBench case Q10, Q18 
 etc.). It passes in default Hive (on MR) mode, while it fails in Hive 
 On Spark mode (both Standalone and Yarn-Client). 
 Although we use 'add jar .jar;' to add the UDF jar explicitly, the issue 
 still exists. 
 BTW, if we put the UDF jar into the $HIVE_HOME/lib dir, the case passes.
 The detailed error message is below (NOTE: 
 de.bankmark.bigbench.queries.q10.SentimentUDF is the UDF contained in the 
 jar bigbenchqueriesmr.jar, and we added a command like 'add jar 
 /location/to/bigbenchqueriesmr.jar;' into the .sql explicitly)
 INFO  [pool-1-thread-1]: client.RemoteDriver (RemoteDriver.java:call(316)) - 
 Failed to run job 8dd120cb-1a4d-4d1c-ba31-61eac648c27d
 org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: de.bankmark.bigbench.queries.q10.SentimentUDF
 Serialization trace:
 genericUDTF (org.apache.hadoop.hive.ql.plan.UDTFDesc)
 conf (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.MapJoinOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.FilterOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 right (org.apache.commons.lang3.tuple.ImmutablePair)
 edgeProperties (org.apache.hadoop.hive.ql.plan.SparkWork)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 ...
 Caused by: java.lang.ClassNotFoundException: 
 de.bankmark.bigbench.queries.q10.SentimentUDF
 at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
 at java.lang.Class.forName0(Native Method)
 at java.lang.Class.forName(Class.java:270)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:136)
 ... 55 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9370) SparkJobMonitor timeout as sortByKey would launch extra Spark job before original job get submitted [Spark Branch]

2015-01-21 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9370:

Summary: SparkJobMonitor timeout as sortByKey would launch extra Spark job 
before original job get submitted [Spark Branch]  (was: Enable Hive on Spark 
for BigBench and run Query 8, the test failed [Spark Branch])
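
For context on the new summary, a minimal sketch of the behavior, assuming a
live JavaSparkContext: sortByKey is not fully lazy, because constructing its
RangePartitioner samples the input right away:

    import java.util.Arrays;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class SortByKeySketch {
      static void demo(JavaSparkContext sc) {
        JavaPairRDD<Integer, String> pairs = sc.parallelizePairs(Arrays.asList(
            new Tuple2<>(3, "c"), new Tuple2<>(1, "a"), new Tuple2<>(2, "b")));
        // This call alone already runs a small sampling job (the
        // RangePartitioner.sketch/collect frames in the stack trace below),
        // before any action is invoked on the sorted RDD. That extra job is
        // what SparkJobMonitor saw while waiting for the "original" job to
        // be submitted.
        JavaPairRDD<Integer, String> sorted = pairs.sortByKey();
        System.out.println(sorted.count()); // the "original" job
      }
    }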

 SparkJobMonitor timeout as sortByKey would launch extra Spark job before 
 original job get submitted [Spark Branch]
 --

 Key: HIVE-9370
 URL: https://issues.apache.org/jira/browse/HIVE-9370
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: yuyun.chen
Assignee: Chengxiang Li

 Enabled Hive on Spark and ran BigBench Query 8, then got the following 
 exception:
 2015-01-14 11:43:46,057 INFO  [main]: impl.RemoteSparkJobStatus 
 (RemoteSparkJobStatus.java:getSparkJobInfo(143)) - Job hasn't been submitted 
 after 30s. Aborting it.
 2015-01-14 11:43:46,061 INFO  [main]: impl.RemoteSparkJobStatus 
 (RemoteSparkJobStatus.java:getSparkJobInfo(143)) - Job hasn't been submitted 
 after 30s. Aborting it.
 2015-01-14 11:43:46,061 ERROR [main]: status.SparkJobMonitor 
 (SessionState.java:printError(839)) - Status: Failed
 2015-01-14 11:43:46,062 INFO  [main]: log.PerfLogger 
 (PerfLogger.java:PerfLogEnd(148)) - /PERFLOG method=SparkRunJob 
 start=1421206996052 end=1421207026062 duration=30010 
 from=org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) - 15/01/14 11:43:46 INFO RemoteDriver: Failed 
 to run job 0a9a7782-0e0b-4561-8468-959a6d8df0a3
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) - java.lang.InterruptedException
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at java.lang.Object.wait(Native 
 Method)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 java.lang.Object.wait(Object.java:503)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.scheduler.JobWaiter.awaitResult(JobWaiter.scala:73)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:514)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1282)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1300)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1314)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1328)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.rdd.RDD.collect(RDD.scala:780)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.RangePartitioner$.sketch(Partitioner.scala:262)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.RangePartitioner.&lt;init&gt;(Partitioner.scala:124)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.rdd.OrderedRDDFunctions.sortByKey(OrderedRDDFunctions.scala:63)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:894)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:864)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.hadoop.hive.ql.exec.spark.SortByShuffler.shuffle(SortByShuffler.java:48)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.hadoop.hive.ql.exec.spark.ShuffleTran.transform(ShuffleTran.java:45)
 2015-01-14 

[jira] [Commented] (HIVE-9342) add num-executors / executor-cores / executor-memory option support for hive on spark in Yarn mode [Spark Branch]

2015-01-20 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285061#comment-14285061
 ] 

Chengxiang Li commented on HIVE-9342:
-

Wow! This is the first time I have seen all checks pass from Hive QA, what an 
exciting message!

 add num-executors / executor-cores / executor-memory option support for hive 
 on spark in Yarn mode [Spark Branch]
 -

 Key: HIVE-9342
 URL: https://issues.apache.org/jira/browse/HIVE-9342
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Affects Versions: spark-branch
Reporter: Pierre Yin
Priority: Minor
  Labels: spark
 Fix For: spark-branch

 Attachments: HIVE-9342.1-spark.patch, HIVE-9342.2-spark.patch, 
 HIVE-9342.3-spark.patch


 When I run Hive on Spark in YARN mode, I want to control some YARN options, 
 such as --num-executors, --executor-cores, and --executor-memory.
 We can append these options to the argv in SparkClientImpl.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9410) Spark branch, ClassNotFoundException occurs during hive query case execution with UDF defined [Spark Branch]

2015-01-20 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9410:

Status: Patch Available  (was: Open)

 Spark branch, ClassNotFoundException occurs during hive query case execution 
 with UDF defined [Spark Branch]
 

 Key: HIVE-9410
 URL: https://issues.apache.org/jira/browse/HIVE-9410
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
 Environment: CentOS 6.5
 JDK1.7
Reporter: Xin Hao
Assignee: Chengxiang Li
 Attachments: HIVE-9410.1-spark.patch


 We have a Hive query case with a UDF defined (e.g. BigBench cases Q10, Q18, 
 etc.). It passes in default Hive (on MR) mode but fails in Hive on Spark mode 
 (both Standalone and Yarn-Client). 
 Although we use 'add jar .jar;' to add the UDF jar explicitly, the issue 
 still exists. 
 BTW, if we put the UDF jar into the $HIVE_HOME/lib dir, the case passes.
 The detailed error message is below (NOTE: 
 de.bankmark.bigbench.queries.q10.SentimentUDF is the UDF contained in the jar 
 bigbenchqueriesmr.jar, and we have added a command like 'add jar 
 /location/to/bigbenchqueriesmr.jar;' to the .sql explicitly)
 INFO  [pool-1-thread-1]: client.RemoteDriver (RemoteDriver.java:call(316)) - 
 Failed to run job 8dd120cb-1a4d-4d1c-ba31-61eac648c27d
 org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: de.bankmark.bigbench.queries.q10.SentimentUDF
 Serialization trace:
 genericUDTF (org.apache.hadoop.hive.ql.plan.UDTFDesc)
 conf (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.MapJoinOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.FilterOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 right (org.apache.commons.lang3.tuple.ImmutablePair)
 edgeProperties (org.apache.hadoop.hive.ql.plan.SparkWork)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 ...
 Caused by: java.lang.ClassNotFoundException: 
 de.bankmark.bigbench.queries.q10.SentimentUDF
 at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
 at java.lang.Class.forName0(Native Method)
 at java.lang.Class.forName(Class.java:270)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:136)
 ... 55 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9410) Spark branch, ClassNotFoundException occurs during hive query case execution with UDF defined [Spark Branch]

2015-01-20 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9410:

Attachment: HIVE-9410.1-spark.patch

The RemoteDriver does not contain the added jars in its classpath, so it fails 
to deserialize SparkWork with a ClassNotFoundException. For Hive on MR, when a 
jar is added through the Hive CLI, Hive adds it to the CLI classpath (through 
the thread context classloader) and to the distributed cache as well. Compared 
to Hive on MR, Hive on Spark has an extra RemoteDriver component, so we should 
add the added jars to its classpath as well.
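
A minimal Java sketch of that idea, not the actual patch: extend the thread 
context classloader of the driver process with the added jar so that class 
lookups during SparkWork deserialization can resolve the UDF. The class and 
method names here are illustrative assumptions.

import java.net.URL;
import java.net.URLClassLoader;

public final class AddedJarClassLoaderSketch {
  // Hypothetical helper: wrap the current context classloader in a
  // URLClassLoader that also sees the added jar, then install it back on
  // the thread, so later Class.forName() lookups can find the UDF classes.
  public static void addJarToContextClassLoader(URL jarUrl) {
    ClassLoader current = Thread.currentThread().getContextClassLoader();
    URLClassLoader extended = new URLClassLoader(new URL[] { jarUrl }, current);
    Thread.currentThread().setContextClassLoader(extended);
  }
}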

 Spark branch, ClassNotFoundException occurs during hive query case execution 
 with UDF defined [Spark Branch]
 

 Key: HIVE-9410
 URL: https://issues.apache.org/jira/browse/HIVE-9410
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
 Environment: CentOS 6.5
 JDK1.7
Reporter: Xin Hao
Assignee: Chengxiang Li
 Attachments: HIVE-9410.1-spark.patch


 We have a Hive query case with a UDF defined (e.g. BigBench cases Q10, Q18, 
 etc.). It passes in default Hive (on MR) mode but fails in Hive on Spark mode 
 (both Standalone and Yarn-Client). 
 Although we use 'add jar .jar;' to add the UDF jar explicitly, the issue 
 still exists. 
 BTW, if we put the UDF jar into the $HIVE_HOME/lib dir, the case passes.
 The detailed error message is below (NOTE: 
 de.bankmark.bigbench.queries.q10.SentimentUDF is the UDF contained in the jar 
 bigbenchqueriesmr.jar, and we have added a command like 'add jar 
 /location/to/bigbenchqueriesmr.jar;' to the .sql explicitly)
 INFO  [pool-1-thread-1]: client.RemoteDriver (RemoteDriver.java:call(316)) - 
 Failed to run job 8dd120cb-1a4d-4d1c-ba31-61eac648c27d
 org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: de.bankmark.bigbench.queries.q10.SentimentUDF
 Serialization trace:
 genericUDTF (org.apache.hadoop.hive.ql.plan.UDTFDesc)
 conf (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.MapJoinOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.FilterOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 right (org.apache.commons.lang3.tuple.ImmutablePair)
 edgeProperties (org.apache.hadoop.hive.ql.plan.SparkWork)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 ...
 Caused by: java.lang.ClassNotFoundException: 
 de.bankmark.bigbench.queries.q10.SentimentUDF
 at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
 at java.lang.Class.forName0(Native Method)
 at 

[jira] [Updated] (HIVE-9410) Spark branch, ClassNotFoundException occurs during hive query case execution with UDF defined [Spark Branch]

2015-01-20 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9410:

Attachment: HIVE-9410.1-spark.patch

 Spark branch, ClassNotFoundException occurs during hive query case execution 
 with UDF defined [Spark Branch]
 

 Key: HIVE-9410
 URL: https://issues.apache.org/jira/browse/HIVE-9410
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
 Environment: CentOS 6.5
 JDK1.7
Reporter: Xin Hao
Assignee: Chengxiang Li
 Attachments: HIVE-9410.1-spark.patch


 We have a Hive query case with a UDF defined (e.g. BigBench cases Q10, Q18, 
 etc.). It passes in default Hive (on MR) mode but fails in Hive on Spark mode 
 (both Standalone and Yarn-Client). 
 Although we use 'add jar .jar;' to add the UDF jar explicitly, the issue 
 still exists. 
 BTW, if we put the UDF jar into the $HIVE_HOME/lib dir, the case passes.
 The detailed error message is below (NOTE: 
 de.bankmark.bigbench.queries.q10.SentimentUDF is the UDF contained in the jar 
 bigbenchqueriesmr.jar, and we have added a command like 'add jar 
 /location/to/bigbenchqueriesmr.jar;' to the .sql explicitly)
 INFO  [pool-1-thread-1]: client.RemoteDriver (RemoteDriver.java:call(316)) - 
 Failed to run job 8dd120cb-1a4d-4d1c-ba31-61eac648c27d
 org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: de.bankmark.bigbench.queries.q10.SentimentUDF
 Serialization trace:
 genericUDTF (org.apache.hadoop.hive.ql.plan.UDTFDesc)
 conf (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.MapJoinOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.FilterOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 right (org.apache.commons.lang3.tuple.ImmutablePair)
 edgeProperties (org.apache.hadoop.hive.ql.plan.SparkWork)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 ...
 Caused by: java.lang.ClassNotFoundException: 
 de.bankmark.bigbench.queries.q10.SentimentUDF
 at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
 at java.lang.Class.forName0(Native Method)
 at java.lang.Class.forName(Class.java:270)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:136)
 ... 55 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9410) Spark branch, ClassNotFoundException occurs during hive query case execution with UDF defined [Spark Branch]

2015-01-20 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9410:

Attachment: (was: HIVE-9410.1-spark.patch)

 Spark branch, ClassNotFoundException occurs during hive query case execution 
 with UDF defined [Spark Branch]
 

 Key: HIVE-9410
 URL: https://issues.apache.org/jira/browse/HIVE-9410
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
 Environment: CentOS 6.5
 JDK1.7
Reporter: Xin Hao
Assignee: Chengxiang Li
 Attachments: HIVE-9410.1-spark.patch


 We have a Hive query case with a UDF defined (e.g. BigBench cases Q10, Q18, 
 etc.). It passes in default Hive (on MR) mode but fails in Hive on Spark mode 
 (both Standalone and Yarn-Client). 
 Although we use 'add jar .jar;' to add the UDF jar explicitly, the issue 
 still exists. 
 BTW, if we put the UDF jar into the $HIVE_HOME/lib dir, the case passes.
 The detailed error message is below (NOTE: 
 de.bankmark.bigbench.queries.q10.SentimentUDF is the UDF contained in the jar 
 bigbenchqueriesmr.jar, and we have added a command like 'add jar 
 /location/to/bigbenchqueriesmr.jar;' to the .sql explicitly)
 INFO  [pool-1-thread-1]: client.RemoteDriver (RemoteDriver.java:call(316)) - 
 Failed to run job 8dd120cb-1a4d-4d1c-ba31-61eac648c27d
 org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: de.bankmark.bigbench.queries.q10.SentimentUDF
 Serialization trace:
 genericUDTF (org.apache.hadoop.hive.ql.plan.UDTFDesc)
 conf (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.MapJoinOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.FilterOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 right (org.apache.commons.lang3.tuple.ImmutablePair)
 edgeProperties (org.apache.hadoop.hive.ql.plan.SparkWork)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 ...
 Caused by: java.lang.ClassNotFoundException: 
 de.bankmark.bigbench.queries.q10.SentimentUDF
 at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
 at java.lang.Class.forName0(Native Method)
 at java.lang.Class.forName(Class.java:270)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:136)
 ... 55 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9425) External Function Jar files are not available for Driver when running with yarn-cluster mode [Spark Branch]

2015-01-20 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285309#comment-14285309
 ] 

Chengxiang Li commented on HIVE-9425:
-

Seems not the same case: in yarn-cluster mode the Spark driver may be located 
in a remote container, and AddJarJob only transfers the jar path, which may not 
exist on the remote node when it is a local fs path; the previous jar-adding 
errors may be related to this.
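
As a hedged illustration of the suspected failure mode: a bare local path only 
resolves on the machine where the file exists, while a scheme-qualified URI can 
be fetched from any node. The helper below is a hypothetical sketch, not code 
from any patch.

import java.net.URI;

public final class JarPathSketch {
  // Returns true when the jar path has no scheme (or an explicit file:
  // scheme), i.e. it only resolves on the submitting machine, and a driver
  // running in a remote YARN container would fail to find it.
  public static boolean isLocalOnly(String jarPath) {
    String scheme = URI.create(jarPath).getScheme();
    return scheme == null || "file".equals(scheme);
  }

  public static void main(String[] args) {
    System.out.println(isLocalOnly("/tmp/bigbenchqueriesmr.jar")); // true
    System.out.println(isLocalOnly("hdfs://nn:8020/tmp/udf.jar")); // false
  }
}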

 External Function Jar files are not available for Driver when running with 
 yarn-cluster mode [Spark Branch]
 ---

 Key: HIVE-9425
 URL: https://issues.apache.org/jira/browse/HIVE-9425
 Project: Hive
  Issue Type: Sub-task
  Components: spark-branch
Reporter: Xiaomin Zhang

 15/01/20 00:27:31 INFO cluster.YarnClusterScheduler: 
 YarnClusterScheduler.postStartHook done
 15/01/20 00:27:31 ERROR spark.SparkContext: Error adding jar 
 (java.io.FileNotFoundException: hive-exec-0.15.0-SNAPSHOT.jar (No such file 
 or directory)), was the --addJars option used?
 15/01/20 00:27:31 ERROR spark.SparkContext: Error adding jar 
 (java.io.FileNotFoundException: opennlp-maxent-3.0.3.jar (No such file or 
 directory)), was the --addJars option used?
 15/01/20 00:27:31 ERROR spark.SparkContext: Error adding jar 
 (java.io.FileNotFoundException: bigbenchqueriesmr.jar (No such file or 
 directory)), was the --addJars option used?
 15/01/20 00:27:31 ERROR spark.SparkContext: Error adding jar 
 (java.io.FileNotFoundException: opennlp-tools-1.5.3.jar (No such file or 
 directory)), was the --addJars option used?
 15/01/20 00:27:31 ERROR spark.SparkContext: Error adding jar 
 (java.io.FileNotFoundException: jcl-over-slf4j-1.7.5.jar (No such file or 
 directory)), was the --addJars option used?
 15/01/20 00:27:31 INFO client.RemoteDriver: Received job request 
 fef081b0-5408-4804-9531-d131fdd628e6
 15/01/20 00:27:31 INFO Configuration.deprecation: mapred.max.split.size is 
 deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
 15/01/20 00:27:31 INFO Configuration.deprecation: mapred.min.split.size is 
 deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
 15/01/20 00:27:31 INFO client.RemoteDriver: Failed to run job 
 fef081b0-5408-4804-9531-d131fdd628e6
 org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: de.bankmark.bigbench.queries.q10.SentimentUDF
 Serialization trace:
 genericUDTF (org.apache.hadoop.hive.ql.plan.UDTFDesc)
 conf (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 invertedWorkGraph (org.apache.hadoop.hive.ql.plan.SparkWork)
   at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
   at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
 It seems the additional jar files are not uploaded to the DistributedCache, so 
 the driver cannot access them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9337) Move more hive.spark.* configurations to HiveConf

2015-01-20 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285054#comment-14285054
 ] 

Chengxiang Li commented on HIVE-9337:
-

+1

 Move more hive.spark.* configurations to HiveConf
 -

 Key: HIVE-9337
 URL: https://issues.apache.org/jira/browse/HIVE-9337
 Project: Hive
  Issue Type: Task
  Components: Spark
Reporter: Szehon Ho
Assignee: Szehon Ho
 Attachments: HIVE-9337-spark.patch, HIVE-9337.2-spark.patch


 Some hive.spark configurations have been added to HiveConf, but there are 
 some like hive.spark.log.dir that are not there.
 Also some configurations in RpcConfiguration.java might be eligible to be 
 moved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9395) Make WAIT_SUBMISSION_TIMEOUT configurable and check timeout in SparkJobMonitor level. [Spark Branch]

2015-01-20 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285243#comment-14285243
 ] 

Chengxiang Li commented on HIVE-9395:
-

Added hive.spark.job.monitor.timeout to HiveConf.
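
For illustration only, reading such a property through a plain Hadoop 
Configuration; the real code path goes through HiveConf, and the default value 
below is a placeholder assumption, not the one in the patch.

import org.apache.hadoop.conf.Configuration;

public class MonitorTimeoutConfSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // hive.spark.job.monitor.timeout: how long the monitor waits for a job
    // state before giving up; the 60-second default here is arbitrary.
    long timeoutSecs = conf.getLong("hive.spark.job.monitor.timeout", 60L);
    System.out.println("monitor timeout: " + timeoutSecs + "s");
  }
}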

 Make WAIT_SUBMISSION_TIMEOUT configurable and check timeout in SparkJobMonitor 
 level. [Spark Branch]
 --

 Key: HIVE-9395
 URL: https://issues.apache.org/jira/browse/HIVE-9395
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5, TODOC-SPARK
 Fix For: spark-branch

 Attachments: HIVE-9395.1-spark.patch, HIVE-9395.2-spark.patch


 SparkJobMonitor may hang if the job state returns null all the time; we should 
 move the timeout check here to avoid it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9410) Spark branch, ClassNotFoundException occurs during hive query case execution with UDF defined [Spark Branch]

2015-01-20 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285291#comment-14285291
 ] 

Chengxiang Li commented on HIVE-9410:
-

[~xhao1], please help verify this patch in your environment as well, if 
available.

 Spark branch, ClassNotFoundException occurs during hive query case execution 
 with UDF defined [Spark Branch]
 

 Key: HIVE-9410
 URL: https://issues.apache.org/jira/browse/HIVE-9410
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
 Environment: CentOS 6.5
 JDK1.7
Reporter: Xin Hao
Assignee: Chengxiang Li
 Attachments: HIVE-9410.1-spark.patch


 We have a Hive query case with a UDF defined (e.g. BigBench cases Q10, Q18, 
 etc.). It passes in default Hive (on MR) mode but fails in Hive on Spark mode 
 (both Standalone and Yarn-Client). 
 Although we use 'add jar .jar;' to add the UDF jar explicitly, the issue 
 still exists. 
 BTW, if we put the UDF jar into the $HIVE_HOME/lib dir, the case passes.
 The detailed error message is below (NOTE: 
 de.bankmark.bigbench.queries.q10.SentimentUDF is the UDF contained in the jar 
 bigbenchqueriesmr.jar, and we have added a command like 'add jar 
 /location/to/bigbenchqueriesmr.jar;' to the .sql explicitly)
 INFO  [pool-1-thread-1]: client.RemoteDriver (RemoteDriver.java:call(316)) - 
 Failed to run job 8dd120cb-1a4d-4d1c-ba31-61eac648c27d
 org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: de.bankmark.bigbench.queries.q10.SentimentUDF
 Serialization trace:
 genericUDTF (org.apache.hadoop.hive.ql.plan.UDTFDesc)
 conf (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.MapJoinOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.FilterOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 right (org.apache.commons.lang3.tuple.ImmutablePair)
 edgeProperties (org.apache.hadoop.hive.ql.plan.SparkWork)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 ...
 Caused by: java.lang.ClassNotFoundException: 
 de.bankmark.bigbench.queries.q10.SentimentUDF
 at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
 at java.lang.Class.forName0(Native Method)
 at java.lang.Class.forName(Class.java:270)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:136)
 ... 55 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9395) Make WAIT_SUBMISSION_TIMEOUT configurable and check timeout in SparkJobMonitor level. [Spark Branch]

2015-01-20 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9395:

Labels: Spark-M5 TODOC-SPARK  (was: Spark-M5)

 Make WAIT_SUBMISSION_TIMEOUT configurable and check timeout in SparkJobMonitor 
 level. [Spark Branch]
 --

 Key: HIVE-9395
 URL: https://issues.apache.org/jira/browse/HIVE-9395
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5, TODOC-SPARK
 Fix For: spark-branch

 Attachments: HIVE-9395.1-spark.patch, HIVE-9395.2-spark.patch


 SparkJobMonitor may hang if the job state returns null all the time; we should 
 move the timeout check here to avoid it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9395) Make WAIT_SUBMISSION_TIMEOUT configurable and check timeout in SparkJobMonitor level. [Spark Branch]

2015-01-19 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283298#comment-14283298
 ] 

Chengxiang Li commented on HIVE-9395:
-

From the SparkJobMonitor side, if the job state is always null, it should time 
out after a certain interval; otherwise it would hang forever.

 Make WAIT_SUBMISSION_TIMEOUT configurable and check timeout in SparkJobMonitor 
 level. [Spark Branch]
 --

 Key: HIVE-9395
 URL: https://issues.apache.org/jira/browse/HIVE-9395
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5
 Attachments: HIVE-9395.1-spark.patch


 SparkJobMonitor may hang if the job state returns null all the time; we should 
 move the timeout check here to avoid it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9395) Make WAIT_SUBMISSION_TIMEOUT configurable and check timeout in SparkJobMonitor level. [Spark Branch]

2015-01-19 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283310#comment-14283310
 ] 

Chengxiang Li commented on HIVE-9395:
-

That's a good question. Hive submits Spark jobs asynchronously and monitors the 
job status with SparkJobMonitor; all kinds of errors may happen before the job 
gets executed on the Spark cluster, so we need a timeout in SparkJobMonitor to 
make sure it does not hang when it cannot get the job state at all. This is 
quite important for our unit tests, as once SparkJobMonitor hangs, it blocks 
all the following tests.
When should we decide to time out if we cannot get the state of a job: after 
30s, or 60s? Should it be configurable by the user?
My opinion is to make it configurable, as users may know more about the real 
cluster, which helps them decide whether it is normal that SparkJobMonitor 
cannot get the job state within a certain time.
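
A minimal sketch of the monitoring loop being argued for, assuming a 
hypothetical fetchState() that returns null until the remote driver reports a 
state; the names here are placeholders, not the real SparkJobMonitor code.

public class SparkJobMonitorSketch {
  enum JobState { RUNNING, SUCCEEDED, FAILED }

  // Polls for the job state; if no state at all is seen within timeoutMs,
  // gives up instead of looping forever, which is the behaviour asked for.
  public static JobState monitor(long timeoutMs) throws InterruptedException {
    long lastSeen = System.currentTimeMillis();
    while (true) {
      JobState state = fetchState();
      if (state == JobState.SUCCEEDED || state == JobState.FAILED) {
        return state;                              // terminal state: stop monitoring
      }
      if (state != null) {
        lastSeen = System.currentTimeMillis();     // saw a live state, reset the clock
      } else if (System.currentTimeMillis() - lastSeen > timeoutMs) {
        return JobState.FAILED;                    // no state within the interval: give up
      }
      Thread.sleep(1000);                          // poll interval
    }
  }

  private static JobState fetchState() {
    return null; // stand-in: the real monitor asks the remote driver
  }
}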

 Make WAIT_SUBMISSION_TIMEOUT configurable and check timeout in SparkJobMonitor 
 level. [Spark Branch]
 --

 Key: HIVE-9395
 URL: https://issues.apache.org/jira/browse/HIVE-9395
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5
 Attachments: HIVE-9395.1-spark.patch


 SparkJobMonitor may hang if the job state returns null all the time; we should 
 move the timeout check here to avoid it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9395) Make WAIT_SUBMISSION_TIMEOUT configurable and check timeout in SparkJobMonitor level. [Spark Branch]

2015-01-19 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283313#comment-14283313
 ] 

Chengxiang Li commented on HIVE-9395:
-

Yes, I agree, the scope is a bit larger than submission.

 Make WAIT_SUBMISSION_TIMEOUT configurable and check timeout in SparkJobMonitor 
 level. [Spark Branch]
 --

 Key: HIVE-9395
 URL: https://issues.apache.org/jira/browse/HIVE-9395
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5
 Attachments: HIVE-9395.1-spark.patch


 SparkJobMonitor may hang if the job state returns null all the time; we should 
 move the timeout check here to avoid it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9395) Make WAIT_SUBMISSION_TIMEOUT configurable and check timeout in SparkJobMonitor level. [Spark Branch]

2015-01-19 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9395:

Attachment: HIVE-9395.2-spark.patch

 Make WAIT_SUBMISSION_TIMEOUT configurable and check timeout in SparkJobMonitor 
 level. [Spark Branch]
 --

 Key: HIVE-9395
 URL: https://issues.apache.org/jira/browse/HIVE-9395
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5
 Attachments: HIVE-9395.1-spark.patch, HIVE-9395.2-spark.patch


 SparkJobMonitor may hang if the job state returns null all the time; we should 
 move the timeout check here to avoid it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9342) add num-executors / executor-cores / executor-memory option support for hive on spark in Yarn mode [Spark Branch]

2015-01-19 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9342:

Component/s: (was: spark-branch)
 Spark

 add num-executors / executor-cores / executor-memory option support for hive 
 on spark in Yarn mode [Spark Branch]
 -

 Key: HIVE-9342
 URL: https://issues.apache.org/jira/browse/HIVE-9342
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Affects Versions: spark-branch
Reporter: Pierre Yin
Priority: Minor
  Labels: spark
 Fix For: spark-branch

 Attachments: HIVE-9342.1-spark.patch, HIVE-9342.2-spark.patch, 
 HIVE-9342.3-spark.patch


 When I run Hive on Spark in YARN mode, I want to control some YARN options, 
 such as --num-executors, --executor-cores, and --executor-memory.
 We can append these options to the argv in SparkClientImpl.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9342) add num-executors / executor-cores / executor-memory option support for hive on spark in Yarn mode [Spark Branch]

2015-01-19 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9342:

Attachment: HIVE-9342.3-spark.patch

Rebased the patch onto the current code base. [~xuefuz], please help merge this 
patch if it passes the unit tests.
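
As a rough sketch of what this approach could look like (the issue proposes 
appending these options to the argv built in SparkClientImpl; the class name, 
helper, and property keys below are assumptions, not the actual patch):

import java.util.List;
import java.util.Map;

public class SparkSubmitArgvSketch {
  // Maps executor-sizing properties onto the matching spark-submit flags
  // and appends them to the argument list when they are set.
  static void appendYarnOptions(List<String> argv, Map<String, String> conf) {
    addIfSet(argv, "--num-executors", conf.get("spark.executor.instances"));
    addIfSet(argv, "--executor-cores", conf.get("spark.executor.cores"));
    addIfSet(argv, "--executor-memory", conf.get("spark.executor.memory"));
  }

  private static void addIfSet(List<String> argv, String flag, String value) {
    if (value != null && !value.isEmpty()) {
      argv.add(flag);
      argv.add(value);
    }
  }
}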

 add num-executors / executor-cores / executor-memory option support for hive 
 on spark in Yarn mode [Spark Branch]
 -

 Key: HIVE-9342
 URL: https://issues.apache.org/jira/browse/HIVE-9342
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Affects Versions: spark-branch
Reporter: Pierre Yin
Priority: Minor
  Labels: spark
 Fix For: spark-branch

 Attachments: HIVE-9342.1-spark.patch, HIVE-9342.2-spark.patch, 
 HIVE-9342.3-spark.patch


 When I run Hive on Spark in YARN mode, I want to control some YARN options, 
 such as --num-executors, --executor-cores, and --executor-memory.
 We can append these options to the argv in SparkClientImpl.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9395) Make WAIT_SUBMISSION_TIMEOUT configurable and check timeout in SparkJobMonitor level. [Spark Branch]

2015-01-19 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9395:

Attachment: HIVE-9395.1-spark.patch

 Make WAIT_SUBMISSION_TIMEOUT configurable and check timeout in SparkJobMonitor 
 level. [Spark Branch]
 --

 Key: HIVE-9395
 URL: https://issues.apache.org/jira/browse/HIVE-9395
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5
 Attachments: HIVE-9395.1-spark.patch


 SparkJobMonitor may hang if the job state returns null all the time; we should 
 move the timeout check here to avoid it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9395) Make WAIT_SUBMISSION_TIMEOUT configurable and check timeout in SparkJobMonitor level. [Spark Branch]

2015-01-19 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9395:

Status: Patch Available  (was: Open)

 Make WAIT_SUBMISSION_TIMEOUT configurable and check timeout in SparkJobMonitor 
 level. [Spark Branch]
 --

 Key: HIVE-9395
 URL: https://issues.apache.org/jira/browse/HIVE-9395
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5
 Attachments: HIVE-9395.1-spark.patch


 SparkJobMonitor may hang if the job state returns null all the time; we should 
 move the timeout check here to avoid it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-9410) Spark branch, ClassNotFoundException occurs during hive query case execution with UDF defined [Spark Branch]

2015-01-19 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li reassigned HIVE-9410:
---

Assignee: Chengxiang Li

 Spark branch, ClassNotFoundException occurs during hive query case execution 
 with UDF defined [Spark Branch]
 

 Key: HIVE-9410
 URL: https://issues.apache.org/jira/browse/HIVE-9410
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
 Environment: CentOS 6.5
 JDK1.7
Reporter: Xin Hao
Assignee: Chengxiang Li

 We have a Hive query case with a UDF defined (e.g. BigBench cases Q10, Q18, 
 etc.). It passes in default Hive (on MR) mode but fails in Hive on Spark mode 
 (both Standalone and Yarn-Client). 
 Although we use 'add jar .jar;' to add the UDF jar explicitly, the issue 
 still exists. 
 BTW, if we put the UDF jar into the $HIVE_HOME/lib dir, the case passes.
 The detailed error message is below (NOTE: 
 de.bankmark.bigbench.queries.q10.SentimentUDF is the UDF contained in the jar 
 bigbenchqueriesmr.jar, and we have added a command like 'add jar 
 /location/to/bigbenchqueriesmr.jar;' to the .sql explicitly)
 INFO  [pool-1-thread-1]: client.RemoteDriver (RemoteDriver.java:call(316)) - 
 Failed to run job 8dd120cb-1a4d-4d1c-ba31-61eac648c27d
 org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: de.bankmark.bigbench.queries.q10.SentimentUDF
 Serialization trace:
 genericUDTF (org.apache.hadoop.hive.ql.plan.UDTFDesc)
 conf (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.MapJoinOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.FilterOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 right (org.apache.commons.lang3.tuple.ImmutablePair)
 edgeProperties (org.apache.hadoop.hive.ql.plan.SparkWork)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 ...
 Caused by: java.lang.ClassNotFoundException: 
 de.bankmark.bigbench.queries.q10.SentimentUDF
 at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
 at java.lang.Class.forName0(Native Method)
 at java.lang.Class.forName(Class.java:270)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:136)
 ... 55 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9395) Make WAIT_SUBMISSION_TIMEOUT configurable and check timeout in SparkJobMonitor level. [Spark Branch]

2015-01-19 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14282468#comment-14282468
 ] 

Chengxiang Li commented on HIVE-9395:
-

At the abstract level, the job status should just wrap the job/stage status, 
and the monitor should decide when to time out. In the current implementation 
we do need a timeout in the monitor to break the monitoring loop when it fails 
to get the job state. SparkJobStatus does know more about the job state, but I 
think the monitor does not need extra information to time out: when the monitor 
cannot get the job state within the timeout interval, something has definitely 
gone wrong during job submission, and the job monitor should time out for the 
sake of user experience.

 Make WAIT_SUBMISSION_TIMEOUT configurable and check timeout in SparkJobMonitor 
 level. [Spark Branch]
 --

 Key: HIVE-9395
 URL: https://issues.apache.org/jira/browse/HIVE-9395
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5
 Attachments: HIVE-9395.1-spark.patch


 SparkJobMonitor may hang if the job state returns null all the time; we should 
 move the timeout check here to avoid it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9395) Make WAIT_SUBMISSION_TIMEOUT configurable and check timeout in SparkJobMonitor level. [Spark Branch]

2015-01-19 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14282510#comment-14282510
 ] 

Chengxiang Li commented on HIVE-9395:
-

SparkJobMonitor would hang in some cases, for example when the jobId is 
available (so SparkJobStatus never times out) but SparkJobInfo is null. The 
risk here is that looping in SparkJobMonitor while timing out in SparkJobStatus 
would add unnecessary complexity; we can add an interface to SparkJobStatus if 
we really need more information from it.

 Make WAIT_SUBMISSION_TIMEOUT configurable and check timeout in SparkJobMonitor 
 level. [Spark Branch]
 --

 Key: HIVE-9395
 URL: https://issues.apache.org/jira/browse/HIVE-9395
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5
 Attachments: HIVE-9395.1-spark.patch


 SparkJobMonitor may hang if the job state returns null all the time; we should 
 move the timeout check here to avoid it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-9370) Enable Hive on Spark for BigBench and run Query 8, the test failed [Spark Branch]

2015-01-18 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li reassigned HIVE-9370:
---

Assignee: Chengxiang Li

 Enable Hive on Spark for BigBench and run Query 8, the test failed [Spark 
 Branch]
 -

 Key: HIVE-9370
 URL: https://issues.apache.org/jira/browse/HIVE-9370
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: yuyun.chen
Assignee: Chengxiang Li

 Enabled Hive on Spark and ran BigBench Query 8, then got the following 
 exception:
 2015-01-14 11:43:46,057 INFO  [main]: impl.RemoteSparkJobStatus 
 (RemoteSparkJobStatus.java:getSparkJobInfo(143)) - Job hasn't been submitted 
 after 30s. Aborting it.
 2015-01-14 11:43:46,061 INFO  [main]: impl.RemoteSparkJobStatus 
 (RemoteSparkJobStatus.java:getSparkJobInfo(143)) - Job hasn't been submitted 
 after 30s. Aborting it.
 2015-01-14 11:43:46,061 ERROR [main]: status.SparkJobMonitor 
 (SessionState.java:printError(839)) - Status: Failed
 2015-01-14 11:43:46,062 INFO  [main]: log.PerfLogger 
 (PerfLogger.java:PerfLogEnd(148)) - /PERFLOG method=SparkRunJob 
 start=1421206996052 end=1421207026062 duration=30010 
 from=org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) - 15/01/14 11:43:46 INFO RemoteDriver: Failed 
 to run job 0a9a7782-0e0b-4561-8468-959a6d8df0a3
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) - java.lang.InterruptedException
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at java.lang.Object.wait(Native 
 Method)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 java.lang.Object.wait(Object.java:503)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.scheduler.JobWaiter.awaitResult(JobWaiter.scala:73)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:514)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1282)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1300)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1314)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1328)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.rdd.RDD.collect(RDD.scala:780)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.RangePartitioner$.sketch(Partitioner.scala:262)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.RangePartitioner.&lt;init&gt;(Partitioner.scala:124)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.rdd.OrderedRDDFunctions.sortByKey(OrderedRDDFunctions.scala:63)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:894)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:864)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.hadoop.hive.ql.exec.spark.SortByShuffler.shuffle(SortByShuffler.java:48)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.hadoop.hive.ql.exec.spark.ShuffleTran.transform(ShuffleTran.java:45)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.hadoop.hive.ql.exec.spark.SparkPlan.generateGraph(SparkPlan.java:69)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 

[jira] [Commented] (HIVE-9370) Enable Hive on Spark for BigBench and run Query 8, the test failed [Spark Branch]

2015-01-18 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14282098#comment-14282098
 ] 

Chengxiang Li commented on HIVE-9370:
-

[~xuefuz], I will work on this after HIVE-9179 is resolved.

 Enable Hive on Spark for BigBench and run Query 8, the test failed [Spark 
 Branch]
 -

 Key: HIVE-9370
 URL: https://issues.apache.org/jira/browse/HIVE-9370
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: yuyun.chen

 Enabled Hive on Spark and ran BigBench Query 8, then got the following 
 exception:
 2015-01-14 11:43:46,057 INFO  [main]: impl.RemoteSparkJobStatus 
 (RemoteSparkJobStatus.java:getSparkJobInfo(143)) - Job hasn't been submitted 
 after 30s. Aborting it.
 2015-01-14 11:43:46,061 INFO  [main]: impl.RemoteSparkJobStatus 
 (RemoteSparkJobStatus.java:getSparkJobInfo(143)) - Job hasn't been submitted 
 after 30s. Aborting it.
 2015-01-14 11:43:46,061 ERROR [main]: status.SparkJobMonitor 
 (SessionState.java:printError(839)) - Status: Failed
 2015-01-14 11:43:46,062 INFO  [main]: log.PerfLogger 
 (PerfLogger.java:PerfLogEnd(148)) - /PERFLOG method=SparkRunJob 
 start=1421206996052 end=1421207026062 duration=30010 
 from=org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) - 15/01/14 11:43:46 INFO RemoteDriver: Failed 
 to run job 0a9a7782-0e0b-4561-8468-959a6d8df0a3
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) - java.lang.InterruptedException
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at java.lang.Object.wait(Native 
 Method)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 java.lang.Object.wait(Object.java:503)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.scheduler.JobWaiter.awaitResult(JobWaiter.scala:73)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:514)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1282)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1300)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1314)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1328)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.rdd.RDD.collect(RDD.scala:780)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.RangePartitioner$.sketch(Partitioner.scala:262)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.RangePartitioner.&lt;init&gt;(Partitioner.scala:124)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.rdd.OrderedRDDFunctions.sortByKey(OrderedRDDFunctions.scala:63)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:894)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:864)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.hadoop.hive.ql.exec.spark.SortByShuffler.shuffle(SortByShuffler.java:48)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.hadoop.hive.ql.exec.spark.ShuffleTran.transform(ShuffleTran.java:45)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.hadoop.hive.ql.exec.spark.SparkPlan.generateGraph(SparkPlan.java:69)
 2015-01-14 11:43:46,073 

[jira] [Updated] (HIVE-9409) Spark branch, ClassNotFoundException: org.apache.commons.logging.impl.SLF4JLocationAwareLog occurs during some hive query case execution [Spark Branch]

2015-01-18 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9409:

Component/s: (was: spark-branch)
 Spark

 Spark branch, ClassNotFoundException: 
 org.apache.commons.logging.impl.SLF4JLocationAwareLog occurs during some hive 
 query case execution [Spark Branch]
 ---

 Key: HIVE-9409
 URL: https://issues.apache.org/jira/browse/HIVE-9409
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Affects Versions: spark-branch
 Environment: CentOS6.5  
 Java version: 1.7.0_67
Reporter: Xin Hao

 When we use the current [Spark Branch] to build the Hive package, deploy it on 
 our cluster, and execute Hive queries (e.g. BigBench cases Q10, Q18, Q19, Q27), 
 the error 'java.lang.ClassNotFoundException: 
 org.apache.commons.logging.impl.SLF4JLocationAwareLog' occurs.
 Other released Apache or CDH Hive versions (e.g. Apache Hive 0.14) do not have 
 this issue.
 As a workaround, running 'add jar /location/to/jcl-over-slf4j-1.7.5.jar' before 
 query execution avoids the error.
 The detailed diagnostic messages are below:
 Diagnostic Messages for this Task:
 Error: java.lang.RuntimeException: Failed to load plan: 
 hdfs://bhx1:8020/tmp/hive/root/4a4cbeb2-cf42-4eb7-a78a-7ecea6af2aff/hive_2015-01-17_10-45-51_360_5581900288096206774-1/-mr-10004/1c6c4667-8b81-41ed-a42e-fe099ae3379f/map.xml:
  org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: org.apache.commons.logging.impl.SLF4JLocationAwareLog
 Serialization trace:
 LOG (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 at 
 org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:431)
 at 
 org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:287)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:268)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:484)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:477)
 at 
 org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:657)
 at 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader.init(MapTask.java:169)
 at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
 Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to 
 find cl
 Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to 
 find class: org.apache.commons.logging.impl.SLF4JLocationAwareLog
 Serialization trace:
 LOG (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 

[jira] [Updated] (HIVE-9409) Spark branch, ClassNotFoundException: org.apache.commons.logging.impl.SLF4JLocationAwareLog occurs during some hive query case execution [Spark Branch]

2015-01-18 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9409:

Fix Version/s: (was: spark-branch)

 Spark branch, ClassNotFoundException: 
 org.apache.commons.logging.impl.SLF4JLocationAwareLog occurs during some hive 
 query case execution [Spark Branch]
 ---

 Key: HIVE-9409
 URL: https://issues.apache.org/jira/browse/HIVE-9409
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Affects Versions: spark-branch
 Environment: CentOS6.5  
 Java version: 1.7.0_67
Reporter: Xin Hao

 When we use the current [Spark Branch] to build the Hive package, deploy it on 
 our cluster, and execute Hive queries (e.g. BigBench cases Q10, Q18, Q19, Q27), 
 the error 'java.lang.ClassNotFoundException: 
 org.apache.commons.logging.impl.SLF4JLocationAwareLog' occurs.
 Other released Apache or CDH Hive versions (e.g. Apache Hive 0.14) do not have 
 this issue.
 As a workaround, running 'add jar /location/to/jcl-over-slf4j-1.7.5.jar' before 
 query execution avoids the error.
 The detailed diagnostic messages are below:
 ==
 Diagnostic Messages for this Task:
 Error: java.lang.RuntimeException: Failed to load plan: 
 hdfs://bhx1:8020/tmp/hive/root/4a4cbeb2-cf42-4eb7-a78a-7ecea6af2aff/hive_2015-01-17_10-45-51_360_5581900288096206774-1/-mr-10004/1c6c4667-8b81-41ed-a42e-fe099ae3379f/map.xml:
  org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: org.apache.commons.logging.impl.SLF4JLocationAwareLog
 Serialization trace:
 LOG (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 at 
 org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:431)
 at 
 org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:287)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:268)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:484)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:477)
 at 
 org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:657)
 at 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader.init(MapTask.java:169)
 at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
 Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to 
 find cl
 Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to 
 find class: org.apache.commons.logging.impl.SLF4JLocationAwareLog
 Serialization trace:
 LOG (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 

[jira] [Updated] (HIVE-9409) Spark branch, ClassNotFoundException: org.apache.commons.logging.impl.SLF4JLocationAwareLog occurs during some hive query case execution [Spark Branch]

2015-01-18 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9409:

Affects Version/s: (was: spark-branch)

 Spark branch, ClassNotFoundException: 
 org.apache.commons.logging.impl.SLF4JLocationAwareLog occurs during some hive 
 query case execution [Spark Branch]
 ---

 Key: HIVE-9409
 URL: https://issues.apache.org/jira/browse/HIVE-9409
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
 Environment: CentOS6.5  
 Java version: 1.7.0_67
Reporter: Xin Hao

 When we use the current [Spark Branch] to build the Hive package, deploy it on 
 our cluster, and execute Hive queries (e.g. BigBench cases Q10, Q18, Q19, Q27), 
 the error 'java.lang.ClassNotFoundException: 
 org.apache.commons.logging.impl.SLF4JLocationAwareLog' occurs.
 Other released Apache or CDH Hive versions (e.g. Apache Hive 0.14) do not have 
 this issue.
 As a workaround, running 'add jar /location/to/jcl-over-slf4j-1.7.5.jar' before 
 query execution avoids the error.
 The detailed diagnostic messages are below:
 ==
 Diagnostic Messages for this Task:
 Error: java.lang.RuntimeException: Failed to load plan: 
 hdfs://bhx1:8020/tmp/hive/root/4a4cbeb2-cf42-4eb7-a78a-7ecea6af2aff/hive_2015-01-17_10-45-51_360_5581900288096206774-1/-mr-10004/1c6c4667-8b81-41ed-a42e-fe099ae3379f/map.xml:
  org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: org.apache.commons.logging.impl.SLF4JLocationAwareLog
 Serialization trace:
 LOG (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 at 
 org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:431)
 at 
 org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:287)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:268)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:484)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:477)
 at 
 org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:657)
 at 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader.init(MapTask.java:169)
 at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
 Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to 
 find cl
 Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to 
 find class: org.apache.commons.logging.impl.SLF4JLocationAwareLog
 Serialization trace:
 LOG (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 

[jira] [Created] (HIVE-9395) Make WAIT_SUBMISSION_TIMEOUT configurable and check timeout in SparkJobMonitor level.[Spark Branch]

2015-01-16 Thread Chengxiang Li (JIRA)
Chengxiang Li created HIVE-9395:
---

 Summary: Make WAIT_SUBMISSION_TIMEOUT configurable and check 
timeout in SparkJobMonitor level.[Spark Branch]
 Key: HIVE-9395
 URL: https://issues.apache.org/jira/browse/HIVE-9395
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li


SparkJobMonitor may hang if the job state returns null every time; we should 
move the timeout check here to avoid it.
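
A hedged, minimal sketch of the idea follows. This is NOT the actual Hive code; 
the interface, method names, return codes, and poll interval are assumptions 
used only to illustrate the monitor-level check.

{code:java}
// Hypothetical sketch of a monitor-level submission-timeout check.
public class SubmissionTimeoutSketch {

  /** Stand-in for the remote job status; getState() is null until submission. */
  interface JobStatus {
    String getState();
  }

  static int monitorJob(JobStatus status, long timeoutMs) throws InterruptedException {
    long start = System.currentTimeMillis();
    while (true) {
      String state = status.getState();
      if (state == null) {
        // Without this check, a state that stays null forever hangs the monitor.
        if (System.currentTimeMillis() - start > timeoutMs) {
          System.err.println("Job hasn't been submitted after "
              + timeoutMs / 1000 + "s. Aborting it.");
          return 2; // failed
        }
      } else if ("SUCCEEDED".equals(state)) {
        return 0; // finished (a real monitor would also handle FAILED etc.)
      }
      Thread.sleep(1000); // poll interval, illustrative only
    }
  }

  public static void main(String[] args) throws InterruptedException {
    // Demo: a status that never leaves null aborts after the timeout (3s here).
    System.out.println("monitor returned " + monitorJob(() -> null, 3000));
  }
}
{code}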



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9370) Enable Hive on Spark for BigBench and run Query 8, the test failed [Spark Branch]

2015-01-16 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14280027#comment-14280027
 ] 

Chengxiang Li commented on HIVE-9370:
-

HIVE-9179 should help here; we can add a listener to do this.

 Enable Hive on Spark for BigBench and run Query 8, the test failed [Spark 
 Branch]
 -

 Key: HIVE-9370
 URL: https://issues.apache.org/jira/browse/HIVE-9370
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: yuyun.chen

 Enabled Hive on Spark and ran BigBench Query 8, then got the following 
 exception:
 2015-01-14 11:43:46,057 INFO  [main]: impl.RemoteSparkJobStatus 
 (RemoteSparkJobStatus.java:getSparkJobInfo(143)) - Job hasn't been submitted 
 after 30s. Aborting it.
 2015-01-14 11:43:46,061 INFO  [main]: impl.RemoteSparkJobStatus 
 (RemoteSparkJobStatus.java:getSparkJobInfo(143)) - Job hasn't been submitted 
 after 30s. Aborting it.
 2015-01-14 11:43:46,061 ERROR [main]: status.SparkJobMonitor 
 (SessionState.java:printError(839)) - Status: Failed
 2015-01-14 11:43:46,062 INFO  [main]: log.PerfLogger 
 (PerfLogger.java:PerfLogEnd(148)) - /PERFLOG method=SparkRunJob 
 start=1421206996052 end=1421207026062 duration=30010 
 from=org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) - 15/01/14 11:43:46 INFO RemoteDriver: Failed 
 to run job 0a9a7782-0e0b-4561-8468-959a6d8df0a3
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) - java.lang.InterruptedException
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at java.lang.Object.wait(Native 
 Method)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 java.lang.Object.wait(Object.java:503)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.scheduler.JobWaiter.awaitResult(JobWaiter.scala:73)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:514)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1282)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1300)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1314)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1328)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.rdd.RDD.collect(RDD.scala:780)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.RangePartitioner$.sketch(Partitioner.scala:262)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.RangePartitioner.init(Partitioner.scala:124)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.rdd.OrderedRDDFunctions.sortByKey(OrderedRDDFunctions.scala:63)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:894)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:864)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.hadoop.hive.ql.exec.spark.SortByShuffler.shuffle(SortByShuffler.java:48)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.hadoop.hive.ql.exec.spark.ShuffleTran.transform(ShuffleTran.java:45)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.hadoop.hive.ql.exec.spark.SparkPlan.generateGraph(SparkPlan.java:69)
 2015-01-14 11:43:46,073 

[jira] [Commented] (HIVE-9342) add num-executors / executor-cores / executor-memory option support for hive on spark in Yarn mode

2015-01-14 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14276646#comment-14276646
 ] 

Chengxiang Li commented on HIVE-9342:
-

Thanks, [~fangxi.yin]. The patch looks good to me.
[~xuefuz], you may want to take a look at this as well.

 add num-executors / executor-cores / executor-memory option support for hive 
 on spark in Yarn mode
 --

 Key: HIVE-9342
 URL: https://issues.apache.org/jira/browse/HIVE-9342
 Project: Hive
  Issue Type: Improvement
  Components: spark-branch
Affects Versions: spark-branch
Reporter: Pierre Yin
Priority: Minor
  Labels: spark
 Fix For: spark-branch

 Attachments: HIVE-9342.1-spark.patch, HIVE-9342.2-spark.patch


 When I run Hive on Spark in YARN mode, I want to control some YARN options, 
 such as --num-executors, --executor-cores, and --executor-memory.
 We can append these options to argv in SparkClientImpl.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9370) Enable Hive on Spark for BigBench and run Query 8, the test failed [Spark Branch]

2015-01-13 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9370:

Summary: Enable Hive on Spark for BigBench and run Query 8, the test failed 
[Spark Branch]  (was: Enable Hive on Spark for BigBench and run Query 8, the 
test failed )

 Enable Hive on Spark for BigBench and run Query 8, the test failed [Spark 
 Branch]
 -

 Key: HIVE-9370
 URL: https://issues.apache.org/jira/browse/HIVE-9370
 Project: Hive
  Issue Type: Bug
  Components: Spark
Reporter: yuyun.chen

 Enabled Hive on Spark and ran BigBench Query 8, then got the following 
 exception:
 2015-01-14 11:43:46,057 INFO  [main]: impl.RemoteSparkJobStatus 
 (RemoteSparkJobStatus.java:getSparkJobInfo(143)) - Job hasn't been submitted 
 after 30s. Aborting it.
 2015-01-14 11:43:46,061 INFO  [main]: impl.RemoteSparkJobStatus 
 (RemoteSparkJobStatus.java:getSparkJobInfo(143)) - Job hasn't been submitted 
 after 30s. Aborting it.
 2015-01-14 11:43:46,061 ERROR [main]: status.SparkJobMonitor 
 (SessionState.java:printError(839)) - Status: Failed
 2015-01-14 11:43:46,062 INFO  [main]: log.PerfLogger 
 (PerfLogger.java:PerfLogEnd(148)) - /PERFLOG method=SparkRunJob 
 start=1421206996052 end=1421207026062 duration=30010 
 from=org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) - 15/01/14 11:43:46 INFO RemoteDriver: Failed 
 to run job 0a9a7782-0e0b-4561-8468-959a6d8df0a3
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) - java.lang.InterruptedException
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at java.lang.Object.wait(Native 
 Method)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 java.lang.Object.wait(Object.java:503)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.scheduler.JobWaiter.awaitResult(JobWaiter.scala:73)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:514)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1282)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1300)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1314)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1328)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.rdd.RDD.collect(RDD.scala:780)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.RangePartitioner$.sketch(Partitioner.scala:262)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.RangePartitioner.init(Partitioner.scala:124)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.rdd.OrderedRDDFunctions.sortByKey(OrderedRDDFunctions.scala:63)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:894)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:864)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.hadoop.hive.ql.exec.spark.SortByShuffler.shuffle(SortByShuffler.java:48)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.hadoop.hive.ql.exec.spark.ShuffleTran.transform(ShuffleTran.java:45)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 

[jira] [Updated] (HIVE-9370) Enable Hive on Spark for BigBench and run Query 8, the test failed [Spark Branch]

2015-01-13 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9370:

Issue Type: Sub-task  (was: Bug)
Parent: HIVE-7292

 Enable Hive on Spark for BigBench and run Query 8, the test failed [Spark 
 Branch]
 -

 Key: HIVE-9370
 URL: https://issues.apache.org/jira/browse/HIVE-9370
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: yuyun.chen

 Enabled Hive on Spark and ran BigBench Query 8, then got the following 
 exception:
 2015-01-14 11:43:46,057 INFO  [main]: impl.RemoteSparkJobStatus 
 (RemoteSparkJobStatus.java:getSparkJobInfo(143)) - Job hasn't been submitted 
 after 30s. Aborting it.
 2015-01-14 11:43:46,061 INFO  [main]: impl.RemoteSparkJobStatus 
 (RemoteSparkJobStatus.java:getSparkJobInfo(143)) - Job hasn't been submitted 
 after 30s. Aborting it.
 2015-01-14 11:43:46,061 ERROR [main]: status.SparkJobMonitor 
 (SessionState.java:printError(839)) - Status: Failed
 2015-01-14 11:43:46,062 INFO  [main]: log.PerfLogger 
 (PerfLogger.java:PerfLogEnd(148)) - /PERFLOG method=SparkRunJob 
 start=1421206996052 end=1421207026062 duration=30010 
 from=org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) - 15/01/14 11:43:46 INFO RemoteDriver: Failed 
 to run job 0a9a7782-0e0b-4561-8468-959a6d8df0a3
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) - java.lang.InterruptedException
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at java.lang.Object.wait(Native 
 Method)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 java.lang.Object.wait(Object.java:503)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.scheduler.JobWaiter.awaitResult(JobWaiter.scala:73)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:514)
 2015-01-14 11:43:46,071 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1282)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1300)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1314)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.SparkContext.runJob(SparkContext.scala:1328)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.rdd.RDD.collect(RDD.scala:780)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.RangePartitioner$.sketch(Partitioner.scala:262)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.RangePartitioner.init(Partitioner.scala:124)
 2015-01-14 11:43:46,072 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.rdd.OrderedRDDFunctions.sortByKey(OrderedRDDFunctions.scala:63)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:894)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:864)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.hadoop.hive.ql.exec.spark.SortByShuffler.shuffle(SortByShuffler.java:48)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.hadoop.hive.ql.exec.spark.ShuffleTran.transform(ShuffleTran.java:45)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 (SparkClientImpl.java:run(436)) -at 
 org.apache.hadoop.hive.ql.exec.spark.SparkPlan.generateGraph(SparkPlan.java:69)
 2015-01-14 11:43:46,073 INFO  [stderr-redir-1]: client.SparkClientImpl 
 

[jira] [Commented] (HIVE-9178) Create a separate API for remote Spark Context RPC other than job submission [Spark Branch]

2015-01-13 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14276425#comment-14276425
 ] 

Chengxiang Li commented on HIVE-9178:
-

[~vanzin], how do we send the SyncJobRequest result back to the SparkClient? I 
don't see any related code in RemoteDriver.

 Create a separate API for remote Spark Context RPC other than job submission 
 [Spark Branch]
 ---

 Key: HIVE-9178
 URL: https://issues.apache.org/jira/browse/HIVE-9178
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Marcelo Vanzin
 Attachments: HIVE-9178.1-spark.patch, HIVE-9178.1-spark.patch, 
 HIVE-9178.2-spark.patch


 Based on discussions in HIVE-8972, it seems to make sense to create a separate 
 API for RPCs, such as addJar and getExecutorCounter. These jobs are different 
 from a query submission in that they don't need to be queued in the backend 
 and can be executed right away.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9342) add num-executors / executor-cores / executor-memory option support for hive on spark in Yarn mode

2015-01-13 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275180#comment-14275180
 ] 

Chengxiang Li commented on HIVE-9342:
-

Thanks for the verification, [~fangxi.yin]. Given your finding, rather than 
translating the executor cores/instances/memory configuration into spark-submit 
command-line options, it seems to make more sense to have Spark itself support 
executor cores/instances/memory configuration in yarn-cluster mode, if possible.
If we resolve this in Hive instead, here are a few suggestions about the patch 
(see the sketch after this list for #2):
# use spark.executor.memory/cores/instances instead of 
spark.yarn.executor.memory/cores/instances; the former configurations already 
exist in Spark, and we'd better keep consistent with them.
# add a condition check on spark.master, as we only need to translate these 
into spark-submit command-line options in yarn-cluster mode.
# the patch is not well formatted.
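
To make suggestion #2 concrete, here is a hedged sketch. It is not the actual 
patch; the class and method names, the use of a plain Map for the 
configuration, and the option/key pairing are assumptions for illustration.

{code:java}
// Hypothetical sketch of suggestion #2: translate the executor settings into
// spark-submit command-line options only when spark.master is yarn-cluster.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class YarnClusterArgvSketch {

  static List<String> buildExecutorArgv(Map<String, String> conf) {
    List<String> argv = new ArrayList<>();
    if ("yarn-cluster".equals(conf.get("spark.master"))) {
      addOption(argv, conf, "spark.executor.memory",    "--executor-memory");
      addOption(argv, conf, "spark.executor.cores",     "--executor-cores");
      addOption(argv, conf, "spark.executor.instances", "--num-executors");
    }
    return argv;
  }

  // Appends "--option value" only when the corresponding key is configured.
  private static void addOption(List<String> argv, Map<String, String> conf,
                                String key, String option) {
    String value = conf.get(key);
    if (value != null) {
      argv.add(option);
      argv.add(value);
    }
  }

  public static void main(String[] args) {
    Map<String, String> conf = new HashMap<>();
    conf.put("spark.master", "yarn-cluster");
    conf.put("spark.executor.memory", "8g");
    System.out.println(buildExecutorArgv(conf)); // [--executor-memory, 8g]
  }
}
{code}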


 add num-executors / executor-cores / executor-memory option support for hive 
 on spark in Yarn mode
 --

 Key: HIVE-9342
 URL: https://issues.apache.org/jira/browse/HIVE-9342
 Project: Hive
  Issue Type: Improvement
  Components: spark-branch
Affects Versions: spark-branch
Reporter: Pierre Yin
Priority: Minor
  Labels: spark
 Fix For: spark-branch

 Attachments: HIVE-9342.1-spark.patch


 When I run Hive on Spark in YARN mode, I want to control some YARN options, 
 such as --num-executors, --executor-cores, and --executor-memory.
 We can append these options to argv in SparkClientImpl.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9337) Move more hive.spark.* configurations to HiveConf

2015-01-12 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14273398#comment-14273398
 ] 

Chengxiang Li commented on HIVE-9337:
-

[~szehon], yes, I agree with you that we should move all the related 
configuration to HiveConf. I think one possible gap here is that 
RpcConfiguration is used on the RemoteDriver side as well, so we need to push 
these configurations to the RemoteDriver when launching the RSC process.

 Move more hive.spark.* configurations to HiveConf
 -

 Key: HIVE-9337
 URL: https://issues.apache.org/jira/browse/HIVE-9337
 Project: Hive
  Issue Type: Task
  Components: Spark
Reporter: Szehon Ho

 Some hive.spark configurations have been added to HiveConf, but there are 
 some like hive.spark.log.dir that are not there.
 Also some configurations in RpcConfiguration.java might be eligible to be 
 moved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9178) Create a separate API for remote Spark Context RPC other than job submission [Spark Branch]

2015-01-12 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14274807#comment-14274807
 ] 

Chengxiang Li commented on HIVE-9178:
-

The patch mostly looks good to me; I left a comment about the API style on the 
RB.

 Create a separate API for remote Spark Context RPC other than job submission 
 [Spark Branch]
 ---

 Key: HIVE-9178
 URL: https://issues.apache.org/jira/browse/HIVE-9178
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Marcelo Vanzin
 Attachments: HIVE-9178.1-spark.patch


 Based on discussions in HIVE-8972, it seems to make sense to create a separate 
 API for RPCs, such as addJar and getExecutorCounter. These jobs are different 
 from a query submission in that they don't need to be queued in the backend 
 and can be executed right away.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9342) add num-executors / executor-cores / executor-memory option support for hive on spark in Yarn mode

2015-01-12 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14274831#comment-14274831
 ] 

Chengxiang Li commented on HIVE-9342:
-

Thanks, [~fangxi.yin], for bringing this up. When we launch the Spark client 
with the spark-submit script, it loads configuration in two ways: from a Spark 
configuration file and from command-line options. Hive on Spark actually writes 
all Spark-related configurations into a property file and passes it to 
spark-submit's --properties-file option (see the sketch below). For the 3 
executor options you mentioned, there should already be corresponding 
configurations:
# --num-executors - spark.executor.instances
# --executor-cores - spark.executor.cores
# --executor-memory - spark.executor.memory

So theoretically, you can configure these properties through the Hive 
configuration file or the CLI, though it's possible that they do not take 
effect in certain deploy modes due to the Spark implementation. I think we 
should verify whether they work in yarn-client or yarn-cluster mode first.
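
As a minimal, self-contained illustration of that flow (not Hive's actual code; 
the class name, temp-file location, and property values are assumptions):

{code:java}
// Hedged sketch: collect spark.executor.* settings into a properties file
// that spark-submit can consume via its --properties-file option.
import java.io.File;
import java.io.FileOutputStream;
import java.util.Properties;

public class PropertiesFileSketch {
  public static void main(String[] args) throws Exception {
    Properties sparkConf = new Properties();
    // These keys correspond to --num-executors, --executor-cores and
    // --executor-memory; the values here are purely illustrative.
    sparkConf.setProperty("spark.executor.instances", "40");
    sparkConf.setProperty("spark.executor.cores", "4");
    sparkConf.setProperty("spark.executor.memory", "8g");

    File propsFile = File.createTempFile("spark-submit.", ".properties");
    try (FileOutputStream out = new FileOutputStream(propsFile)) {
      sparkConf.store(out, "Spark configuration written by Hive on Spark (sketch)");
    }
    // The file is then handed to spark-submit, e.g.:
    // spark-submit --properties-file <propsFile> --class ... <hive-exec jar> ...
    System.out.println("--properties-file " + propsFile.getAbsolutePath());
  }
}
{code}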

 add num-executors / executor-cores / executor-memory option support for hive 
 on spark in Yarn mode
 --

 Key: HIVE-9342
 URL: https://issues.apache.org/jira/browse/HIVE-9342
 Project: Hive
  Issue Type: Improvement
  Components: spark-branch
Affects Versions: spark-branch
Reporter: Pierre Yin
Priority: Minor
  Labels: spark
 Fix For: spark-branch

 Attachments: HIVE-9342.1-spark.patch


 When I run hive on spark with Yarn mode, I want to control some yarn option, 
 such as --num-executors, --executor-cores, --executor-memory.
 We can append these options into argv in SparkClientImpl.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9326) BaseProtocol.Error failed to deserialize due to NPE.[Spark Branch]

2015-01-09 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9326:

Status: Patch Available  (was: Open)

 BaseProtocol.Error failed to deserialize due to NPE.[Spark Branch]
 --

 Key: HIVE-9326
 URL: https://issues.apache.org/jira/browse/HIVE-9326
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5
 Attachments: HIVE-9326.1-spark.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9326) BaseProtocol.Error failed to deserialize due to NPE.[Spark Branch]

2015-01-09 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9326:

Description: Throwables.getStackTraceAsString(cause) throws an NPE if cause is 
null.
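
For illustration, a minimal guard of the kind this description implies. This is 
not the actual HIVE-9326 patch; the class name, method, and fallback string are 
assumptions.

{code:java}
// Guava's Throwables.getStackTraceAsString(null) throws a NullPointerException,
// so check the cause before stringifying it (illustrative sketch only).
import com.google.common.base.Throwables;

public class NullCauseGuardSketch {
  static String stackTraceOf(Throwable cause) {
    return cause != null ? Throwables.getStackTraceAsString(cause) : "(no cause)";
  }

  public static void main(String[] args) {
    System.out.println(stackTraceOf(null));                      // "(no cause)"
    System.out.println(stackTraceOf(new RuntimeException("x"))); // full trace
  }
}
{code}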

 BaseProtocol.Error failed to deserialize due to NPE.[Spark Branch]
 --

 Key: HIVE-9326
 URL: https://issues.apache.org/jira/browse/HIVE-9326
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5
 Attachments: HIVE-9326.1-spark.patch


 Throwables.getStackTraceAsString(cause) throws an NPE if cause is null.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9326) BaseProtocol.Error failed to deserialize due to NPE.[Spark Branch]

2015-01-09 Thread Chengxiang Li (JIRA)
Chengxiang Li created HIVE-9326:
---

 Summary: BaseProtocol.Error failed to deserialize due to 
NPE.[Spark Branch]
 Key: HIVE-9326
 URL: https://issues.apache.org/jira/browse/HIVE-9326
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9326) BaseProtocol.Error failed to deserialize due to NPE.[Spark Branch]

2015-01-09 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9326:

Attachment: HIVE-9326.1-spark.patch

 BaseProtocol.Error failed to deserialize due to NPE.[Spark Branch]
 --

 Key: HIVE-9326
 URL: https://issues.apache.org/jira/browse/HIVE-9326
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M5
 Attachments: HIVE-9326.1-spark.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9323) Merge from trunk to spark 1/8/2015

2015-01-08 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14270719#comment-14270719
 ] 

Chengxiang Li commented on HIVE-9323:
-

[~Szehon], I took a look at the hive log; the failure reason is quite strange 
and a little different from HIVE-9094. HIVE-9094 failed on a get-executor-count 
timeout, because the Spark cluster launch time was longer than the Spark client 
future timeout interval (5s, and 30s after HIVE-9094), while this timeout 
failure is due to the RemoteDriver not responding in time (the Spark client 
waits 10s for the RemoteDriver to register).
From hive.log, the RemoteDriver process is launched at 2015-01-08 18:43:03,938:
{noformat}
2015-01-08 18:43:03,938 DEBUG [main]: client.SparkClientImpl 
(SparkClientImpl.java:startDriver(298)) - Running client driver with argv: 
/home/hiveptest/54.177.142.77-hiveptest-1/apache-svn-spark-source/itests/qtest-spark/../../itests/qtest-spark/target/spark/bin/spark-submit
 --properties-file 
/home/hiveptest/54.177.142.77-hiveptest-1/apache-svn-spark-source/itests/qtest-spark/target/tmp/spark-submit.1097041260552550316.properties
 --class org.apache.hive.spark.client.RemoteDriver 
/home/hiveptest/54.177.142.77-hiveptest-1/maven/org/apache/hive/hive-exec/0.15.0-SNAPSHOT/hive-exec-0.15.0-SNAPSHOT.jar
 --remote-host ip-10-228-130-250.us-west-1.compute.internal --remote-port 40406
{noformat}
In spark.log, the RemoteDriver registers back to the SparkClient at 2015-01-08 
18:43:13,891, which is just past the 10s timeout interval:
{noformat}
2015-01-08 18:43:13,891 DEBUG [Driver-RPC-Handler-0]: rpc.RpcDispatcher 
(RpcDispatcher.java:registerRpc(185)) - [DriverProtocol] Registered outstanding 
rpc 0 (org.apache.hive.spark.client.rpc.Rpc$Hello).
{noformat}
The strange thing is that the RemoteDriver process is unusually slow: it is 
launched at 2015-01-08 18:43:03,938, but we get its first debug output at 
2015-01-08 18:43:13,161, and the RemoteDriver hardly does anything before that 
line:
{noformat}
2015-01-08 18:43:13,161 INFO  [main]: client.RemoteDriver 
(RemoteDriver.java:init(118)) - Connecting to: 
ip-10-228-130-250.us-west-1.compute.internal:40406
{noformat}
I'm not sure why this happens, but it should be quite a rare case; we can check 
whether it happens again. Besides expanding the timeout interval, I don't have 
a good solution for this issue now.

 Merge from trunk to spark 1/8/2015
 --

 Key: HIVE-9323
 URL: https://issues.apache.org/jira/browse/HIVE-9323
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Affects Versions: spark-branch
Reporter: Szehon Ho
Assignee: Szehon Ho
 Fix For: spark-branch

 Attachments: HIVE-9323-spark.patch, HIVE-9323.2-spark.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9288) TODO cleanup task1.[Spark Branch]

2015-01-07 Thread Chengxiang Li (JIRA)
Chengxiang Li created HIVE-9288:
---

 Summary: TODO cleanup task1.[Spark Branch]
 Key: HIVE-9288
 URL: https://issues.apache.org/jira/browse/HIVE-9288
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
Priority: Minor


Clean up the TODOs in the job status related classes, where possible, before 
merging back to trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9288) TODO cleanup task1.[Spark Branch]

2015-01-07 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-9288:

Attachment: HIVE-9288.1-spark.patch

 TODO cleanup task1.[Spark Branch]
 -

 Key: HIVE-9288
 URL: https://issues.apache.org/jira/browse/HIVE-9288
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
Priority: Minor
  Labels: Spark-M5
 Attachments: HIVE-9288.1-spark.patch


 Clean up the TODOs in the job status related classes, where possible, before 
 merging back to trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9288) TODO cleanup task1.[Spark Branch]

2015-01-07 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14267445#comment-14267445
 ] 

Chengxiang Li commented on HIVE-9288:
-

Removed the following 3 TODOs; I explain why on the RB.
# TODO: expose job status? JobHandleImpl.java 
/spark-client/src/main/java/org/apache/hive/spark/client line 121 Java Task
# TODO: are stage IDs unique? Otherwise this won't work. RemoteDriver.java 
/spark-client/src/main/java/org/apache/hive/spark/client line 374 Java Task
# TODO: expose job status? JobHandle.java 
/spark-client/src/main/java/org/apache/hive/spark/client line 57 Java Task

There are 2 remaining related TODOs, which need further discussion.
# TODO: implement implicit AsyncRDDActions conversion instead of jc.monitor()? 
RemoteDriver.java /spark-client/src/main/java/org/apache/hive/spark/client line 
410 Java Task
# TODO: how to handle stage failures? RemoteDriver.java 
/spark-client/src/main/java/org/apache/hive/spark/client line 411 Java Task


 TODO cleanup task1.[Spark Branch]
 -

 Key: HIVE-9288
 URL: https://issues.apache.org/jira/browse/HIVE-9288
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
Priority: Minor
  Labels: Spark-M5
 Attachments: HIVE-9288.1-spark.patch


 Clean up the TODOs in the job status related classes, where possible, before 
 merging back to trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

