[jira] [Updated] (HIVE-12185) setAutoCommit should only fail if auto commit is being disabled

2015-10-15 Thread Varadharajan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varadharajan updated HIVE-12185:

Attachment: HIVE-12185.1.patch

> setAutoCommit should only fail if auto commit is being disabled
> ---
>
> Key: HIVE-12185
> URL: https://issues.apache.org/jira/browse/HIVE-12185
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 1.2.0, 1.1.0, 1.2.1
>Reporter: Varadharajan
>Assignee: Varadharajan
>Priority: Minor
> Attachments: HIVE-12185.1.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Auto commit is enabled by default in Hive, as documented at 
> https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions#HiveTransactions-Limitations.
>  In accordance with this, the HiveConnection.java getAutoCommit method returns 
> true. Similarly, setAutoCommit(true) should pass silently since auto commit is 
> already enabled, and a SQLException should be thrown if it is called with false.
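A minimal sketch of the proposed semantics (hypothetical class name; the real change would live in org.apache.hive.jdbc.HiveConnection, not shown here):

```java
import java.sql.SQLException;

public class AutoCommitSketch {
    // Hive keeps auto-commit permanently enabled.
    private final boolean autoCommit = true;

    public boolean getAutoCommit() {
        return autoCommit;
    }

    public void setAutoCommit(boolean enable) throws SQLException {
        // Enabling is a no-op because auto-commit is already on;
        // only an attempt to disable it should fail.
        if (!enable) {
            throw new SQLException("Disabling auto-commit is not supported");
        }
    }
}
```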



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12179) Add option to not add spark-assembly.jar to Hive classpath

2015-10-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958798#comment-14958798
 ] 

Hive QA commented on HIVE-12179:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/1277/HIVE-12179.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 9694 tests executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5660/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5660/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5660/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 1277 - PreCommit-HIVE-TRUNK-Build

> Add option to not add spark-assembly.jar to Hive classpath
> --
>
> Key: HIVE-12179
> URL: https://issues.apache.org/jira/browse/HIVE-12179
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-12179.1.patch, HIVE-12179.2.patch
>
>
> After running the following Hive script:
> {noformat}
> add jar hdfs:///tmp/junit-4.11.jar;
> show tables;
> {noformat}
> I can see the following lines getting printed to stdout when Hive exits:
> {noformat}
> WARN: The method class 
> org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked.
> WARN: Please see http://www.slf4j.org/codes.html#release for an explanation.
> {noformat}
> Also seeing the following warnings in stderr:
> {noformat}
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/2.3.3.0-2981/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/2.3.3.0-2981/spark/lib/spark-assembly-1.4.1.2.3.3.0-2981-hadoop2.7.1.2.3.3.0-2981.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/2.3.3.0-2981/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/2.3.3.0-2981/spark/lib/spark-assembly-1.4.1.2.3.3.0-2981-hadoop2.7.1.2.3.3.0-2981.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {noformat}
> It looks like this is due to the addition of the shaded spark-assembly.jar to 
> the classpath, which contains classes from jcl-over-slf4j.jar (which is 
> causing the stdout messages) and slf4j-log4j12.jar.
> Removing spark-assembly.jar from being added to the classpath causes these 
> messages to go away. It would be good to have a way to specify that Hive not 
> add spark-assembly.jar to the class path.





[jira] [Commented] (HIVE-11531) Add mysql-style LIMIT support to Hive, or improve ROW_NUMBER performance-wise

2015-10-15 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958607#comment-14958607
 ] 

Jesus Camacho Rodriguez commented on HIVE-11531:


Awesome, thanks [~huizane]!

The Sort operator for CBO (HiveSortLimit) already has support for offset and 
limit; most rules involving Limit (maybe all) should support fetch too.

Integration with CBO would imply 1) setting the offset for the Calcite operator 
in SemanticAnalyzer, 2) translating the offset contained in the Calcite 
operator back in ASTConverter, and 3) updating any rule that needs changes to 
work properly with offset.

I have seen in the patch that 1) is already done. [~huizane], could you 
complete 2) and add tests to offset_limit.q with CBO on to verify that it is 
working properly?
The problem with implementing only 1) is that we would be reading the offset 
from the query and setting it in the HiveSortLimit operator, but unless 2) is 
completed, we would lose it when we translate the Calcite operator back.

FYI [~jpullokkaran]

> Add mysql-style LIMIT support to Hive, or improve ROW_NUMBER performance-wise
> -
>
> Key: HIVE-11531
> URL: https://issues.apache.org/jira/browse/HIVE-11531
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Hui Zheng
> Attachments: HIVE-11531.WIP.1.patch
>
>
> For any UIs that involve pagination, it is useful to issue queries in the 
> form SELECT ... LIMIT X,Y where X,Y are coordinates inside the result to be 
> paginated (which can be extremely large by itself). At present, ROW_NUMBER 
> can be used to achieve this effect, but optimizations for LIMIT such as TopN 
> in ReduceSink do not apply to ROW_NUMBER. We can add first-class support for 
> "skip" to the existing LIMIT, or improve ROW_NUMBER for better performance.
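The LIMIT X,Y pagination described above can be sketched in plain Java (hypothetical helper, not Hive code): skip the first X rows of the result, then return at most Y.

```java
import java.util.ArrayList;
import java.util.List;

public class LimitOffsetSketch {
    // Emulates SELECT ... LIMIT offset, count over an in-memory result set:
    // skip the first `offset` rows, then collect up to `count` rows.
    public static <T> List<T> limit(List<T> rows, int offset, int count) {
        List<T> page = new ArrayList<>();
        for (int i = offset; i < rows.size() && page.size() < count; i++) {
            page.add(rows.get(i));
        }
        return page;
    }
}
```

An optimizer-friendly LIMIT with offset lets TopN-style pruning keep only offset+count rows per reducer, whereas ROW_NUMBER must materialize and number the full result first.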





[jira] [Updated] (HIVE-11693) CommonMergeJoinOperator throws exception with tez

2015-10-15 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HIVE-11693:

Attachment: HIVE-11693.1.patch

Attaching the query (column names and table names modified) that caused the issue.

{noformat}

SELECT v1.rid,
   v1.`jeid`,
   v1.`jelid`,
   v1.`jeldesc`,
   v1.`jehdesc`,
   v1.`dorc`,
   v1.`la`,
   `lad`,
   `lac`,
   `lacc`,
   v1.`ejei`,
   `activity`,
   v1.`cacid`,
   v1.`cuid`,
   `p`,
   `fn`,
   `ln`,
   `n`,
   `pd`,
   `pr`,
   `pt`
FROM v_flat_journal_entry_ext v1
JOIN
  (SELECT `jeid`,
  `jelid`,
  `ejeid`,
  `cuid`,
  `pid`,
  count(*) AS cnd
   FROM v_flat_journal_entry_ext
   GROUP BY `jeid`,
`jelid`,
`ejeid`,
`cuid`,
`pid`
   HAVING cnd > 1) test ON (test.`pid` = v1.`pid`);   
{noformat}

Basically, CommonMergeJoinOperator::fetchNextGroup had the issue.
E.g., it entered the following condition in the code (foundNextKeyGroup[t]=false, 
fetchDone[t]=false, t=1, orderLen=2, posBigTable=0).

The fix is in ReduceRecordSource, which deals with the readers. Need to check 
whether the same has to be done in MapRecordSource.

[~gopalv], [~sershe] - Please review when you find time.

> CommonMergeJoinOperator throws exception with tez
> -
>
> Key: HIVE-11693
> URL: https://issues.apache.org/jira/browse/HIVE-11693
> Project: Hive
>  Issue Type: Bug
>Reporter: Rajesh Balamohan
> Attachments: HIVE-11693.1.patch
>
>
> Got this when executing a simple query with latest hive build + tez latest 
> version.
> {noformat}
> Error: Failure while running task: 
> attempt_1439860407967_0291_2_03_45_0:java.lang.RuntimeException: 
> java.lang.RuntimeException: Hive Runtime Error while closing operators: 
> java.lang.RuntimeException: java.io.IOException: Please check if you are 
> invoking moveToNext() even after it returned false.
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:171)
> at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:137)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:349)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:71)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:60)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:60)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:35)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: Hive Runtime Error while closing 
> operators: java.lang.RuntimeException: java.io.IOException: Please check if 
> you are invoking moveToNext() even after it returned false.
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.close(ReduceRecordProcessor.java:316)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:162)
> ... 14 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.RuntimeException: java.io.IOException: Please check if you are 
> invoking moveToNext() even after it returned false.
> at 
> org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.fetchOneRow(CommonMergeJoinOperator.java:412)
> at 
> org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.fetchNextGroup(CommonMergeJoinOperator.java:375)
> at 
> org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.doFirstFetchIfNeeded(CommonMergeJoinOperator.java:482)
> at 
> org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.joinFinalLeftData(CommonMergeJoinOperator.java:434)
> at 
> org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.closeOp(CommonMergeJoinOperator.java:384)
> at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:616)
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.close(ReduceRecordProcessor.java:292)
> ... 15 more
> Caused by: java.lang.RuntimeException: java.io.IOException: Please check if 
> you are invoking moveToNext() even after it returned false.
> at 
> 

[jira] [Commented] (HIVE-12074) Conditionally turn off hybrid grace hash join based on est. data size, etc

2015-10-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958702#comment-14958702
 ] 

Hive QA commented on HIVE-12074:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12766650/HIVE-12074.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 72 failed/errored test(s), 9695 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_hybridgrace_hashjoin_3
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_bucket_map_join_tez1
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_bucket_map_join_tez2
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_dynamic_partition_pruning
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_dynamic_partition_pruning_2
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_hybridgrace_hashjoin_1
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_hybridgrace_hashjoin_2
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_lvj_mapjoin
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_mapjoin_decimal
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_mrr
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_bmj_schema_evolution
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_join_result_complex
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_smb_main
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_union
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vectorized_dynamic_partition_pruning
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_join1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_join29
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_join30
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_10
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_11
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_12
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_13
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_14
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_15
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_3
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_4
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_5
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_7
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_8
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_9
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_bucket_map_join_tez1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_bucket_map_join_tez2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_correlationoptimizer1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cross_join
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cross_product_check_2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynamic_partition_pruning
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynamic_partition_pruning_2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_hybridgrace_hashjoin_1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_hybridgrace_hashjoin_2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_lvj_mapjoin
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_mapjoin_decimal
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_mapjoin_mapjoin
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_mrr
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_tez_bmj_schema_evolution
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_tez_join_result_complex
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_tez_smb_main
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_tez_union
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_unionDistinct_1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_binary_join_groupby
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_char_mapjoin1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_decimal_mapjoin
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_inner_join

[jira] [Commented] (HIVE-11531) Add mysql-style LIMIT support to Hive, or improve ROW_NUMBER performance-wise

2015-10-15 Thread Hui Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958715#comment-14958715
 ] 

Hui Zheng commented on HIVE-11531:
--

Thanks [~sershe] and [~jcamachorodriguez] for your instructions.
I will continue working on it.

> Add mysql-style LIMIT support to Hive, or improve ROW_NUMBER performance-wise
> -
>
> Key: HIVE-11531
> URL: https://issues.apache.org/jira/browse/HIVE-11531
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Hui Zheng
> Attachments: HIVE-11531.WIP.1.patch
>
>
> For any UIs that involve pagination, it is useful to issue queries in the 
> form SELECT ... LIMIT X,Y where X,Y are coordinates inside the result to be 
> paginated (which can be extremely large by itself). At present, ROW_NUMBER 
> can be used to achieve this effect, but optimizations for LIMIT such as TopN 
> in ReduceSink do not apply to ROW_NUMBER. We can add first-class support for 
> "skip" to the existing LIMIT, or improve ROW_NUMBER for better performance.





[jira] [Commented] (HIVE-11519) kryo.KryoException: Encountered unregistered

2015-10-15 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958785#comment-14958785
 ] 

Feng Yuan commented on HIVE-11519:
--

Are these all Kryo bugs beyond 2.22? [~gopalv] [~xuefuz]

> kryo.KryoException: Encountered unregistered
> 
>
> Key: HIVE-11519
> URL: https://issues.apache.org/jira/browse/HIVE-11519
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 0.13.1, 1.2.0, 1.2.1
> Environment: hadoop 2.5.0 cdh 5.3.2,hive-0.13.1-cdh5.3.2
>Reporter: duyanlong
>Assignee: duyanlong
>Priority: Critical
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> When clients execute HQL in Hive, the following exception occasionally 
> occurs; please help solve it, thank you:
> Error: java.lang.RuntimeException: 
> org.apache.hive.com.esotericsoftware.kryo.KryoException: Encountered 
> unregistered class ID: 73
> Serialization trace:
> colExprMap (org.apache.hadoop.hive.ql.exec.TableScanOperator)
> aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:366)
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:277)
> at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:258)
> at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:451)
> at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:444)
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:588)
> at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.(MapTask.java:169)
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
> Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: 
> Encountered unregistered class ID: 73
> Serialization trace:
> colExprMap (org.apache.hadoop.hive.ql.exec.TableScanOperator)
> aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
> at 
> org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:119)
> at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
> at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:139)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:17)
> at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
> at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:672)
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.deserializeObjectByKryo(Utilities.java:943)





[jira] [Resolved] (HIVE-11757) I want to get the value of HiveKey in my custom partitioner

2015-10-15 Thread apachehadoop (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

apachehadoop resolved HIVE-11757.
-
Resolution: Fixed

> I want to get the value of HiveKey in my custom partitioner
> ---
>
> Key: HIVE-11757
> URL: https://issues.apache.org/jira/browse/HIVE-11757
> Project: Hive
>  Issue Type: Wish
>Reporter: apachehadoop
>
>JAVA CODE AS:
> {code}
> // allow keys to be assigned across multiple reducers
> public int findPartition(K key, V value) {
>   int len = splitPoints.length;
>   for (int i = 0; i < len; i++) {
>     HiveKey hivekey = (HiveKey) key;
>     String keystring = new String(hivekey.getBytes());
>     LOG.info("HiveKey string > " + keystring);
>     LOG.info("splitPoints int > " + splitPoints[i]);
>
>     //IntWritable keyInt = new IntWritable(Integer.parseInt(keystring));
>     //int res = ((IntWritable)key).compareTo((IntWritable)splitPoints[i]);
>     //int res = keyInt.compareTo((IntWritable)splitPoints[i]);
>     int res = 0;
>     //int res = ((IntWritable)key).compareTo((IntWritable)splitPoints[i]);
>     if (res > 0 && i < len - 1) {
>       continue;
>     } else if (res <= 0) {
>       return i;
>     } else if (res > 0 && i == len - 1) {
>       return i + 1;
>     }
>   }
>   return 0;
> }
> {code}
> As shown above, I cannot get the value of the key; please help me, friends.





[jira] [Assigned] (HIVE-11325) Infinite loop in HiveHFileOutputFormat

2015-10-15 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J reassigned HIVE-11325:
--

Assignee: Harsh J

> Infinite loop in HiveHFileOutputFormat
> --
>
> Key: HIVE-11325
> URL: https://issues.apache.org/jira/browse/HIVE-11325
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Affects Versions: 1.0.0
>Reporter: Harsh J
>Assignee: Harsh J
> Attachments: HIVE-11325.patch
>
>
> No idea why {{hbase_handler_bulk.q}} does not catch this if it's being run 
> regularly in Hive builds, but here's the gist of the issue:
> The condition at 
> https://github.com/apache/hive/blob/master/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHFileOutputFormat.java#L152-L164
>  indicates that we will infinitely loop until we find a file whose last path 
> component (the name) is equal to the column family name.
> In execution, however, the iteration enters an actual infinite loop, because 
> the file we end up considering as the srcDir name is actually the region file, 
> whose name will never match the family name.
> This is an example of the IPC the listing loop of a 100% progress task gets 
> stuck in:
> {code}
> 2015-07-21 10:32:20,662 TRACE [main] org.apache.hadoop.ipc.ProtobufRpcEngine: 
> 1: Call -> cdh54.vm/172.16.29.132:8020: getListing {src: 
> "/user/hive/warehouse/hbase_test/_temporary/1/_temporary/attempt_1436935612068_0011_m_00_0/family/97112ac1c09548ae87bd85af072d2e8c"
>  startAfter: "" needLocation: false}
> 2015-07-21 10:32:20,662 DEBUG [IPC Parameter Sending Thread #1] 
> org.apache.hadoop.ipc.Client: IPC Client (1551465414) connection to 
> cdh54.vm/172.16.29.132:8020 from hive sending #510346
> 2015-07-21 10:32:20,662 DEBUG [IPC Client (1551465414) connection to 
> cdh54.vm/172.16.29.132:8020 from hive] org.apache.hadoop.ipc.Client: IPC 
> Client (1551465414) connection to cdh54.vm/172.16.29.132:8020 from hive got 
> value #510346
> 2015-07-21 10:32:20,662 DEBUG [main] org.apache.hadoop.ipc.ProtobufRpcEngine: 
> Call: getListing took 0ms
> 2015-07-21 10:32:20,662 TRACE [main] org.apache.hadoop.ipc.ProtobufRpcEngine: 
> 1: Response <- cdh54.vm/172.16.29.132:8020: getListing {dirList { 
> partialListing { fileType: IS_FILE path: "" length: 863 permission { perm: 
> 4600 } owner: "hive" group: "hive" modification_time: 1437454718130 
> access_time: 1437454717973 block_replication: 1 blocksize: 134217728 fileId: 
> 33960 childrenNum: 0 storagePolicy: 0 } remainingEntries: 0 }}
> {code}
> The path we are getting out of the listing results is 
> {{/user/hive/warehouse/hbase_test/_temporary/1/_temporary/attempt_1436935612068_0011_m_00_0/family/97112ac1c09548ae87bd85af072d2e8c}},
>  but instead of checking the path's parent {{family}}, we loop 
> infinitely over its hashed filename {{97112ac1c09548ae87bd85af072d2e8c}} 
> because it does not match {{family}}.
> It therefore stays in the infinite loop until the MR framework kills the task 
> due to an idle task timeout (and then, since the subsequent task attempts fail 
> outright, the job fails).
> While doing a {{getPath().getParent()}} will resolve that, is that infinite 
> loop even necessary? Especially given the fact that we throw exceptions if 
> there are no entries or there is more than one entry.
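A minimal sketch of the parent-path comparison suggested above (hypothetical helper using java.nio paths for illustration, not the actual Hadoop Path API used in HiveHFileOutputFormat):

```java
import java.nio.file.Path;

public class FamilyDirSketch {
    // The listing returns the region file *inside* the family directory, so
    // comparing the file's own name to the family name never matches and the
    // loop never terminates. Comparing the parent directory's name does.
    public static boolean matchesFamily(Path file, String familyName) {
        Path parent = file.getParent();
        return parent != null
                && parent.getFileName() != null
                && parent.getFileName().toString().equals(familyName);
    }
}
```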





[jira] [Commented] (HIVE-10937) LLAP: make ObjectCache for plans work properly in the daemon

2015-10-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958521#comment-14958521
 ] 

Hive QA commented on HIVE-10937:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12766641/HIVE-10937.04.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 39 failed/errored test(s), 9694 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_bucket_map_join_tez1
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_bucket_map_join_tez2
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_dynamic_partition_pruning
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_dynamic_partition_pruning_2
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_explainuser_1
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_explainuser_2
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_explainuser_3
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_hybridgrace_hashjoin_1
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_hybridgrace_hashjoin_2
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_llapdecider
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_lvj_mapjoin
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_mapjoin_decimal
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_mrr
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_orc_ppd_basic
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_bmj_schema_evolution
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_dml
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_dynpart_hashjoin_1
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_dynpart_hashjoin_2
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_fsstat
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_insert_overwrite_local_directory_1
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_join_hash
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_join_result_complex
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_join_tests
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_joins_explain
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_multi_union
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_schema_evolution
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_self_join
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_smb_1
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_smb_main
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_union
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_union2
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_union_decimal
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_union_dynamic_partition
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_union_multiinsert
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_vector_dynpart_hashjoin_1
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_vector_dynpart_hashjoin_2
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vectorized_dynamic_partition_pruning
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5658/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5658/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5658/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 39 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12766641 - PreCommit-HIVE-TRUNK-Build

> LLAP: make ObjectCache for plans work properly in the daemon
> 
>
> Key: HIVE-10937
> URL: https://issues.apache.org/jira/browse/HIVE-10937
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-10937.01.patch, HIVE-10937.02.patch, 
> HIVE-10937.03.patch, HIVE-10937.04.patch, HIVE-10937.patch
>
>
> There's perf hit otherwise, esp. when 

[jira] [Commented] (HIVE-12060) LLAP: create separate variable for llap tests

2015-10-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958446#comment-14958446
 ] 

Hive QA commented on HIVE-12060:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12766637/HIVE-12060.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 9691 tests executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarDataNucleusUnCaching
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5657/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5657/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5657/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12766637 - PreCommit-HIVE-TRUNK-Build

> LLAP: create separate variable for llap tests
> -
>
> Key: HIVE-12060
> URL: https://issues.apache.org/jira/browse/HIVE-12060
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12060.01.patch, HIVE-12060.patch
>
>
> No real reason to just reuse the Tez one; it is also needed to parallelize the tests



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11954) Extend logic to choose side table in MapJoin Conversion algorithm

2015-10-15 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-11954:
---
Attachment: HIVE-11954.10.patch

> Extend logic to choose side table in MapJoin Conversion algorithm
> -
>
> Key: HIVE-11954
> URL: https://issues.apache.org/jira/browse/HIVE-11954
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Affects Versions: 2.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-11954.01.patch, HIVE-11954.02.patch, 
> HIVE-11954.03.patch, HIVE-11954.04.patch, HIVE-11954.05.patch, 
> HIVE-11954.06.patch, HIVE-11954.07.patch, HIVE-11954.08.patch, 
> HIVE-11954.09.patch, HIVE-11954.10.patch, HIVE-11954.patch, HIVE-11954.patch
>
>
> Selection of the side table (in-memory/hash table) in the MapJoin conversion 
> algorithm needs to be more sophisticated.
> In an N-way map join, Hive should pick as the side table (in-memory table) the 
> input stream that has the least cost in producing its relation (like 
> TS(FIL|Proj)*).
> A cost-based choice needs an extended cost model; without the return path it's 
> going to be hard to do this.
> For the time being, we could employ a modified cost-based algorithm for side 
> table selection.
> The new algorithm is described below:
> 1. Identify the candidate set of inputs for the side table (in-memory/hash 
> table) from the inputs (based on conditional task size).
> 2. For each input, identify its cost and memory requirement. Cost is 1 for 
> each heavy-weight relational op (Join, GB, PTF/Windowing, TF, etc.); the cost 
> of an input is the total number of heavy-weight ops in its branch.
> 3. Order the set from #1 by cost & memory requirement (ascending).
> 4. Pick the first element from #3 as the side table.
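The four steps above can be sketched as follows (a hedged illustration in Python; `JoinInput` and `choose_side_table` are invented names, not Hive's actual classes):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class JoinInput:
    """One input stream of the N-way join (hypothetical name, not a Hive class)."""
    name: str
    heavy_op_count: int  # step 2: one unit of cost per heavy-weight op in the branch
    memory_bytes: int    # step 2: estimated in-memory (hash table) size

def choose_side_table(inputs: List[JoinInput],
                      memory_limit_bytes: int) -> Optional[JoinInput]:
    # Step 1: candidate set = inputs small enough for the conditional task size.
    candidates = [i for i in inputs if i.memory_bytes <= memory_limit_bytes]
    if not candidates:
        return None  # no input fits in memory; map-join conversion is not possible
    # Step 3: order by (cost, memory requirement), ascending.
    candidates.sort(key=lambda i: (i.heavy_op_count, i.memory_bytes))
    # Step 4: the cheapest candidate becomes the in-memory/hash side table.
    return candidates[0]
```

For example, between an input whose branch contains two heavy-weight ops and a scan-only input, the scan-only input is chosen even if it is somewhat larger, as long as it fits under the memory limit.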



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11981) ORC Schema Evolution Issues (Vectorized, ACID, and Non-Vectorized)

2015-10-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959081#comment-14959081
 ] 

Hive QA commented on HIVE-11981:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12766709/HIVE-11981.05.patch

{color:green}SUCCESS:{color} +1 due to 15 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 9704 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_nonvec_fetchwork_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_nonvec_mapwork_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_orc_vec_mapwork_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_text_nonvec_fetchwork_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_schema_evol_text_nonvec_mapwork_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_ppr_all
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.majorCompactAfterAbort
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.majorCompactWhileStreaming
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.minorCompactAfterAbort
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.minorCompactWhileStreaming
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.testStatsAfterCompactionPartTbl
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarDataNucleusUnCaching
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5662/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5662/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5662/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 14 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12766709 - PreCommit-HIVE-TRUNK-Build

> ORC Schema Evolution Issues (Vectorized, ACID, and Non-Vectorized)
> --
>
> Key: HIVE-11981
> URL: https://issues.apache.org/jira/browse/HIVE-11981
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Transactions
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-11981.01.patch, HIVE-11981.02.patch, 
> HIVE-11981.03.patch, HIVE-11981.05.patch, ORC Schema Evolution Issues.docx
>
>
> High-priority issues with schema evolution for the ORC file format.
> Schema evolution here is limited to adding new columns and a few cases of 
> column type widening (e.g. int to bigint).
> Renaming columns, deleting columns, moving columns, and other schema evolution 
> were not pursued due to lack of importance and lack of time.  Also, it 
> appears much more sophisticated metadata would be needed to support them.
> The biggest issues for users have been adding new columns for ACID tables 
> (HIVE-11421 Support Schema evolution for ACID tables) and vectorization 
> (HIVE-10598 Vectorization borks when column is added to table).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11489) Jenkins PreCommit-HIVE-SPARK-Build fails with TestCliDriver.initializationError

2015-10-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-11489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-11489:
---
Attachment: HIVE-11489.3.patch

> Jenkins PreCommit-HIVE-SPARK-Build fails with 
> TestCliDriver.initializationError
> ---
>
> Key: HIVE-11489
> URL: https://issues.apache.org/jira/browse/HIVE-11489
> Project: Hive
>  Issue Type: Task
>  Components: Testing Infrastructure
>Reporter: Sergio Peña
>Assignee: Sergio Peña
> Attachments: HIVE-11489.2-spark.patch, HIVE-11489.3.patch
>
>
> The Jenkins job {{PreCommit-HIVE-SPARK-Build}} is failing due to many 
> {{TestCliDriver.initializationError}} test results.
> {noformat}
> Error Message
> Unexpected exception java.io.FileNotFoundException: 
> /data/hive-ptest/working/apache-git-source-source/itests/qtest/target/generated-test-sources/java/org/apache/hadoop/hive/cli/TestCliDriverQFileNames.txt
>  (No such file or directory)
>  at java.io.FileInputStream.open(Native Method)
>  at java.io.FileInputStream.<init>(FileInputStream.java:146)
>  at java.io.FileReader.<init>(FileReader.java:72)
>  at 
> org.apache.hadoop.hive.ql.QTestUtil.addTestsToSuiteFromQfileNames(QTestUtil.java:2019)
>  at org.apache.hadoop.hive.cli.TestCliDriver.suite(TestCliDriver.java:120)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:606)
>  at 
> org.junit.internal.runners.SuiteMethod.testFromSuiteMethod(SuiteMethod.java:35)
>  at org.junit.internal.runners.SuiteMethod.<init>(SuiteMethod.java:24)
>  at 
> org.junit.internal.builders.SuiteMethodBuilder.runnerForClass(SuiteMethodBuilder.java:11)
>  at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>  at 
> org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:26)
>  at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>  at org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:26)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>  at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Stacktrace
> junit.framework.AssertionFailedError: Unexpected exception 
> java.io.FileNotFoundException: 
> /data/hive-ptest/working/apache-git-source-source/itests/qtest/target/generated-test-sources/java/org/apache/hadoop/hive/cli/TestCliDriverQFileNames.txt
>  (No such file or directory)
>   at java.io.FileInputStream.open(Native Method)
>   at java.io.FileInputStream.<init>(FileInputStream.java:146)
>   at java.io.FileReader.<init>(FileReader.java:72)
>   at 
> org.apache.hadoop.hive.ql.QTestUtil.addTestsToSuiteFromQfileNames(QTestUtil.java:2019)
>   at 
> org.apache.hadoop.hive.cli.TestCliDriver.suite(TestCliDriver.java:120)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.internal.runners.SuiteMethod.testFromSuiteMethod(SuiteMethod.java:35)
>   at org.junit.internal.runners.SuiteMethod.<init>(SuiteMethod.java:24)
>   at 
> org.junit.internal.builders.SuiteMethodBuilder.runnerForClass(SuiteMethodBuilder.java:11)
>   at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>   at 
> org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:26)
>   at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>   at 
> org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:26)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> 

[jira] [Commented] (HIVE-11710) Beeline embedded mode doesn't output query progress after setting any session property

2015-10-15 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959103#comment-14959103
 ] 

Aihua Xu commented on HIVE-11710:
-

Seems HIVE-11579 fixed one related issue, but the current incorrect behavior 
still exists.  1. There could be a temp file handler leak from the 
HiveCommandOperation class if an exception occurs somewhere. 2. We still need to 
reset SessionState.out/info/err to System.out/err in SQLOperation; otherwise, 
in embedded mode, the beeline client output will be redirected to the files, 
not to the console. 

[~Ferd] You worked on HIVE-11579 recently and probably know that code well. 
There is a possibility that the temp file handlers are not closed if there is 
an exception during the following code, correct? And also, we need to flush the 
output rather than closing the stream if the stream points to System.out/err, 
right?
{noformat}
  sessionState.out =
      new PrintStream(new FileOutputStream(sessionState.getTmpOutputFile()),
          true, CharEncoding.UTF_8);
  sessionState.err =
      new PrintStream(new FileOutputStream(sessionState.getTmpErrOutputFile()),
          true, CharEncoding.UTF_8);
{noformat}

[~xuefuz]  Sorry, didn't get time to work on that. A MyPrintStream class would 
be cleaner, but it's not easy to differentiate whether a stream is file-based 
from the stream itself, since System.out or System.err can also point to a 
file-based stream. So it's tied to the HiveCommandOperation class itself, and 
we may need to pass a "flushOnClose" flag to the MyPrintStream class. Let me 
look into that.
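The "flushOnClose" idea can be sketched as follows (in Python for brevity; the class name and flag mirror the MyPrintStream discussion above but are hypothetical, not actual Hive code):

```python
class FlushOnCloseStream:
    """Wrapper that flushes instead of closing when the target is console-backed.

    The caller passes flush_on_close explicitly, because the stream itself
    cannot tell whether it is file-based: System.out/System.err may themselves
    point at files.
    """
    def __init__(self, target, flush_on_close):
        self._target = target
        self._flush_on_close = flush_on_close  # True when target is the console

    def write(self, data):
        return self._target.write(data)

    def close(self):
        if self._flush_on_close:
            self._target.flush()   # keep the console stream usable afterwards
        else:
            self._target.close()   # temp output files are really closed
```

With this shape, closing the session's redirected output stream in embedded mode only flushes, so later query progress still reaches the console, while genuine temp-file streams are closed as before.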

> Beeline embedded mode doesn't output query progress after setting any session 
> property
> --
>
> Key: HIVE-11710
> URL: https://issues.apache.org/jira/browse/HIVE-11710
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 2.0.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-11710.2.patch, HIVE-11710.3.patch, HIVE-11710.patch
>
>
> Connect to beeline embedded mode {{beeline -u jdbc:hive2://}}. Then set 
> anything in the session like {{set aa=true;}}.
> After that, any query like {{select count(*) from src;}} will only output 
> result but no query progress.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HIVE-11489) Jenkins PreCommit-HIVE-SPARK-Build fails with TestCliDriver.initializationError

2015-10-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-11489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-11489:
---
Comment: was deleted

(was: 

{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12765078/HIVE-11489.1-spark.patch

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 7455 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.initializationError
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_inner_join
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join2
org.apache.hadoop.hive.cli.TestMinimrCliDriver.initializationError
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.hcatalog.streaming.TestStreaming.testRemainingTransactions
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchEmptyCommit
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/959/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/959/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-959/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12765078 - PreCommit-HIVE-SPARK-Build)

> Jenkins PreCommit-HIVE-SPARK-Build fails with 
> TestCliDriver.initializationError
> ---
>
> Key: HIVE-11489
> URL: https://issues.apache.org/jira/browse/HIVE-11489
> Project: Hive
>  Issue Type: Task
>  Components: Testing Infrastructure
>Reporter: Sergio Peña
>Assignee: Sergio Peña
> Attachments: HIVE-11489.2-spark.patch
>
>
> The Jenkins job {{PreCommit-HIVE-SPARK-Build}} is failing due to many 
> {{TestCliDriver.initializationError}} test results.
> {noformat}
> Error Message
> Unexpected exception java.io.FileNotFoundException: 
> /data/hive-ptest/working/apache-git-source-source/itests/qtest/target/generated-test-sources/java/org/apache/hadoop/hive/cli/TestCliDriverQFileNames.txt
>  (No such file or directory)
>  at java.io.FileInputStream.open(Native Method)
>  at java.io.FileInputStream.<init>(FileInputStream.java:146)
>  at java.io.FileReader.<init>(FileReader.java:72)
>  at 
> org.apache.hadoop.hive.ql.QTestUtil.addTestsToSuiteFromQfileNames(QTestUtil.java:2019)
>  at org.apache.hadoop.hive.cli.TestCliDriver.suite(TestCliDriver.java:120)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:606)
>  at 
> org.junit.internal.runners.SuiteMethod.testFromSuiteMethod(SuiteMethod.java:35)
>  at org.junit.internal.runners.SuiteMethod.<init>(SuiteMethod.java:24)
>  at 
> org.junit.internal.builders.SuiteMethodBuilder.runnerForClass(SuiteMethodBuilder.java:11)
>  at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>  at 
> org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:26)
>  at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>  at org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:26)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>  at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Stacktrace
> junit.framework.AssertionFailedError: Unexpected exception 
> java.io.FileNotFoundException: 
> /data/hive-ptest/working/apache-git-source-source/itests/qtest/target/generated-test-sources/java/org/apache/hadoop/hive/cli/TestCliDriverQFileNames.txt
>  (No such file or directory)
>   at java.io.FileInputStream.open(Native Method)
>   at java.io.FileInputStream.<init>(FileInputStream.java:146)
>   at java.io.FileReader.<init>(FileReader.java:72)
>   at 
> 

[jira] [Updated] (HIVE-11489) Jenkins PreCommit-HIVE-SPARK-Build fails with TestCliDriver.initializationError

2015-10-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-11489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-11489:
---
Attachment: (was: HIVE-11489.1-spark.patch)

> Jenkins PreCommit-HIVE-SPARK-Build fails with 
> TestCliDriver.initializationError
> ---
>
> Key: HIVE-11489
> URL: https://issues.apache.org/jira/browse/HIVE-11489
> Project: Hive
>  Issue Type: Task
>  Components: Testing Infrastructure
>Reporter: Sergio Peña
>Assignee: Sergio Peña
> Attachments: HIVE-11489.2-spark.patch
>
>
> The Jenkins job {{PreCommit-HIVE-SPARK-Build}} is failing due to many 
> {{TestCliDriver.initializationError}} test results.
> {noformat}
> Error Message
> Unexpected exception java.io.FileNotFoundException: 
> /data/hive-ptest/working/apache-git-source-source/itests/qtest/target/generated-test-sources/java/org/apache/hadoop/hive/cli/TestCliDriverQFileNames.txt
>  (No such file or directory)
>  at java.io.FileInputStream.open(Native Method)
>  at java.io.FileInputStream.<init>(FileInputStream.java:146)
>  at java.io.FileReader.<init>(FileReader.java:72)
>  at 
> org.apache.hadoop.hive.ql.QTestUtil.addTestsToSuiteFromQfileNames(QTestUtil.java:2019)
>  at org.apache.hadoop.hive.cli.TestCliDriver.suite(TestCliDriver.java:120)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:606)
>  at 
> org.junit.internal.runners.SuiteMethod.testFromSuiteMethod(SuiteMethod.java:35)
>  at org.junit.internal.runners.SuiteMethod.<init>(SuiteMethod.java:24)
>  at 
> org.junit.internal.builders.SuiteMethodBuilder.runnerForClass(SuiteMethodBuilder.java:11)
>  at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>  at 
> org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:26)
>  at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>  at org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:26)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>  at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Stacktrace
> junit.framework.AssertionFailedError: Unexpected exception 
> java.io.FileNotFoundException: 
> /data/hive-ptest/working/apache-git-source-source/itests/qtest/target/generated-test-sources/java/org/apache/hadoop/hive/cli/TestCliDriverQFileNames.txt
>  (No such file or directory)
>   at java.io.FileInputStream.open(Native Method)
>   at java.io.FileInputStream.<init>(FileInputStream.java:146)
>   at java.io.FileReader.<init>(FileReader.java:72)
>   at 
> org.apache.hadoop.hive.ql.QTestUtil.addTestsToSuiteFromQfileNames(QTestUtil.java:2019)
>   at 
> org.apache.hadoop.hive.cli.TestCliDriver.suite(TestCliDriver.java:120)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.internal.runners.SuiteMethod.testFromSuiteMethod(SuiteMethod.java:35)
>   at org.junit.internal.runners.SuiteMethod.<init>(SuiteMethod.java:24)
>   at 
> org.junit.internal.builders.SuiteMethodBuilder.runnerForClass(SuiteMethodBuilder.java:11)
>   at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>   at 
> org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:26)
>   at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>   at 
> org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:26)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> 

[jira] [Issue Comment Deleted] (HIVE-11489) Jenkins PreCommit-HIVE-SPARK-Build fails with TestCliDriver.initializationError

2015-10-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-11489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-11489:
---
Comment: was deleted

(was: 

{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12765188/HIVE-11489.2-spark.patch

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 7455 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.initializationError
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_inner_join
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join2
org.apache.hadoop.hive.cli.TestMinimrCliDriver.initializationError
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/960/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/960/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-960/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12765188 - PreCommit-HIVE-SPARK-Build)

> Jenkins PreCommit-HIVE-SPARK-Build fails with 
> TestCliDriver.initializationError
> ---
>
> Key: HIVE-11489
> URL: https://issues.apache.org/jira/browse/HIVE-11489
> Project: Hive
>  Issue Type: Task
>  Components: Testing Infrastructure
>Reporter: Sergio Peña
>Assignee: Sergio Peña
> Attachments: HIVE-11489.2-spark.patch
>
>
> The Jenkins job {{PreCommit-HIVE-SPARK-Build}} is failing due to many 
> {{TestCliDriver.initializationError}} test results.
> {noformat}
> Error Message
> Unexpected exception java.io.FileNotFoundException: 
> /data/hive-ptest/working/apache-git-source-source/itests/qtest/target/generated-test-sources/java/org/apache/hadoop/hive/cli/TestCliDriverQFileNames.txt
>  (No such file or directory)
>  at java.io.FileInputStream.open(Native Method)
>  at java.io.FileInputStream.<init>(FileInputStream.java:146)
>  at java.io.FileReader.<init>(FileReader.java:72)
>  at 
> org.apache.hadoop.hive.ql.QTestUtil.addTestsToSuiteFromQfileNames(QTestUtil.java:2019)
>  at org.apache.hadoop.hive.cli.TestCliDriver.suite(TestCliDriver.java:120)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:606)
>  at 
> org.junit.internal.runners.SuiteMethod.testFromSuiteMethod(SuiteMethod.java:35)
>  at org.junit.internal.runners.SuiteMethod.<init>(SuiteMethod.java:24)
>  at 
> org.junit.internal.builders.SuiteMethodBuilder.runnerForClass(SuiteMethodBuilder.java:11)
>  at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>  at 
> org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:26)
>  at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>  at org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:26)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>  at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Stacktrace
> junit.framework.AssertionFailedError: Unexpected exception 
> java.io.FileNotFoundException: 
> /data/hive-ptest/working/apache-git-source-source/itests/qtest/target/generated-test-sources/java/org/apache/hadoop/hive/cli/TestCliDriverQFileNames.txt
>  (No such file or directory)
>   at java.io.FileInputStream.open(Native Method)
>   at java.io.FileInputStream.<init>(FileInputStream.java:146)
>   at java.io.FileReader.<init>(FileReader.java:72)
>   at 
> org.apache.hadoop.hive.ql.QTestUtil.addTestsToSuiteFromQfileNames(QTestUtil.java:2019)
>   at 
> org.apache.hadoop.hive.cli.TestCliDriver.suite(TestCliDriver.java:120)
>   at 

[jira] [Comment Edited] (HIVE-12167) HBase metastore causes massive number of ZK exceptions in MiniTez tests

2015-10-15 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959122#comment-14959122
 ] 

Alan Gates edited comment on HIVE-12167 at 10/15/15 4:00 PM:
-

It seems that HIVE-12170 aims to fix this correctly, so why put in a piecemeal 
fix here that I don't think will result in the correct behavior?


was (Author: alangates):
It seems that HIVE-12170 aims to fix this correctly, so why put in a piecemeal 
fix here that I don't think will result in the correct behavior.

> HBase metastore causes massive number of ZK exceptions in MiniTez tests
> ---
>
> Key: HIVE-12167
> URL: https://issues.apache.org/jira/browse/HIVE-12167
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12167.patch
>
>
> I ran some random test (vectorization_10) with HBase metastore for unrelated 
> reason, and I see large number of exceptions in hive.log
> {noformat}
> $ grep -c "ConnectionLoss" hive.log
> 52
> $ grep -c "Connection refused" hive.log
> 1014
> {noformat}
> These log lines' count has increased by ~33% since merging the llap branch, 
> but it was still high before that (39/~700 for the same test). These lines 
> are not present if I disable the HBase metastore.
> The exceptions are:
> {noformat}
> 2015-10-13T17:51:06,232 WARN  [Thread-359-SendThread(localhost:2181)]: 
> zookeeper.ClientCnxn (ClientCnxn.java:run(1102)) - Session 0x0 for server 
> null, unexpected error, closing socket connection and attempting reconnect
> java.net.ConnectException: Connection refused
>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
> ~[?:1.8.0_45]
>   at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) 
> ~[?:1.8.0_45]
>   at 
> org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
>  ~[zookeeper-3.4.6.jar:3.4.6-1569965]
>   at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) 
> [zookeeper-3.4.6.jar:3.4.6-1569965]
> {noformat}
> that is retried for some seconds and then
> {noformat}
> 2015-10-13T17:51:22,867 WARN  [Thread-359]: zookeeper.ZKUtil 
> (ZKUtil.java:checkExists(544)) - hconnection-0x1da6ef180x0, 
> quorum=localhost:2181, baseZNode=/hbase Unable to set watcher on znode 
> (/hbase/hbaseid)
> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
> = ConnectionLoss for /hbase/hbaseid
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) 
> ~[zookeeper-3.4.6.jar:3.4.6-1569965]
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) 
> ~[zookeeper-3.4.6.jar:3.4.6-1569965]
>   at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045) 
> ~[zookeeper-3.4.6.jar:3.4.6-1569965]
>   at 
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:222)
>  ~[hbase-client-1.1.1.jar:1.1.1]
>   at 
> org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:541) 
> [hbase-client-1.1.1.jar:1.1.1]
>   at 
> org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65)
>  [hbase-client-1.1.1.jar:1.1.1]
>   at 
> org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:105)
>  [hbase-client-1.1.1.jar:1.1.1]
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:879)
>  [hbase-client-1.1.1.jar:1.1.1]
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:635)
>  [hbase-client-1.1.1.jar:1.1.1]
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method) ~[?:1.8.0_45]
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  [?:1.8.0_45]
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  [?:1.8.0_45]
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422) 
> [?:1.8.0_45]
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
>  [hbase-client-1.1.1.jar:1.1.1]
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:420)
>  [hbase-client-1.1.1.jar:1.1.1]
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager.createConnectionInternal(ConnectionManager.java:329)
>  [hbase-client-1.1.1.jar:1.1.1]
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:144)
>  [hbase-client-1.1.1.jar:1.1.1]
>   at 
> 

[jira] [Commented] (HIVE-12167) HBase metastore causes massive number of ZK exceptions in MiniTez tests

2015-10-15 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959122#comment-14959122
 ] 

Alan Gates commented on HIVE-12167:
---

It seems that HIVE-12170 aims to fix this correctly, so why put in a piecemeal 
fix here that I don't think will result in the correct behavior?

> HBase metastore causes massive number of ZK exceptions in MiniTez tests
> ---
>
> Key: HIVE-12167
> URL: https://issues.apache.org/jira/browse/HIVE-12167
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12167.patch
>
>
> I ran a random test (vectorization_10) with the HBase metastore for an 
> unrelated reason, and I see a large number of exceptions in hive.log
> {noformat}
> $ grep -c "ConnectionLoss" hive.log
> 52
> $ grep -c "Connection refused" hive.log
> 1014
> {noformat}
> The count of these log lines has increased by ~33% since merging the llap 
> branch, but it was already high before that (39/~700 for the same test). 
> These lines are not present if I disable the HBase metastore.
> The exceptions are:
> {noformat}
> 2015-10-13T17:51:06,232 WARN  [Thread-359-SendThread(localhost:2181)]: 
> zookeeper.ClientCnxn (ClientCnxn.java:run(1102)) - Session 0x0 for server 
> null, unexpected error, closing socket connection and attempting reconnect
> java.net.ConnectException: Connection refused
>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
> ~[?:1.8.0_45]
>   at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) 
> ~[?:1.8.0_45]
>   at 
> org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
>  ~[zookeeper-3.4.6.jar:3.4.6-1569965]
>   at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) 
> [zookeeper-3.4.6.jar:3.4.6-1569965]
> {noformat}
> that is retried for some seconds and then
> {noformat}
> 2015-10-13T17:51:22,867 WARN  [Thread-359]: zookeeper.ZKUtil 
> (ZKUtil.java:checkExists(544)) - hconnection-0x1da6ef180x0, 
> quorum=localhost:2181, baseZNode=/hbase Unable to set watcher on znode 
> (/hbase/hbaseid)
> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
> = ConnectionLoss for /hbase/hbaseid
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) 
> ~[zookeeper-3.4.6.jar:3.4.6-1569965]
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) 
> ~[zookeeper-3.4.6.jar:3.4.6-1569965]
>   at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045) 
> ~[zookeeper-3.4.6.jar:3.4.6-1569965]
>   at 
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:222)
>  ~[hbase-client-1.1.1.jar:1.1.1]
>   at 
> org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:541) 
> [hbase-client-1.1.1.jar:1.1.1]
>   at 
> org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65)
>  [hbase-client-1.1.1.jar:1.1.1]
>   at 
> org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:105)
>  [hbase-client-1.1.1.jar:1.1.1]
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:879)
>  [hbase-client-1.1.1.jar:1.1.1]
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:635)
>  [hbase-client-1.1.1.jar:1.1.1]
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method) ~[?:1.8.0_45]
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  [?:1.8.0_45]
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  [?:1.8.0_45]
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422) 
> [?:1.8.0_45]
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
>  [hbase-client-1.1.1.jar:1.1.1]
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:420)
>  [hbase-client-1.1.1.jar:1.1.1]
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager.createConnectionInternal(ConnectionManager.java:329)
>  [hbase-client-1.1.1.jar:1.1.1]
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:144)
>  [hbase-client-1.1.1.jar:1.1.1]
>   at 
> org.apache.hadoop.hive.metastore.hbase.VanillaHBaseConnection.connect(VanillaHBaseConnection.java:56)
>  [hive-metastore-2.0.0-SNAPSHOT.jar:?]
>   at 
> org.apache.hadoop.hive.metastore.hbase.HBaseReadWrite.<init>(HBaseReadWrite.java:227)
>  [hive-metastore-2.0.0-SNAPSHOT.jar:?]
>   at 
> 

[jira] [Commented] (HIVE-12180) Use MapJoinDesc::isHybridHashJoin() instead of the HiveConf lookup in Vectorizer

2015-10-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958929#comment-14958929
 ] 

Hive QA commented on HIVE-12180:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12766654/HIVE-12180.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 9694 tests executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5661/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5661/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5661/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12766654 - PreCommit-HIVE-TRUNK-Build

> Use MapJoinDesc::isHybridHashJoin() instead of the HiveConf lookup in 
> Vectorizer
> 
>
> Key: HIVE-12180
> URL: https://issues.apache.org/jira/browse/HIVE-12180
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Wei Zheng
>Assignee: Wei Zheng
> Attachments: HIVE-12180.patch
>
>
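The summary above describes a common cleanup pattern: consult the per-operator decision recorded on the descriptor at planning time instead of re-reading the global HiveConf flag at vectorization time, since the planner may override the global flag for an individual join. A hypothetical sketch of that distinction (all class, field, and method names below are illustrative assumptions, not Hive's actual classes):

```java
// Sketch: a global conf flag vs. the decision recorded per operator.
class HiveConfSketch {
    boolean hybridGraceHashJoinEnabled = true; // global default, "on"
}

class MapJoinDescSketch {
    private final boolean hybridHashJoin;

    MapJoinDescSketch(HiveConfSketch conf, long estimatedSize, long memoryLimit) {
        // The planner may decide per join: e.g. skip hybrid grace hash
        // join for inputs that fit in memory, even when the flag is on.
        this.hybridHashJoin = conf.hybridGraceHashJoinEnabled
                && estimatedSize > memoryLimit;
    }

    boolean isHybridHashJoin() {
        return hybridHashJoin; // authoritative per-operator answer
    }
}

public class DescriptorVsConf {
    public static void main(String[] args) {
        HiveConfSketch conf = new HiveConfSketch();
        MapJoinDescSketch small = new MapJoinDescSketch(conf, 100, 1000);
        // The conf says "enabled", but this join was planned non-hybrid,
        // so a consumer reading only the conf would get the wrong answer.
        System.out.println(conf.hybridGraceHashJoinEnabled); // true
        System.out.println(small.isHybridHashJoin());        // false
    }
}
```

Reading the descriptor keeps the vectorizer consistent with whatever the planner actually chose, which is the point of the patch.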




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11985) don't store type names in metastore when metastore type names are not used

2015-10-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959575#comment-14959575
 ] 

Hive QA commented on HIVE-11985:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12766685/HIVE-11985.03.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 9697 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter1
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_explainuser_1
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_explainuser_3
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5665/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5665/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5665/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12766685 - PreCommit-HIVE-TRUNK-Build

> don't store type names in metastore when metastore type names are not used
> --
>
> Key: HIVE-11985
> URL: https://issues.apache.org/jira/browse/HIVE-11985
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-11985.01.patch, HIVE-11985.02.patch, 
> HIVE-11985.03.patch, HIVE-11985.patch
>
>






[jira] [Updated] (HIVE-12195) Unknown zones should cause an error instead of silently failing

2015-10-15 Thread Ryan Blue (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Blue updated HIVE-12195:
-
Description: 
Using an unknown time zone with the {{from_utc_timestamp}} or 
{{to_utc_timestamp}} methods returns the time unadjusted instead of throwing 
an error:

{code}
hive> select from_utc_timestamp('2015-04-11 12:24:34.535', 'panda');
OK
2015-04-11 12:24:34.535
{code}

This should be an error because users may attempt to adjust to valid but 
unknown zones, like PDT or MDT. This would produce incorrect results with no 
warning or error.

  was:
Using an unknown time zone with the {{from_utc_timestamp}} or 
{{to_utc_timestamp}} methods returns the time unadjusted instead of throwing 
an error:

{code}
hive> select from_utc_timestamp('2015-04-11 12:24:34.535', 'panda');
2015-04-11 12:24:34.535
{code}

This should be an error because users may attempt to adjust to valid but 
unknown zones, like PDT or MDT. This would produce incorrect results with no 
warning or error.


> Unknown zones should cause an error instead of silently failing
> ---
>
> Key: HIVE-12195
> URL: https://issues.apache.org/jira/browse/HIVE-12195
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Ryan Blue
>
> Using an unknown time zone with the {{from_utc_timestamp}} or 
> {{to_utc_timestamp}} methods returns the time unadjusted instead of throwing 
> an error:
> {code}
> hive> select from_utc_timestamp('2015-04-11 12:24:34.535', 'panda');
> OK
> 2015-04-11 12:24:34.535
> {code}
> This should be an error because users may attempt to adjust to valid but 
> unknown zones, like PDT or MDT. This would produce incorrect results with no 
> warning or error.
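The silent fallback described above matches the behavior of `java.util.TimeZone`, which such UDFs typically rely on: `getTimeZone()` returns the GMT zone for any ID it does not recognize instead of throwing, so an unadjusted result comes back with no warning. A minimal sketch (the class and the `isKnownZone` helper are illustrative, not Hive code):

```java
import java.util.TimeZone;

public class TzLookup {
    // Strict lookup: accept only IDs the JVM actually knows about.
    static boolean isKnownZone(String id) {
        for (String known : TimeZone.getAvailableIDs()) {
            if (known.equals(id)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Unknown IDs silently fall back to GMT (offset 0), which is
        // why the timestamp comes back unadjusted rather than failing.
        TimeZone unknown = TimeZone.getTimeZone("panda");
        System.out.println(unknown.getID());        // GMT
        System.out.println(unknown.getRawOffset()); // 0

        System.out.println(isKnownZone("America/Los_Angeles")); // true
        System.out.println(isKnownZone("panda"));               // false
    }
}
```

A validation step like `isKnownZone` before the conversion would let the UDF raise an error for IDs such as "panda" (or "PDT"/"MDT", which are not in the canonical ID list).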





[jira] [Updated] (HIVE-12182) ALTER TABLE PARTITION COLUMN does not set partition column comments

2015-10-15 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-12182:
-
Attachment: HIVE-12182.1.patch.txt

> ALTER TABLE PARTITION COLUMN does not set partition column comments
> ---
>
> Key: HIVE-12182
> URL: https://issues.apache.org/jira/browse/HIVE-12182
> Project: Hive
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 1.2.1
>Reporter: Lenni Kuff
>Assignee: Navis
> Attachments: HIVE-12182.1.patch.txt
>
>
> ALTER TABLE PARTITION COLUMN does not set partition column comments. The 
> syntax is accepted, but the COMMENT for the column is ignored.
> {code}
> 0: jdbc:hive2://localhost:1/default> create table part_test(i int comment 
> 'HELLO') partitioned by (j int comment 'WORLD');
> No rows affected (0.104 seconds)
> 0: jdbc:hive2://localhost:1/default> describe part_test;
> +--------------------------+------------+----------+--+
> |         col_name         | data_type  | comment  |
> +--------------------------+------------+----------+--+
> | i                        | int        | HELLO    |
> | j                        | int        | WORLD    |
> |                          | NULL       | NULL     |
> | # Partition Information  | NULL       | NULL     |
> | # col_name               | data_type  | comment  |
> |                          | NULL       | NULL     |
> | j                        | int        | WORLD    |
> +--------------------------+------------+----------+--+
> 7 rows selected (0.109 seconds)
> 0: jdbc:hive2://localhost:1/default> alter table part_test partition 
> column (j int comment 'WIDE');
> No rows affected (0.121 seconds)
> 0: jdbc:hive2://localhost:1/default> describe part_test;
> +--------------------------+------------+----------+--+
> |         col_name         | data_type  | comment  |
> +--------------------------+------------+----------+--+
> | i                        | int        | HELLO    |
> | j                        | int        |          |
> |                          | NULL       | NULL     |
> | # Partition Information  | NULL       | NULL     |
> | # col_name               | data_type  | comment  |
> |                          | NULL       | NULL     |
> | j                        | int        |          |
> +--------------------------+------------+----------+--+
> 7 rows selected (0.108 seconds)
> {code}





[jira] [Assigned] (HIVE-12182) ALTER TABLE PARTITION COLUMN does not set partition column comments

2015-10-15 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis reassigned HIVE-12182:


Assignee: Navis  (was: Naveen Gangam)

> ALTER TABLE PARTITION COLUMN does not set partition column comments
> ---
>
> Key: HIVE-12182
> URL: https://issues.apache.org/jira/browse/HIVE-12182
> Project: Hive
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 1.2.1
>Reporter: Lenni Kuff
>Assignee: Navis
> Attachments: HIVE-12182.1.patch.txt
>
>
> ALTER TABLE PARTITION COLUMN does not set partition column comments. The 
> syntax is accepted, but the COMMENT for the column is ignored.
> {code}
> 0: jdbc:hive2://localhost:1/default> create table part_test(i int comment 
> 'HELLO') partitioned by (j int comment 'WORLD');
> No rows affected (0.104 seconds)
> 0: jdbc:hive2://localhost:1/default> describe part_test;
> +--------------------------+------------+----------+--+
> |         col_name         | data_type  | comment  |
> +--------------------------+------------+----------+--+
> | i                        | int        | HELLO    |
> | j                        | int        | WORLD    |
> |                          | NULL       | NULL     |
> | # Partition Information  | NULL       | NULL     |
> | # col_name               | data_type  | comment  |
> |                          | NULL       | NULL     |
> | j                        | int        | WORLD    |
> +--------------------------+------------+----------+--+
> 7 rows selected (0.109 seconds)
> 0: jdbc:hive2://localhost:1/default> alter table part_test partition 
> column (j int comment 'WIDE');
> No rows affected (0.121 seconds)
> 0: jdbc:hive2://localhost:1/default> describe part_test;
> +--------------------------+------------+----------+--+
> |         col_name         | data_type  | comment  |
> +--------------------------+------------+----------+--+
> | i                        | int        | HELLO    |
> | j                        | int        |          |
> |                          | NULL       | NULL     |
> | # Partition Information  | NULL       | NULL     |
> | # col_name               | data_type  | comment  |
> |                          | NULL       | NULL     |
> | j                        | int        |          |
> +--------------------------+------------+----------+--+
> 7 rows selected (0.108 seconds)
> {code}





[jira] [Commented] (HIVE-11822) vectorize NVL UDF

2015-10-15 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959985#comment-14959985
 ] 

Sergey Shelukhin commented on HIVE-11822:
-

I'll take a look. IIRC it runs Java 7 on Jenkins. I will update the patch if 
it fails again.

> vectorize NVL UDF
> -
>
> Key: HIVE-11822
> URL: https://issues.apache.org/jira/browse/HIVE-11822
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Takanobu Asanuma
> Attachments: HIVE-11822.1.patch, HIVE-11822.2.patch, 
> HIVE-11822.3.patch
>
>






[jira] [Updated] (HIVE-12201) Tez settings need to be shown in set -v output when execution engine is tez.

2015-10-15 Thread Vikram Dixit K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Dixit K updated HIVE-12201:
--
Attachment: HIVE-12201.1.patch

> Tez settings need to be shown in set -v output when execution engine is tez.
> 
>
> Key: HIVE-12201
> URL: https://issues.apache.org/jira/browse/HIVE-12201
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 1.0.1, 1.2.1
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
>Priority: Minor
> Attachments: HIVE-12201.1.patch
>
>
> The set -v output currently shows configurations for yarn, hdfs etc. but does 
> not show tez settings when tez is set as the execution engine.
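One plausible way for such a dump to include engine-specific settings is to filter the loaded properties by prefix and add the tez.* namespace only when tez is the active engine. A hedged sketch of that idea, not Hive's actual implementation (all property names and the `dump` helper are illustrative):

```java
import java.util.Map;
import java.util.Properties;
import java.util.TreeMap;

public class SetVSketch {
    // Collect every property whose name starts with one of the given
    // prefixes, sorted by name the way a "set -v" style dump would be.
    static Map<String, String> dump(Properties props, String... prefixes) {
        Map<String, String> out = new TreeMap<>();
        for (String name : props.stringPropertyNames()) {
            for (String prefix : prefixes) {
                if (name.startsWith(prefix)) {
                    out.put(name, props.getProperty(name));
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("hive.execution.engine", "tez");
        props.setProperty("tez.queue.name", "default");
        props.setProperty("yarn.resourcemanager.address", "rm:8032");

        // Include the tez.* namespace only when tez is the active engine,
        // mirroring how yarn/hdfs settings already show up in the output.
        boolean isTez = "tez".equals(props.getProperty("hive.execution.engine"));
        String[] prefixes = isTez
                ? new String[] {"hive.", "yarn.", "tez."}
                : new String[] {"hive.", "yarn."};
        System.out.println(dump(props, prefixes).keySet());
    }
}
```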





[jira] [Updated] (HIVE-12201) Tez settings need to be shown in set -v output when execution engine is tez.

2015-10-15 Thread Vikram Dixit K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Dixit K updated HIVE-12201:
--
Attachment: (was: HIVE-12201.1.patch)

> Tez settings need to be shown in set -v output when execution engine is tez.
> 
>
> Key: HIVE-12201
> URL: https://issues.apache.org/jira/browse/HIVE-12201
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 1.0.1, 1.2.1
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
>Priority: Minor
>
> The set -v output currently shows configurations for yarn, hdfs etc. but does 
> not show tez settings when tez is set as the execution engine.





[jira] [Updated] (HIVE-11985) don't store type names in metastore when metastore type names are not used

2015-10-15 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-11985:

Attachment: HIVE-11985.05.patch

> don't store type names in metastore when metastore type names are not used
> --
>
> Key: HIVE-11985
> URL: https://issues.apache.org/jira/browse/HIVE-11985
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-11985.01.patch, HIVE-11985.02.patch, 
> HIVE-11985.03.patch, HIVE-11985.05.patch, HIVE-11985.patch
>
>






[jira] [Updated] (HIVE-12056) Branch 1.1.1: root pom and itest pom are not linked

2015-10-15 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-12056:

Attachment: HIVE-12056.2.patch

[~csun] Good point. Made the change in v2.

> Branch 1.1.1: root pom and itest pom are not linked
> ---
>
> Key: HIVE-12056
> URL: https://issues.apache.org/jira/browse/HIVE-12056
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Affects Versions: 1.1.1
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-12056.1.patch, HIVE-12056.2.patch
>
>






[jira] [Commented] (HIVE-12181) Change hive.stats.fetch.column.stats default value to true

2015-10-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960016#comment-14960016
 ] 

Hive QA commented on HIVE-12181:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12766696/HIVE-12181.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1028 failed/errored test(s), 9694 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alias_casted_column
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_allcolref_in_udf
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_partition_coltype
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ambiguous_col
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_analyze_table_null_partition
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ansi_sql_arithmetic
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join0
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join15
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join16
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join17
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join18
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join18_multi_distinct
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join19
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join20
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join21
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join22
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join23
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join26
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join27
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join28
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join29
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join30
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join31
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join32
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join33
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_stats
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_stats2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_without_localtask
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_binary_output_format
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cast1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_rp_auto_join17
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_rp_gby2_map_multi_distinct
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_rp_outer_join_ppr
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cluster
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_combine2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_constantPropagateForSubQuery
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_constprog2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_constprog_dp
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer14

[jira] [Updated] (HIVE-12201) Tez settings need to be shown in set -v output when execution engine is tez.

2015-10-15 Thread Vikram Dixit K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Dixit K updated HIVE-12201:
--
Attachment: HIVE-12201.1.patch

> Tez settings need to be shown in set -v output when execution engine is tez.
> 
>
> Key: HIVE-12201
> URL: https://issues.apache.org/jira/browse/HIVE-12201
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 1.0.1, 1.2.1
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
>Priority: Minor
> Attachments: HIVE-12201.1.patch
>
>
> The set -v output currently shows configurations for yarn, hdfs etc. but does 
> not show tez settings when tez is set as the execution engine.





[jira] [Updated] (HIVE-11676) implement metastore API to do file footer PPD

2015-10-15 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-11676:

Attachment: HIVE-11676.04.patch

> implement metastore API to do file footer PPD
> -
>
> Key: HIVE-11676
> URL: https://issues.apache.org/jira/browse/HIVE-11676
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-11676.01.patch, HIVE-11676.02.patch, 
> HIVE-11676.03.patch, HIVE-11676.04.patch, HIVE-11676.patch
>
>
> Need to pass on the expression/sarg, extract column stats from footer (at 
> write time?) and then apply one to the other.





[jira] [Updated] (HIVE-12182) ALTER TABLE PARTITION COLUMN does not set partition column comments

2015-10-15 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-12182:
-
Assignee: Naveen Gangam  (was: Navis)

> ALTER TABLE PARTITION COLUMN does not set partition column comments
> ---
>
> Key: HIVE-12182
> URL: https://issues.apache.org/jira/browse/HIVE-12182
> Project: Hive
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 1.2.1
>Reporter: Lenni Kuff
>Assignee: Naveen Gangam
>
> ALTER TABLE PARTITION COLUMN does not set partition column comments. The 
> syntax is accepted, but the COMMENT for the column is ignored.
> {code}
> 0: jdbc:hive2://localhost:1/default> create table part_test(i int comment 
> 'HELLO') partitioned by (j int comment 'WORLD');
> No rows affected (0.104 seconds)
> 0: jdbc:hive2://localhost:1/default> describe part_test;
> +--------------------------+------------+----------+--+
> |         col_name         | data_type  | comment  |
> +--------------------------+------------+----------+--+
> | i                        | int        | HELLO    |
> | j                        | int        | WORLD    |
> |                          | NULL       | NULL     |
> | # Partition Information  | NULL       | NULL     |
> | # col_name               | data_type  | comment  |
> |                          | NULL       | NULL     |
> | j                        | int        | WORLD    |
> +--------------------------+------------+----------+--+
> 7 rows selected (0.109 seconds)
> 0: jdbc:hive2://localhost:1/default> alter table part_test partition 
> column (j int comment 'WIDE');
> No rows affected (0.121 seconds)
> 0: jdbc:hive2://localhost:1/default> describe part_test;
> +--------------------------+------------+----------+--+
> |         col_name         | data_type  | comment  |
> +--------------------------+------------+----------+--+
> | i                        | int        | HELLO    |
> | j                        | int        |          |
> |                          | NULL       | NULL     |
> | # Partition Information  | NULL       | NULL     |
> | # col_name               | data_type  | comment  |
> |                          | NULL       | NULL     |
> | j                        | int        |          |
> +--------------------------+------------+----------+--+
> 7 rows selected (0.108 seconds)
> {code}





[jira] [Updated] (HIVE-12182) ALTER TABLE PARTITION COLUMN does not set partition column comments

2015-10-15 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-12182:
-
Attachment: (was: HIVE-12182.1.patch.txt)

> ALTER TABLE PARTITION COLUMN does not set partition column comments
> ---
>
> Key: HIVE-12182
> URL: https://issues.apache.org/jira/browse/HIVE-12182
> Project: Hive
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 1.2.1
>Reporter: Lenni Kuff
>Assignee: Naveen Gangam
>
> ALTER TABLE PARTITION COLUMN does not set partition column comments. The 
> syntax is accepted, but the COMMENT for the column is ignored.
> {code}
> 0: jdbc:hive2://localhost:1/default> create table part_test(i int comment 
> 'HELLO') partitioned by (j int comment 'WORLD');
> No rows affected (0.104 seconds)
> 0: jdbc:hive2://localhost:1/default> describe part_test;
> +--------------------------+------------+----------+--+
> |         col_name         | data_type  | comment  |
> +--------------------------+------------+----------+--+
> | i                        | int        | HELLO    |
> | j                        | int        | WORLD    |
> |                          | NULL       | NULL     |
> | # Partition Information  | NULL       | NULL     |
> | # col_name               | data_type  | comment  |
> |                          | NULL       | NULL     |
> | j                        | int        | WORLD    |
> +--------------------------+------------+----------+--+
> 7 rows selected (0.109 seconds)
> 0: jdbc:hive2://localhost:1/default> alter table part_test partition 
> column (j int comment 'WIDE');
> No rows affected (0.121 seconds)
> 0: jdbc:hive2://localhost:1/default> describe part_test;
> +--------------------------+------------+----------+--+
> |         col_name         | data_type  | comment  |
> +--------------------------+------------+----------+--+
> | i                        | int        | HELLO    |
> | j                        | int        |          |
> |                          | NULL       | NULL     |
> | # Partition Information  | NULL       | NULL     |
> | # col_name               | data_type  | comment  |
> |                          | NULL       | NULL     |
> | j                        | int        |          |
> +--------------------------+------------+----------+--+
> 7 rows selected (0.108 seconds)
> {code}





[jira] [Updated] (HIVE-12164) Remove jdbc stats collection mechanism

2015-10-15 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-12164:

Attachment: HIVE-12164.1.patch

> Remove jdbc stats collection mechanism
> --
>
> Key: HIVE-12164
> URL: https://issues.apache.org/jira/browse/HIVE-12164
> Project: Hive
>  Issue Type: Task
>  Components: Statistics
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-12164.1.patch, HIVE-12164.patch
>
>
> Though there are some deployments using it, it is usually painful to set up, 
> since a valid hive-site.xml (containing connection details) is needed on all 
> task nodes, and for large jobs (with thousands of tasks) it results in a 
> scalability issue, with all of them hammering the DB at nearly the same time.
> Because of these pain points, alternative stats collection mechanisms were 
> added; the FS-based stats system has been the default for some time.
> We should remove the jdbc stats collection mechanism, as it needlessly adds 
> complexity in the TS and FS operators w.r.t. key handling.





[jira] [Commented] (HIVE-11676) implement metastore API to do file footer PPD

2015-10-15 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959790#comment-14959790
 ] 

Sergey Shelukhin commented on HIVE-11676:
-

Future work is handled by HIVE-12051 (pushing into HBase).
As for the proto file, it's out of the patch because I forgot to git add it :) 
it's in the 02 patch; will add it back.
The reason for a separate file is that it doesn't fit the existing files: it's 
not directly an ORC structure, and it's not specific to the HBase metastore (at 
least in theory).

> implement metastore API to do file footer PPD
> -
>
> Key: HIVE-11676
> URL: https://issues.apache.org/jira/browse/HIVE-11676
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-11676.01.patch, HIVE-11676.02.patch, 
> HIVE-11676.03.patch, HIVE-11676.patch
>
>
> Need to pass on the expression/sarg, extract column stats from footer (at 
> write time?) and then apply one to the other.





[jira] [Commented] (HIVE-12062) enable HBase metastore file metadata cache for tez tests

2015-10-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959890#comment-14959890
 ] 

Hive QA commented on HIVE-12062:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12766695/HIVE-12062.01.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 9690 tests executed
*Failed tests:*
{noformat}
TestSSL - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_index_auto_mult_tables_compact
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_explainuser_1
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_explainuser_3
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.jdbc.TestJdbcWithLocalClusterSpark.testTempTable
org.apache.tez.dag.app.rm.TestLlapTaskSchedulerService.testSimpleNoLocalityAllocation
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5667/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5667/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5667/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12766695 - PreCommit-HIVE-TRUNK-Build

> enable HBase metastore file metadata cache for tez tests
> 
>
> Key: HIVE-12062
> URL: https://issues.apache.org/jira/browse/HIVE-12062
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12062.01.patch, HIVE-12062.patch
>
>






[jira] [Assigned] (HIVE-11526) LLAP: implement LLAP UI as a separate service

2015-10-15 Thread Yuya OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuya OZAWA reassigned HIVE-11526:
-

Assignee: Yuya OZAWA  (was: Kai Sasaki)

> LLAP: implement LLAP UI as a separate service
> -
>
> Key: HIVE-11526
> URL: https://issues.apache.org/jira/browse/HIVE-11526
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Yuya OZAWA
> Attachments: llap_monitor_design.pdf
>
>
> The specifics are vague at this point. 
> Hadoop metrics can be output, along with the metrics we collect and output in 
> jmx, and those we collect per fragment and log right now. 
> This service can do LLAP-specific views, and per-query aggregation.
> [~gopalv] may have some information on how to reuse existing solutions for 
> part of the work.





[jira] [Updated] (HIVE-11110) Reorder applyPreJoinOrderingTransforms, add NotNULL/FilterMerge rules, improve Filter selectivity estimation

2015-10-15 Thread Laljo John Pullokkaran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-0?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laljo John Pullokkaran updated HIVE-0:
--
Attachment: HIVE-0.19.patch

> Reorder applyPreJoinOrderingTransforms, add NotNULL/FilterMerge rules, 
> improve Filter selectivity estimation
> 
>
> Key: HIVE-0
> URL: https://issues.apache.org/jira/browse/HIVE-0
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Laljo John Pullokkaran
> Attachments: HIVE-0-10.patch, HIVE-0-11.patch, 
> HIVE-0-12.patch, HIVE-0-branch-1.2.patch, HIVE-0.1.patch, 
> HIVE-0.13.patch, HIVE-0.14.patch, HIVE-0.15.patch, 
> HIVE-0.16.patch, HIVE-0.17.patch, HIVE-0.18.patch, 
> HIVE-0.19.patch, HIVE-0.2.patch, HIVE-0.4.patch, 
> HIVE-0.5.patch, HIVE-0.6.patch, HIVE-0.7.patch, 
> HIVE-0.8.patch, HIVE-0.9.patch, HIVE-0.91.patch, 
> HIVE-0.92.patch, HIVE-0.patch
>
>
> Query
> {code}
> select  count(*)
>  from store_sales
>  ,store_returns
>  ,date_dim d1
>  ,date_dim d2
>  where d1.d_quarter_name = '2000Q1'
>and d1.d_date_sk = ss_sold_date_sk
>and ss_customer_sk = sr_customer_sk
>and ss_item_sk = sr_item_sk
>and ss_ticket_number = sr_ticket_number
>and sr_returned_date_sk = d2.d_date_sk
>and d2.d_quarter_name in ('2000Q1','2000Q2','2000Q3');
> {code}
> The store_sales table is partitioned on ss_sold_date_sk, which is also used 
> in a join clause. The join clause should add a filter "filterExpr: 
> ss_sold_date_sk is not null", which should get pushed to the MetaStore when 
> fetching the stats. Currently this is not done in CBO planning, which results 
> in the stats from __HIVE_DEFAULT_PARTITION__ being fetched and considered in 
> the optimization phase. In particular, this increases the NDV for the join 
> columns and may result in wrong planning.
> Including HiveJoinAddNotNullRule in the optimization phase solves this issue.
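
A back-of-the-envelope illustration of why the inflated NDV matters. This uses 
the generic textbook join-cardinality estimate, not Hive's exact formula, and 
the class and numbers are hypothetical:

```java
// Generic join-size estimate: |L| * |R| / max(ndv(L.k), ndv(R.k)).
// If stats from __HIVE_DEFAULT_PARTITION__ inflate the NDV of the join
// column, the denominator grows and the join output is underestimated,
// which can lead the optimizer to a worse join order.
public class NdvEstimate {
    static long joinRows(long leftRows, long rightRows, long ndvLeft, long ndvRight) {
        return (leftRows * rightRows) / Math.max(ndvLeft, ndvRight);
    }

    public static void main(String[] args) {
        // Same inputs, correct vs. inflated NDV on one side.
        System.out.println(joinRows(1_000_000L, 500_000L, 1_000L, 1_000L)); // 500000000
        System.out.println(joinRows(1_000_000L, 500_000L, 1_000L, 5_000L)); // 100000000
    }
}
```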





[jira] [Updated] (HIVE-11895) CBO: Calcite Operator To Hive Operator (Calcite Return Path): fix udaf_percentile_approx_23.q

2015-10-15 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-11895:
---
Attachment: HIVE-11895.02.patch

> CBO: Calcite Operator To Hive Operator (Calcite Return Path): fix 
> udaf_percentile_approx_23.q
> -
>
> Key: HIVE-11895
> URL: https://issues.apache.org/jira/browse/HIVE-11895
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-11895.01.patch, HIVE-11895.02.patch
>
>
> Due to a type conversion problem.





[jira] [Commented] (HIVE-11985) don't store type names in metastore when metastore type names are not used

2015-10-15 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959943#comment-14959943
 ] 

Sergey Shelukhin commented on HIVE-11985:
-

I don't understand the (old) alter serde logic at all... it only sets fields if 
the new serde doesn't have metastore-based schema. That doesn't make any sense.

> don't store type names in metastore when metastore type names are not used
> --
>
> Key: HIVE-11985
> URL: https://issues.apache.org/jira/browse/HIVE-11985
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-11985.01.patch, HIVE-11985.02.patch, 
> HIVE-11985.03.patch, HIVE-11985.patch
>
>






[jira] [Commented] (HIVE-10937) LLAP: make ObjectCache for plans work properly in the daemon

2015-10-15 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959965#comment-14959965
 ] 

Sergey Shelukhin commented on HIVE-10937:
-

Hmm, looks like it no longer works, some paths are invalid. Will look later.

> LLAP: make ObjectCache for plans work properly in the daemon
> 
>
> Key: HIVE-10937
> URL: https://issues.apache.org/jira/browse/HIVE-10937
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-10937.01.patch, HIVE-10937.02.patch, 
> HIVE-10937.03.patch, HIVE-10937.04.patch, HIVE-10937.patch
>
>
> There's a perf hit otherwise, esp. when the planner creates 1009 reducers of 
> 4Mb each.





[jira] [Assigned] (HIVE-11650) Create LLAP Monitor Daemon class and launch scripts

2015-10-15 Thread Yuya OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuya OZAWA reassigned HIVE-11650:
-

Assignee: Yuya OZAWA  (was: Kai Sasaki)

> Create LLAP Monitor Daemon class and launch scripts
> ---
>
> Key: HIVE-11650
> URL: https://issues.apache.org/jira/browse/HIVE-11650
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: llap
>Reporter: Kai Sasaki
>Assignee: Yuya OZAWA
> Attachments: HIVE-11650-llap.00.patch, Screen Shot 2015-08-26 at 
> 16.54.35.png, example.patch
>
>
> This JIRA is for creating the LLAP Monitor Daemon class and related launch 
> scripts for the Slider package.





[jira] [Commented] (HIVE-11777) implement an option to have single ETL strategy for multiple directories

2015-10-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960110#comment-14960110
 ] 

Hive QA commented on HIVE-11777:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12766697/HIVE-11777.02.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 421 failed/errored test(s), 9694 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_join
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_vectorization
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_vectorization_partition
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_vectorization_project
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_delete
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_delete_own_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_update
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_update_own_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_char_serde
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_date_serde
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_decimal_join2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_delete_all_non_partitioned
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_delete_all_partitioned
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_delete_orig_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_delete_tmp_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_delete_where_no_match
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_delete_where_non_partitioned
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_delete_where_partitioned
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_delete_whole_partition
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dynpart_sort_optimization2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dynpart_sort_optimization_acid
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_implicit_cast_during_insert
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_acid_dynamic_partition
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_acid_not_bucketed
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_into_with_schema
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_nonacid_from_acid
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_orig_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_update_delete
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_values_acid_not_bucketed
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_values_dynamic_partitioned
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_values_non_partitioned
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_values_orig_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_values_partitioned
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_values_tmp_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_llap_partitioned
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_create
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_createas1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_dictionary_threshold
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_diff_part_cols
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_diff_part_cols2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_empty_strings
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_ends_with_nulls
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_file_dump
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_int_type_promotion
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_llap
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge_incompat1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge_incompat2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_ppd_boolean
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_ppd_char
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_ppd_date
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_ppd_decimal
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_ppd_timestamp
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_ppd_varchar

[jira] [Commented] (HIVE-12179) Add option to not add spark-assembly.jar to Hive classpath

2015-10-15 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959530#comment-14959530
 ] 

Ashutosh Chauhan commented on HIVE-12179:
-

+1 LGTM

> Add option to not add spark-assembly.jar to Hive classpath
> --
>
> Key: HIVE-12179
> URL: https://issues.apache.org/jira/browse/HIVE-12179
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-12179.1.patch, HIVE-12179.2.patch
>
>
> After running the following Hive script:
> {noformat}
> add jar hdfs:///tmp/junit-4.11.jar;
> show tables;
> {noformat}
> I can see the following lines getting printed to stdout when Hive exits:
> {noformat}
> WARN: The method class 
> org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked.
> WARN: Please see http://www.slf4j.org/codes.html#release for an explanation.
> {noformat}
> Also seeing the following warnings in stderr:
> {noformat}
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/2.3.3.0-2981/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/2.3.3.0-2981/spark/lib/spark-assembly-1.4.1.2.3.3.0-2981-hadoop2.7.1.2.3.3.0-2981.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/2.3.3.0-2981/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/2.3.3.0-2981/spark/lib/spark-assembly-1.4.1.2.3.3.0-2981-hadoop2.7.1.2.3.3.0-2981.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {noformat}
> It looks like this is due to the addition of the shaded spark-assembly.jar to 
> the classpath, which contains classes from icl-over-slf4j.jar (which is 
> causing the stdout messages) and slf4j-log4j12.jar.
> Removing spark-assembly.jar from being added to the classpath causes these 
> messages to go away. It would be good to have a way to specify that Hive not 
> add spark-assembly.jar to the class path.





[jira] [Commented] (HIVE-11710) Beeline embedded mode doesn't output query progress after setting any session property

2015-10-15 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959601#comment-14959601
 ] 

Aihua Xu commented on HIVE-11710:
-

Attached the new patch, which closes the streams if there is an exception in 
HiveCommandOperation, and resets the streams for SQLOperation to standard 
out/err when initializing.

> Beeline embedded mode doesn't output query progress after setting any session 
> property
> --
>
> Key: HIVE-11710
> URL: https://issues.apache.org/jira/browse/HIVE-11710
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 2.0.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-11710.2.patch, HIVE-11710.3.patch, 
> HIVE-11710.4.patch, HIVE-11710.patch
>
>
> Connect to beeline embedded mode {{beeline -u jdbc:hive2://}}. Then set 
> anything in the session like {{set aa=true;}}.
> After that, any query like {{select count(*) from src;}} will only output 
> result but no query progress.





[jira] [Updated] (HIVE-12063) Pad Decimal numbers with trailing zeros to the scale of the column

2015-10-15 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-12063:
---
Attachment: HIVE-12063.1.patch

> Pad Decimal numbers with trailing zeros to the scale of the column
> --
>
> Key: HIVE-12063
> URL: https://issues.apache.org/jira/browse/HIVE-12063
> Project: Hive
>  Issue Type: Improvement
>  Components: Types
>Affects Versions: 0.14.0, 1.0.0, 1.2.0, 1.1.0, 0.13
>Reporter: Xuefu Zhang
>Assignee: Xuefu Zhang
> Attachments: HIVE-12063.1.patch, HIVE-12063.patch
>
>
> HIVE-7373 was to address the problems of Hive trimming trailing zeros, 
> which caused many issues, including treating 0.0, 0.00 and so on as 0, which 
> has a different precision/scale. Please refer to the HIVE-7373 description. 
> However, HIVE-7373 was reverted by HIVE-8745 while the underlying problems 
> remained. HIVE-11835 was resolved recently to address one of the problems, 
> where 0.0, 0.00, and so on could not be read into decimal(1,1).
> However, HIVE-11835 didn't address the problem of showing 0 in the query 
> result for decimal values such as 0.0, 0.00, etc. This causes confusion, as 
> 0.0 and 0.00 have a different precision/scale than 0.
> The proposal here is to pad query results with zeros to the type's scale. This 
> not only removes the confusion described above, but also aligns with many 
> other DBs. The internal decimal representation doesn't change, however.
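
The difference between trimmed and padded rendering can be seen with 
java.math.BigDecimal. This is a sketch of the display behavior only, not 
Hive's actual formatting code:

```java
import java.math.BigDecimal;

public class DecimalPad {
    public static void main(String[] args) {
        // For a decimal(3,2) value, trimming drops trailing zeros while
        // padding renders the value at the column's declared scale.
        BigDecimal v = new BigDecimal("1.10");
        System.out.println(v.stripTrailingZeros().toPlainString()); // 1.1
        System.out.println(v.setScale(2).toPlainString());          // 1.10
        // The same applies to zero: trimmed, 0.00 shows as "0" and the
        // visible scale is lost; padded, it stays "0.00".
    }
}
```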





[jira] [Updated] (HIVE-12053) Stats performance regression caused by HIVE-11786

2015-10-15 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-12053:

Fix Version/s: 1.3.0

> Stats performance regression caused by HIVE-11786
> -
>
> Key: HIVE-12053
> URL: https://issues.apache.org/jira/browse/HIVE-12053
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Fix For: 1.3.0, 2.0.0
>
> Attachments: HIVE-12053.patch
>
>
> HIVE-11786 tried to normalize table TAB_COL_STATS/PART_COL_STATS but caused 
> performance regression.





[jira] [Commented] (HIVE-12053) Stats performance regression caused by HIVE-11786

2015-10-15 Thread Chaoyu Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959623#comment-14959623
 ] 

Chaoyu Tang commented on HIVE-12053:


Thanks [~sershe].

> Stats performance regression caused by HIVE-11786
> -
>
> Key: HIVE-12053
> URL: https://issues.apache.org/jira/browse/HIVE-12053
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Fix For: 1.3.0, 2.0.0
>
> Attachments: HIVE-12053.patch
>
>
> HIVE-11786 tried to normalize table TAB_COL_STATS/PART_COL_STATS but caused 
> performance regression.





[jira] [Updated] (HIVE-12198) LLAP: reader failures when querying uncompressed data

2015-10-15 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-12198:

Attachment: HIVE-12198.patch

> LLAP: reader failures when querying uncompressed data
> -
>
> Key: HIVE-12198
> URL: https://issues.apache.org/jira/browse/HIVE-12198
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Gopal V
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12198.patch
>
>
> {code}
> hive> select sum(l_extendedprice * l_discount) as revenue from 
> testing.lineitem where l_shipdate >= '1993-01-01' and l_shipdate < 
> '1994-01-01' ;
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl$UncompressedCacheChunk
>  cannot be cast to 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$BufferChunk
> at 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.copyAndReplaceUncompressedChunks(EncodedReaderImpl.java:962)
> at 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.preReadUncompressedStream(EncodedReaderImpl.java:890)
> at 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.readEncodedColumns(EncodedReaderImpl.java:319)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.performDataRead(OrcEncodedDataReader.java:413)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:194)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:191)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:191)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:74)
> at 
> org.apache.hadoop.hive.common.CallableWithNdc.call(CallableWithNdc.java:37)
> ... 4 more
> {code}





[jira] [Updated] (HIVE-12198) LLAP: reader failures when querying uncompressed data

2015-10-15 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-12198:

Attachment: (was: HIVE-12171.patch)

> LLAP: reader failures when querying uncompressed data
> -
>
> Key: HIVE-12198
> URL: https://issues.apache.org/jira/browse/HIVE-12198
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Gopal V
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12198.patch
>
>
> {code}
> hive> select sum(l_extendedprice * l_discount) as revenue from 
> testing.lineitem where l_shipdate >= '1993-01-01' and l_shipdate < 
> '1994-01-01' ;
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl$UncompressedCacheChunk
>  cannot be cast to 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$BufferChunk
> at 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.copyAndReplaceUncompressedChunks(EncodedReaderImpl.java:962)
> at 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.preReadUncompressedStream(EncodedReaderImpl.java:890)
> at 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.readEncodedColumns(EncodedReaderImpl.java:319)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.performDataRead(OrcEncodedDataReader.java:413)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:194)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:191)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:191)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:74)
> at 
> org.apache.hadoop.hive.common.CallableWithNdc.call(CallableWithNdc.java:37)
> ... 4 more
> {code}





[jira] [Commented] (HIVE-12053) Stats performance regression caused by HIVE-11786

2015-10-15 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959609#comment-14959609
 ] 

Sergey Shelukhin commented on HIVE-12053:
-

binding +1 :)

> Stats performance regression caused by HIVE-11786
> -
>
> Key: HIVE-12053
> URL: https://issues.apache.org/jira/browse/HIVE-12053
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Attachments: HIVE-12053.patch
>
>
> HIVE-11786 tried to normalize table TAB_COL_STATS/PART_COL_STATS but caused 
> performance regression.





[jira] [Commented] (HIVE-12062) enable HBase metastore file metadata cache for tez tests

2015-10-15 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959645#comment-14959645
 ] 

Sergey Shelukhin commented on HIVE-12062:
-

startMiniHBase.. actually sets it to the correct config, including the HBase 
settings like the port.

> enable HBase metastore file metadata cache for tez tests
> 
>
> Key: HIVE-12062
> URL: https://issues.apache.org/jira/browse/HIVE-12062
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12062.01.patch, HIVE-12062.patch
>
>






[jira] [Commented] (HIVE-12062) enable HBase metastore file metadata cache for tez tests

2015-10-15 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959707#comment-14959707
 ] 

Alan Gates commented on HIVE-12062:
---

Ok, then +1

> enable HBase metastore file metadata cache for tez tests
> 
>
> Key: HIVE-12062
> URL: https://issues.apache.org/jira/browse/HIVE-12062
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12062.01.patch, HIVE-12062.patch
>
>






[jira] [Commented] (HIVE-12170) normalize HBase metastore connection configuration

2015-10-15 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959785#comment-14959785
 ] 

Sergey Shelukhin commented on HIVE-12170:
-

Patch looks good. One question - if the semantics are that we setConf once and 
then always use it, should it detect cases when setConf is called with 
different conf and fail?

> normalize HBase metastore connection configuration
> --
>
> Key: HIVE-12170
> URL: https://issues.apache.org/jira/browse/HIVE-12170
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HIVE-12170.patch
>
>
> Right now there are two ways to get HBaseReadWrite instance in metastore. 
> Both get a threadlocal instance (is there a good reason for that?).
> 1) One is w/o conf and only works if someone called the (2) before, from any 
> thread.
> 2) The other blindly sets a static conf and then gets an instance with that 
> conf, or if someone already happened to call (1) or (2) from this thread, it 
> returns the existing instance with whatever conf was set before (but still 
> resets the current conf to new conf).
> This doesn't make sense even in an already-thread-safe case (like linear 
> CLI-based tests), and can easily lead to bugs as described; the config 
> propagation logic is not good (example - HIVE-12167); some calls just reset 
> config blindly, so there's no point in setting staticConf, other than for the 
> callers of method (1) above who don't have a conf and would rely on the 
> static (which is bad design).
> Having connections with different configs reliably is not possible, and 
> multi-threaded cases would also break - you could even set a conf, have it 
> reset, and get an instance with somebody else's conf. 
> The static should definitely be removed, maybe the threadlocal too 
> (HConnection is thread-safe).
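
The set-once semantics raised in the comments could look roughly like this. 
This is a hypothetical sketch: ConnFactory and the String-typed conf are 
stand-ins for HBaseReadWrite and its configuration object:

```java
import java.util.concurrent.atomic.AtomicReference;

// Accept the first configuration and reject any later attempt to install a
// different one, so no caller can silently observe somebody else's conf.
public class ConnFactory {
    private static final AtomicReference<String> CONF = new AtomicReference<>();

    static void setConf(String conf) {
        // compareAndSet succeeds only for the first caller; later callers
        // must pass an equal conf or fail loudly.
        if (!CONF.compareAndSet(null, conf) && !CONF.get().equals(conf)) {
            throw new IllegalStateException("conf already set to a different value");
        }
    }

    static String getConf() {
        String c = CONF.get();
        if (c == null) {
            throw new IllegalStateException("setConf was never called");
        }
        return c;
    }
}
```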





[jira] [Commented] (HIVE-11901) StorageBasedAuthorizationProvider requires write permission on table for SELECT statements

2015-10-15 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959584#comment-14959584
 ] 

Thejas M Nair commented on HIVE-11901:
--

[~chengbing.liu] It is better to include the tests with the fix as far as 
possible. Otherwise, the tests often don't get added, and we won't notice the 
regression if it happens again.

Please take a look at the test cases in 
TestStorageBasedMetastoreAuthorizationReads or 
TestStorageBasedMetastoreAuthorizationDrops for examples on how to create the 
test case.
Let me know if you need help with that.


> StorageBasedAuthorizationProvider requires write permission on table for 
> SELECT statements
> --
>
> Key: HIVE-11901
> URL: https://issues.apache.org/jira/browse/HIVE-11901
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Affects Versions: 1.2.1
>Reporter: Chengbing Liu
>Assignee: Chengbing Liu
> Attachments: HIVE-11901.01.patch
>
>
> With HIVE-7895, it will require write permission on the table directory even 
> for a SELECT statement.
> Looking at the stacktrace, it seems the method 
> {{StorageBasedAuthorizationProvider#authorize(Table table, Partition part, 
> Privilege[] readRequiredPriv, Privilege[] writeRequiredPriv)}} always treats 
> a null partition as a CREATE statement, which can also be a SELECT.
> We may have to check {{readRequiredPriv}} and {{writeRequiredPriv}} first   
> in order to tell which statement it is.





[jira] [Commented] (HIVE-12194) Incorrect result when using from_utc_timestamp with the local time zone.

2015-10-15 Thread Ryan Blue (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959614#comment-14959614
 ] 

Ryan Blue commented on HIVE-12194:
--

This may be because Hive doesn't recognize daylight saving time zone abbreviations:

{code}
hive> select from_utc_timestamp('2015-04-11 12:24:34.535', 'MDT');
2015-04-11 12:24:34.535
{code}

MST and PST are valid zones and appear to work correctly.
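
Assuming Hive resolves zone names via java.util.TimeZone (a sketch of the suspected cause, not a trace through Hive's actual code path), the behavior is easy to reproduce: unrecognized IDs like "MDT" silently fall back to GMT, which would make the conversion a no-op.

```java
import java.util.TimeZone;

public class TzFallbackDemo {
    public static void main(String[] args) {
        // Recognized IDs come back unchanged...
        System.out.println(TimeZone.getTimeZone("MST").getID()); // MST
        System.out.println(TimeZone.getTimeZone("PST").getID()); // PST
        // ...but daylight-time abbreviations are not valid TimeZone IDs,
        // so getTimeZone silently falls back to GMT (offset 0).
        System.out.println(TimeZone.getTimeZone("MDT").getID()); // GMT
        System.out.println(TimeZone.getTimeZone("PDT").getID()); // GMT
    }
}
```

A GMT fallback (offset 0) applied to a UTC timestamp leaves it unchanged, matching the unchanged output above.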

> Incorrect result when using from_utc_timestamp with the local time zone.
> 
>
> Key: HIVE-12194
> URL: https://issues.apache.org/jira/browse/HIVE-12194
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 1.1.1
>Reporter: Ryan Blue
>
> When I call the {{from_utc_timestamp}} function (or {{to_utc_timestamp}}) 
> using my current time zone, the result is incorrect:
> {code}
> // CURRENT SERVER TIME ZONE IS PDT
> hive> select to_utc_timestamp('2015-10-13 09:15:34.101', 'PDT');
> 2015-10-13 09:15:34.101 // NOT CHANGED!
> hive> select to_utc_timestamp('2015-10-13 09:15:34.101', 'PST');
> 2015-10-13 16:15:34.101 // CORRECT VALUE FOR PST
> {code}





[jira] [Resolved] (HIVE-10502) Cannot specify log4j.properties file location in Beeline

2015-10-15 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang resolved HIVE-10502.

Resolution: Won't Fix

> Cannot specify log4j.properties file location in Beeline
> 
>
> Key: HIVE-10502
> URL: https://issues.apache.org/jira/browse/HIVE-10502
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 1.1.0
>Reporter: Szehon Ho
>Assignee: Chaoyu Tang
>
> In HiveCLI, HiveServer2, HMS, etc, the following is called early in the 
> startup to initialize log4j logging: LogUtils.initHiveLog4j().
> However, it seems this is not the case in Beeline, which also needs log4j, as 
> shown in the following stack trace:
> {noformat}
>   at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
>   at org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:66)
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:270)
>   at 
> org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:156)
>   at 
> org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:132)
>   at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:657)
>   at org.apache.hadoop.util.VersionInfo.(VersionInfo.java:37)
> {noformat}
> It would be good to be able to specify it, so it doesn't pick up the first one 
> in the classpath.





[jira] [Commented] (HIVE-12167) HBase metastore causes massive number of ZK exceptions in MiniTez tests

2015-10-15 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959639#comment-14959639
 ] 

Alan Gates commented on HIVE-12167:
---

Does now :).  I move we close this as a duplicate.

> HBase metastore causes massive number of ZK exceptions in MiniTez tests
> ---
>
> Key: HIVE-12167
> URL: https://issues.apache.org/jira/browse/HIVE-12167
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12167.patch
>
>
> I ran a random test (vectorization_10) with the HBase metastore for an 
> unrelated reason, and I see a large number of exceptions in hive.log
> {noformat}
> $ grep -c "ConnectionLoss" hive.log
> 52
> $ grep -c "Connection refused" hive.log
> 1014
> {noformat}
> The count of these log lines has increased by ~33% since merging the llap 
> branch, but it was already high before that (39/~700 for the same test). 
> These lines are not present if I disable the HBase metastore.
> The exceptions are:
> {noformat}
> 2015-10-13T17:51:06,232 WARN  [Thread-359-SendThread(localhost:2181)]: 
> zookeeper.ClientCnxn (ClientCnxn.java:run(1102)) - Session 0x0 for server 
> null, unexpected error, closing socket connection and attempting reconnect
> java.net.ConnectException: Connection refused
>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
> ~[?:1.8.0_45]
>   at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) 
> ~[?:1.8.0_45]
>   at 
> org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
>  ~[zookeeper-3.4.6.jar:3.4.6-1569965]
>   at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) 
> [zookeeper-3.4.6.jar:3.4.6-1569965]
> {noformat}
> that is retried for some seconds and then
> {noformat}
> 2015-10-13T17:51:22,867 WARN  [Thread-359]: zookeeper.ZKUtil 
> (ZKUtil.java:checkExists(544)) - hconnection-0x1da6ef180x0, 
> quorum=localhost:2181, baseZNode=/hbase Unable to set watcher on znode 
> (/hbase/hbaseid)
> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
> = ConnectionLoss for /hbase/hbaseid
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) 
> ~[zookeeper-3.4.6.jar:3.4.6-1569965]
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) 
> ~[zookeeper-3.4.6.jar:3.4.6-1569965]
>   at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045) 
> ~[zookeeper-3.4.6.jar:3.4.6-1569965]
>   at 
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:222)
>  ~[hbase-client-1.1.1.jar:1.1.1]
>   at 
> org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:541) 
> [hbase-client-1.1.1.jar:1.1.1]
>   at 
> org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65)
>  [hbase-client-1.1.1.jar:1.1.1]
>   at 
> org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:105)
>  [hbase-client-1.1.1.jar:1.1.1]
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:879)
>  [hbase-client-1.1.1.jar:1.1.1]
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:635)
>  [hbase-client-1.1.1.jar:1.1.1]
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method) ~[?:1.8.0_45]
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  [?:1.8.0_45]
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  [?:1.8.0_45]
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422) 
> [?:1.8.0_45]
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
>  [hbase-client-1.1.1.jar:1.1.1]
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:420)
>  [hbase-client-1.1.1.jar:1.1.1]
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager.createConnectionInternal(ConnectionManager.java:329)
>  [hbase-client-1.1.1.jar:1.1.1]
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:144)
>  [hbase-client-1.1.1.jar:1.1.1]
>   at 
> org.apache.hadoop.hive.metastore.hbase.VanillaHBaseConnection.connect(VanillaHBaseConnection.java:56)
>  [hive-metastore-2.0.0-SNAPSHOT.jar:?]
>   at 
> org.apache.hadoop.hive.metastore.hbase.HBaseReadWrite.<init>(HBaseReadWrite.java:227)
>  [hive-metastore-2.0.0-SNAPSHOT.jar:?]
>   at 
> org.apache.hadoop.hive.metastore.hbase.HBaseReadWrite.<init>(HBaseReadWrite.java:83)
>  [hive-metastore-2.0.0-SNAPSHOT.jar:?]
>   at 

[jira] [Updated] (HIVE-12199) LLAP: unnecessary preemption of the same stage

2015-10-15 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-12199:

Assignee: Siddharth Seth

> LLAP: unnecessary preemption of the same stage
> --
>
> Key: HIVE-12199
> URL: https://issues.apache.org/jira/browse/HIVE-12199
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Siddharth Seth
>
> 6-node cluster x 16 executors, nothing else running on it
> {code}
> hive> select sum(l_extendedprice * l_discount) as revenue from 
> testing.lineitem where l_shipdate >= '1993-01-01' and l_shipdate < 
> '1994-01-01' ;
> ...
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED  
> --
> Map 1 ..  llap SUCCEEDED17617600  
>  0   8  
> Reducer 2 ..  llap SUCCEEDED  1  100  
>  0   1 
> {code}
> those killed mappers are preempted:
> {code}
> 015-10-15 18:07:07,154 INFO [Dispatcher thread: Central] 
> history.HistoryEventHandler: 
> [HISTORY][DAG:dag_1442254312093_2158_1][Event:TASK_ATTEMPT_FINISHED]: 
> vertexName=Map 1, taskAttemptId=attempt_1442254312093_2158_1_00_000103_0, 
> startTime=1444946823795, finishTime=1444946827152, timeTaken=3357, 
> status=KILLED, errorEnum=EXTERNAL_PREEMPTION, diagnostics=Attempt preempted, 
> lastDataEventSourceTA=null, lastDataEventTime=0, counters=Counters: 1, 
> org.apache.tez.common.counters.DAGCounter, RACK_LOCAL_TASKS=1
> {code}
> There's no reason for mappers of the same stage to preempt each other in this 
> case
>  





[jira] [Updated] (HIVE-11565) LLAP: Some counters are incorrect

2015-10-15 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-11565:
--
Attachment: HIVE-11565.1.patch

Re-uploading for jenkins, again.

> LLAP: Some counters are incorrect
> -
>
> Key: HIVE-11565
> URL: https://issues.apache.org/jira/browse/HIVE-11565
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Siddharth Seth
> Attachments: HIVE-11565.1.patch, HIVE-11565.1.patch, HIVE-11565.1.txt
>
>
> 1) Tez counters for LLAP are incorrect.
> 2) Some counters, such as cache hit ratio for a fragment, are not propagated.
> We need to make sure that Tez counters for LLAP are usable. 





[jira] [Commented] (HIVE-11676) implement metastore API to do file footer PPD

2015-10-15 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959621#comment-14959621
 ] 

Alan Gates commented on HIVE-11676:
---

Some comments posted to the review board.

Also, a question about Metastore.java: why did you start a whole new protocol 
buffer class? And why is the resulting Java code in the patch but not the 
protobuf file?

> implement metastore API to do file footer PPD
> -
>
> Key: HIVE-11676
> URL: https://issues.apache.org/jira/browse/HIVE-11676
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-11676.01.patch, HIVE-11676.02.patch, 
> HIVE-11676.03.patch, HIVE-11676.patch
>
>
> Need to pass on the expression/sarg, extract column stats from footer (at 
> write time?) and then apply one to the other.





[jira] [Commented] (HIVE-12062) enable HBase metastore file metadata cache for tez tests

2015-10-15 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959637#comment-14959637
 ] 

Alan Gates commented on HIVE-12062:
---

{code}
if (useHBaseMetastore) {
  startMiniHBaseCluster();
} else {
  conf = new HiveConf(Driver.class);
}
{code}

Isn't this going to result in an NPE for HBaseMetastore test cases since conf 
won't be set?
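
The concern can be illustrated with a minimal stand-in (hypothetical field and method names, not the actual test harness): when the assignment lives only on the else branch, the HBase path leaves the field null and any later dereference throws.

```java
public class BranchInitDemo {
    static StringBuilder conf; // stands in for the HiveConf field

    static void init(boolean useHBaseMetastore) {
        if (useHBaseMetastore) {
            // startMiniHBaseCluster() stand-in: note conf is never assigned here
        } else {
            conf = new StringBuilder("default");
        }
    }

    public static void main(String[] args) {
        init(true);
        System.out.println(conf == null); // true: a later conf.toString() would NPE
        init(false);
        System.out.println(conf == null); // false: the else branch set it
    }
}
```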

> enable HBase metastore file metadata cache for tez tests
> 
>
> Key: HIVE-12062
> URL: https://issues.apache.org/jira/browse/HIVE-12062
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12062.01.patch, HIVE-12062.patch
>
>






[jira] [Updated] (HIVE-12198) LLAP: reader failures when querying uncompressed data

2015-10-15 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-12198:

Attachment: HIVE-12171.patch

> LLAP: reader failures when querying uncompressed data
> -
>
> Key: HIVE-12198
> URL: https://issues.apache.org/jira/browse/HIVE-12198
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Gopal V
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12171.patch
>
>
> {code}
> hive> select sum(l_extendedprice * l_discount) as revenue from 
> testing.lineitem where l_shipdate >= '1993-01-01' and l_shipdate < 
> '1994-01-01' ;
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl$UncompressedCacheChunk
>  cannot be cast to 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$BufferChunk
> at 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.copyAndReplaceUncompressedChunks(EncodedReaderImpl.java:962)
> at 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.preReadUncompressedStream(EncodedReaderImpl.java:890)
> at 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.readEncodedColumns(EncodedReaderImpl.java:319)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.performDataRead(OrcEncodedDataReader.java:413)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:194)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:191)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:191)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:74)
> at 
> org.apache.hadoop.hive.common.CallableWithNdc.call(CallableWithNdc.java:37)
> ... 4 more
> {code}





[jira] [Updated] (HIVE-12198) LLAP: reader failures when querying uncompressed data

2015-10-15 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-12198:

Attachment: HIVE-12171.patch

[~gopalv] can you review? Stupid error - we cast a buffer that we don't even 
need to check, because the cast happens before the loop counter check.

> LLAP: reader failures when querying uncompressed data
> -
>
> Key: HIVE-12198
> URL: https://issues.apache.org/jira/browse/HIVE-12198
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Gopal V
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12171.patch
>
>
> {code}
> hive> select sum(l_extendedprice * l_discount) as revenue from 
> testing.lineitem where l_shipdate >= '1993-01-01' and l_shipdate < 
> '1994-01-01' ;
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl$UncompressedCacheChunk
>  cannot be cast to 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$BufferChunk
> at 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.copyAndReplaceUncompressedChunks(EncodedReaderImpl.java:962)
> at 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.preReadUncompressedStream(EncodedReaderImpl.java:890)
> at 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.readEncodedColumns(EncodedReaderImpl.java:319)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.performDataRead(OrcEncodedDataReader.java:413)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:194)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:191)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:191)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:74)
> at 
> org.apache.hadoop.hive.common.CallableWithNdc.call(CallableWithNdc.java:37)
> ... 4 more
> {code}





[jira] [Updated] (HIVE-12198) LLAP: reader failures when querying uncompressed data

2015-10-15 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-12198:

Attachment: (was: HIVE-12171.patch)

> LLAP: reader failures when querying uncompressed data
> -
>
> Key: HIVE-12198
> URL: https://issues.apache.org/jira/browse/HIVE-12198
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Gopal V
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12171.patch
>
>
> {code}
> hive> select sum(l_extendedprice * l_discount) as revenue from 
> testing.lineitem where l_shipdate >= '1993-01-01' and l_shipdate < 
> '1994-01-01' ;
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl$UncompressedCacheChunk
>  cannot be cast to 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$BufferChunk
> at 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.copyAndReplaceUncompressedChunks(EncodedReaderImpl.java:962)
> at 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.preReadUncompressedStream(EncodedReaderImpl.java:890)
> at 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.readEncodedColumns(EncodedReaderImpl.java:319)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.performDataRead(OrcEncodedDataReader.java:413)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:194)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:191)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:191)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:74)
> at 
> org.apache.hadoop.hive.common.CallableWithNdc.call(CallableWithNdc.java:37)
> ... 4 more
> {code}





[jira] [Commented] (HIVE-11788) Column stats should be preserved after db/table/partition rename

2015-10-15 Thread Chaoyu Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959658#comment-14959658
 ] 

Chaoyu Tang commented on HIVE-11788:


Keeping the db/table/partition name columns in these stats tables might help 
improve the performance of retrieving stats data. We need to update these 
columns when renaming a db/table/partition.

> Column stats should be preserved after db/table/partition rename
> 
>
> Key: HIVE-11788
> URL: https://issues.apache.org/jira/browse/HIVE-11788
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore, Statistics
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
>
> Currently we simply delete the column stats after renaming a database, table, 
> or partition since there was not an easy way in HMS to update the DB_NAME, 
> TABLE_NAME and PARTITION_NAME in TAB_COL_STATS and PART_COL_STATS. With the 
> removal of these redundant columns in these tables (HIVE-11786), we will 
> still keep column stats in the operation which is not to change a column name 
> or type.





[jira] [Resolved] (HIVE-11787) Remove the redundant columns in TAB_COL_STATS and PART_COL_STATS

2015-10-15 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang resolved HIVE-11787.

Resolution: Won't Fix

The normalization of these stats tables has been observed to cause some 
performance degradation, so we leave them as-is for now as a trade-off for 
better performance.

> Remove the redundant columns in TAB_COL_STATS and PART_COL_STATS
> 
>
> Key: HIVE-11787
> URL: https://issues.apache.org/jira/browse/HIVE-11787
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
>
> After HIVE-11786 deprecates the use of redundant columns in TAB_COL_STATS and 
> PART_COL_STATS at HMS code level, the column DB_NAME/TABLE_NAME in 
> TAB_COL_STATS and DB_NAME/TABLE_NAME/PARTITION_NAME in PART_COL_STATS are 
> useless and should be removed.





[jira] [Updated] (HIVE-12026) Add test case to check permissions when truncating partition

2015-10-15 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-12026:
--
Attachment: HIVE-12026.2.patch

Adding a permissions check after TRUNCATE TABLE per [~chinnalalam]'s suggestion

> Add test case to check permissions when truncating partition
> 
>
> Key: HIVE-12026
> URL: https://issues.apache.org/jira/browse/HIVE-12026
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-12026.1.patch, HIVE-12026.2.patch
>
>
> Add to the tests added during HIVE-9474, for TRUNCATE PARTITION





[jira] [Updated] (HIVE-12198) LLAP: reader failures when querying uncompressed data

2015-10-15 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-12198:

Description: 
{code}
hive> select sum(l_extendedprice * l_discount) as revenue from testing.lineitem 
where l_shipdate >= '1993-01-01' and l_shipdate < '1994-01-01' ;

Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl$UncompressedCacheChunk
 cannot be cast to org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$BufferChunk
at 
org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.copyAndReplaceUncompressedChunks(EncodedReaderImpl.java:962)
at 
org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.preReadUncompressedStream(EncodedReaderImpl.java:890)
at 
org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.readEncodedColumns(EncodedReaderImpl.java:319)
at 
org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.performDataRead(OrcEncodedDataReader.java:413)
at 
org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:194)
at 
org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:191)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at 
org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:191)
at 
org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:74)
at 
org.apache.hadoop.hive.common.CallableWithNdc.call(CallableWithNdc.java:37)
... 4 more
{code}

  was:
{code}
hive> select sum(l_extendedprice * l_discount) as revenue from testing.lineitem 
where l_shipdate >= '1993-01-01' and l_shipdate < '1994-01-01' ;

Caused by: 
org.apache.hadoop.hive.common.io.Allocator$AllocatorOutOfMemoryException: 
Failed to allocate 492; at 0 out of 1
at 
org.apache.hadoop.hive.llap.cache.BuddyAllocator.allocateMultiple(BuddyAllocator.java:176)
at 
org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.preReadUncompressedStream(EncodedReaderImpl.java:882)
at 
org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.readEncodedColumns(EncodedReaderImpl.java:319)
at 
org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.performDataRead(OrcEncodedDataReader.java:413)
at 
org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:194)
at 
org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:191)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at 
org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:191)
at 
org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:74)
at 
org.apache.hadoop.hive.common.CallableWithNdc.call(CallableWithNdc.java:37)
... 4 more
{code}

{code}
Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl$UncompressedCacheChunk
 cannot be cast to org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$BufferChunk
at 
org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.copyAndReplaceUncompressedChunks(EncodedReaderImpl.java:962)
at 
org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.preReadUncompressedStream(EncodedReaderImpl.java:890)
at 
org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.readEncodedColumns(EncodedReaderImpl.java:319)
at 
org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.performDataRead(OrcEncodedDataReader.java:413)
at 
org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:194)
at 
org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:191)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at 
org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:191)
at 
org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:74)
at 
org.apache.hadoop.hive.common.CallableWithNdc.call(CallableWithNdc.java:37)
... 4 more
{code}


> LLAP: reader failures when querying uncompressed data
> 

[jira] [Updated] (HIVE-12198) LLAP: reader failures when querying uncompressed data

2015-10-15 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-12198:

Attachment: (was: HIVE-12198.patch)

> LLAP: reader failures when querying uncompressed data
> -
>
> Key: HIVE-12198
> URL: https://issues.apache.org/jira/browse/HIVE-12198
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Gopal V
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12198.patch
>
>
> {code}
> hive> select sum(l_extendedprice * l_discount) as revenue from 
> testing.lineitem where l_shipdate >= '1993-01-01' and l_shipdate < 
> '1994-01-01' ;
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl$UncompressedCacheChunk
>  cannot be cast to 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$BufferChunk
> at 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.copyAndReplaceUncompressedChunks(EncodedReaderImpl.java:962)
> at 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.preReadUncompressedStream(EncodedReaderImpl.java:890)
> at 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.readEncodedColumns(EncodedReaderImpl.java:319)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.performDataRead(OrcEncodedDataReader.java:413)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:194)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:191)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:191)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:74)
> at 
> org.apache.hadoop.hive.common.CallableWithNdc.call(CallableWithNdc.java:37)
> ... 4 more
> {code}





[jira] [Updated] (HIVE-12198) LLAP: reader failures when querying uncompressed data

2015-10-15 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-12198:

Attachment: HIVE-12198.patch

A little fix to debugging output

> LLAP: reader failures when querying uncompressed data
> -
>
> Key: HIVE-12198
> URL: https://issues.apache.org/jira/browse/HIVE-12198
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Gopal V
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12198.patch
>
>
> {code}
> hive> select sum(l_extendedprice * l_discount) as revenue from 
> testing.lineitem where l_shipdate >= '1993-01-01' and l_shipdate < 
> '1994-01-01' ;
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl$UncompressedCacheChunk
>  cannot be cast to 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$BufferChunk
> at 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.copyAndReplaceUncompressedChunks(EncodedReaderImpl.java:962)
> at 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.preReadUncompressedStream(EncodedReaderImpl.java:890)
> at 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.readEncodedColumns(EncodedReaderImpl.java:319)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.performDataRead(OrcEncodedDataReader.java:413)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:194)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:191)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:191)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:74)
> at 
> org.apache.hadoop.hive.common.CallableWithNdc.call(CallableWithNdc.java:37)
> ... 4 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11710) Beeline embedded mode doesn't output query progress after setting any session property

2015-10-15 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-11710:

Attachment: HIVE-11710.4.patch

> Beeline embedded mode doesn't output query progress after setting any session 
> property
> --
>
> Key: HIVE-11710
> URL: https://issues.apache.org/jira/browse/HIVE-11710
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 2.0.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-11710.2.patch, HIVE-11710.3.patch, 
> HIVE-11710.4.patch, HIVE-11710.patch
>
>
> Connect to beeline embedded mode {{beeline -u jdbc:hive2://}}. Then set 
> anything in the session like {{set aa=true;}}.
> After that, any query like {{select count(*) from src;}} will only output 
> result but no query progress.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12194) Incorrect result when using from_utc_timestamp with unknown zone.

2015-10-15 Thread Ryan Blue (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Blue updated HIVE-12194:
-
Summary: Incorrect result when using from_utc_timestamp with unknown zone.  
(was: Incorrect result when using from_utc_timestamp with the local time zone.)

> Incorrect result when using from_utc_timestamp with unknown zone.
> -
>
> Key: HIVE-12194
> URL: https://issues.apache.org/jira/browse/HIVE-12194
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 1.1.1
>Reporter: Ryan Blue
>
> When I call the {{from_utc_timestamp}} function (or {{to_utc_timestamp}}) 
> using my current time zone, the result is incorrect:
> {code}
> // CURRENT SERVER TIME ZONE IS PDT
> hive> select to_utc_timestamp('2015-10-13 09:15:34.101', 'PDT');
> 2015-10-13 09:15:34.101 // NOT CHANGED!
> hive> select to_utc_timestamp('2015-10-13 09:15:34.101', 'PST');
> 2015-10-13 16:15:34.101 // CORRECT VALUE FOR PST
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12053) Stats performance regression caused by HIVE-11786

2015-10-15 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959618#comment-14959618
 ] 

Sergey Shelukhin commented on HIVE-12053:
-

Committed to branch-1.

> Stats performance regression caused by HIVE-11786
> -
>
> Key: HIVE-12053
> URL: https://issues.apache.org/jira/browse/HIVE-12053
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Fix For: 1.3.0, 2.0.0
>
> Attachments: HIVE-12053.patch
>
>
> HIVE-11786 tried to normalize table TAB_COL_STATS/PART_COL_STATS but caused 
> performance regression.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12194) Incorrect result when using from_utc_timestamp with unknown zone.

2015-10-15 Thread Ryan Blue (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Blue updated HIVE-12194:
-
Description: 
When I call the {{from_utc_timestamp}} function (or {{to_utc_timestamp}}) using 
my current time zone, the result is incorrect:

{code}
// CURRENT SERVER TIME ZONE IS PDT
hive> select to_utc_timestamp('2015-10-13 09:15:34.101', 'PDT');
2015-10-13 09:15:34.101 // NOT CHANGED!
hive> select to_utc_timestamp('2015-10-13 09:15:34.101', 'PST');
2015-10-13 16:15:34.101 // CORRECT VALUE FOR PST
{code}

*UPDATE*: It appears that this happens because daylight saving time zones are 
not recognized.

  was:
When I call the {{from_utc_timestamp}} function (or {{to_utc_timestamp}}) using 
my current time zone, the result is incorrect:

{code}
// CURRENT SERVER TIME ZONE IS PDT
hive> select to_utc_timestamp('2015-10-13 09:15:34.101', 'PDT');
2015-10-13 09:15:34.101 // NOT CHANGED!
hive> select to_utc_timestamp('2015-10-13 09:15:34.101', 'PST');
2015-10-13 16:15:34.101 // CORRECT VALUE FOR PST
{code}


> Incorrect result when using from_utc_timestamp with unknown zone.
> -
>
> Key: HIVE-12194
> URL: https://issues.apache.org/jira/browse/HIVE-12194
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 1.1.1
>Reporter: Ryan Blue
>
> When I call the {{from_utc_timestamp}} function (or {{to_utc_timestamp}}) 
> using my current time zone, the result is incorrect:
> {code}
> // CURRENT SERVER TIME ZONE IS PDT
> hive> select to_utc_timestamp('2015-10-13 09:15:34.101', 'PDT');
> 2015-10-13 09:15:34.101 // NOT CHANGED!
> hive> select to_utc_timestamp('2015-10-13 09:15:34.101', 'PST');
> 2015-10-13 16:15:34.101 // CORRECT VALUE FOR PST
> {code}
> *UPDATE*: It appears that this happens because daylight saving time zones are 
> not recognized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12194) Daylight savings zones are not recognized (PDT, MDT)

2015-10-15 Thread Ryan Blue (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Blue updated HIVE-12194:
-
Summary: Daylight savings zones are not recognized (PDT, MDT)  (was: 
Incorrect result when using from_utc_timestamp with unknown zone.)

> Daylight savings zones are not recognized (PDT, MDT)
> 
>
> Key: HIVE-12194
> URL: https://issues.apache.org/jira/browse/HIVE-12194
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 1.1.1
>Reporter: Ryan Blue
>
> When I call the {{from_utc_timestamp}} function (or {{to_utc_timestamp}}) 
> using my current time zone, the result is incorrect:
> {code}
> // CURRENT SERVER TIME ZONE IS PDT
> hive> select to_utc_timestamp('2015-10-13 09:15:34.101', 'PDT');
> 2015-10-13 09:15:34.101 // NOT CHANGED!
> hive> select to_utc_timestamp('2015-10-13 09:15:34.101', 'PST');
> 2015-10-13 16:15:34.101 // CORRECT VALUE FOR PST
> {code}
> *UPDATE*: It appears that this happens because daylight saving time zones are 
> not recognized.
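The fallback behavior can be seen directly with {{java.util.TimeZone}} (an 
illustrative sketch; it assumes the UDF ultimately resolves zone names through 
the JDK's standard zone-ID lookup): an unrecognized ID such as "PDT" silently 
resolves to GMT instead of failing, while the legacy three-letter ID "PST" is 
recognized.

```java
import java.util.TimeZone;

public class TzFallbackDemo {
    public static void main(String[] args) {
        // Unknown zone IDs do not throw; they silently fall back to GMT,
        // which is why converting with 'PDT' leaves the timestamp unchanged.
        System.out.println(TimeZone.getTimeZone("PDT").getID()); // GMT
        // The legacy three-letter ID 'PST' is recognized, so conversion works.
        System.out.println(TimeZone.getTimeZone("PST").getID()); // PST
    }
}
```

Since the lookup never fails, a missing or misspelled zone produces a 
no-op conversion rather than an error.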



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-12171) LLAP: BuddyAllocator failures when querying uncompressed data

2015-10-15 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin resolved HIVE-12171.
-
Resolution: Won't Fix

This is by design (sort of). The cache size needs to be increased: it is almost 
full and too fragmented to accommodate a larger-than-usual uncompressed stream, 
which is cached in max-alloc-sized parts. Max alloc in this case is 16MB and 
the stream is 4-something MB, but the largest contiguous chunk available in the 
allocator is 4MB. Alternatively, we could try to break the stream into smaller 
parts for this case.
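As a toy illustration of the fragmentation described above (not Hive's actual 
BuddyAllocator; block sizes and layout here are invented for the example), the 
total free space in a pool can be ample while the largest contiguous free run 
is still far too small for one large allocation:

```java
import java.util.BitSet;

public class FragmentationDemo {
    // Mark every other 1MB block as used: half the pool stays free overall.
    static BitSet checkerboard(int blocks) {
        BitSet used = new BitSet(blocks);
        for (int i = 0; i < blocks; i += 2) used.set(i);
        return used;
    }

    // Length of the longest run of contiguous free blocks.
    static int longestFreeRun(BitSet used, int blocks) {
        int best = 0, run = 0;
        for (int i = 0; i < blocks; i++) {
            run = used.get(i) ? 0 : run + 1;
            best = Math.max(best, run);
        }
        return best;
    }

    public static void main(String[] args) {
        int blocks = 16; // pretend each block is 1MB of cache
        BitSet used = checkerboard(blocks);
        System.out.println("free MB: " + (blocks - used.cardinality()));          // 8
        System.out.println("largest contiguous MB: " + longestFreeRun(used, blocks)); // 1
    }
}
```

Here 8MB is free in total, yet a single 4MB contiguous request cannot be 
satisfied, which mirrors the "almost full and too fragmented" condition above.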

> LLAP: BuddyAllocator failures when querying uncompressed data
> -
>
> Key: HIVE-12171
> URL: https://issues.apache.org/jira/browse/HIVE-12171
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Gopal V
>Assignee: Sergey Shelukhin
>
> {code}
> hive> select sum(l_extendedprice * l_discount) as revenue from 
> testing.lineitem where l_shipdate >= '1993-01-01' and l_shipdate < 
> '1994-01-01' ;
> Caused by: 
> org.apache.hadoop.hive.common.io.Allocator$AllocatorOutOfMemoryException: 
> Failed to allocate 492; at 0 out of 1
> at 
> org.apache.hadoop.hive.llap.cache.BuddyAllocator.allocateMultiple(BuddyAllocator.java:176)
> at 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.preReadUncompressedStream(EncodedReaderImpl.java:882)
> at 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.readEncodedColumns(EncodedReaderImpl.java:319)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.performDataRead(OrcEncodedDataReader.java:413)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:194)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:191)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:191)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:74)
> at 
> org.apache.hadoop.hive.common.CallableWithNdc.call(CallableWithNdc.java:37)
> ... 4 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-12171) LLAP: BuddyAllocator failures when querying uncompressed data

2015-10-15 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959773#comment-14959773
 ] 

Sergey Shelukhin edited comment on HIVE-12171 at 10/15/15 10:41 PM:


This is by design (sortof). Cache size needs to be increased, it's almost full 
and too fragmented to accommodate larger-than-usual uncompressed stream that is 
cached in max-alloc-sized parts. Max alloc in this case is 16Mb, stream is 
4-something Mb, but maximum chunk present in buddy allocator is 4Mb. 
Alternatively we could try to break stream into smaller parts for this case


was (Author: sershe):
This is by design (sortof). Cache size needs to be increased, it's almost full 
and too fragmented to accommodate larger-than-usual uncompressed stream that is 
cached in max-alloc-sized parts. Max alloc in this case is 16Mb, stream is 
4-something Mb, but maximum chunk present in allocation is 4Mb. 
Alternatively we could try to break stream into smaller parts for this case

> LLAP: BuddyAllocator failures when querying uncompressed data
> -
>
> Key: HIVE-12171
> URL: https://issues.apache.org/jira/browse/HIVE-12171
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Gopal V
>Assignee: Sergey Shelukhin
>
> {code}
> hive> select sum(l_extendedprice * l_discount) as revenue from 
> testing.lineitem where l_shipdate >= '1993-01-01' and l_shipdate < 
> '1994-01-01' ;
> Caused by: 
> org.apache.hadoop.hive.common.io.Allocator$AllocatorOutOfMemoryException: 
> Failed to allocate 492; at 0 out of 1
> at 
> org.apache.hadoop.hive.llap.cache.BuddyAllocator.allocateMultiple(BuddyAllocator.java:176)
> at 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.preReadUncompressedStream(EncodedReaderImpl.java:882)
> at 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.readEncodedColumns(EncodedReaderImpl.java:319)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.performDataRead(OrcEncodedDataReader.java:413)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:194)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:191)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:191)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:74)
> at 
> org.apache.hadoop.hive.common.CallableWithNdc.call(CallableWithNdc.java:37)
> ... 4 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12180) Use MapJoinDesc::isHybridHashJoin() instead of the HiveConf lookup in Vectorizer

2015-10-15 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959776#comment-14959776
 ] 

Matt McCline commented on HIVE-12180:
-

Committed to master.

> Use MapJoinDesc::isHybridHashJoin() instead of the HiveConf lookup in 
> Vectorizer
> 
>
> Key: HIVE-12180
> URL: https://issues.apache.org/jira/browse/HIVE-12180
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Wei Zheng
>Assignee: Wei Zheng
> Attachments: HIVE-12180.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12201) Tez settings need to be shown in set -v output when execution engine is tez.

2015-10-15 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960195#comment-14960195
 ] 

Gunther Hagleitner commented on HIVE-12201:
---

LGTM +1

> Tez settings need to be shown in set -v output when execution engine is tez.
> 
>
> Key: HIVE-12201
> URL: https://issues.apache.org/jira/browse/HIVE-12201
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 1.0.1, 1.2.1
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
>Priority: Minor
> Attachments: HIVE-12201.1.patch
>
>
> The {{set -v}} output currently shows configurations for YARN, HDFS, etc., but 
> does not show Tez settings when Tez is set as the execution engine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12181) Change hive.stats.fetch.column.stats default value to true

2015-10-15 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-12181:

Attachment: HIVE-12181.1.patch

Patch with golden-file updates for TestCliDriver

> Change hive.stats.fetch.column.stats default value to true
> --
>
> Key: HIVE-12181
> URL: https://issues.apache.org/jira/browse/HIVE-12181
> Project: Hive
>  Issue Type: Improvement
>  Components: Statistics
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-12181.1.patch, HIVE-12181.patch
>
>
> There was a performance concern earlier, but HIVE-7587 has fixed that. We can 
> change the default to true now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12171) LLAP: BuddyAllocator failures when querying uncompressed data

2015-10-15 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960163#comment-14960163
 ] 

Gopal V commented on HIVE-12171:


[~sershe]: should this be causing query failures? Right now the query fails.

> LLAP: BuddyAllocator failures when querying uncompressed data
> -
>
> Key: HIVE-12171
> URL: https://issues.apache.org/jira/browse/HIVE-12171
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Gopal V
>Assignee: Sergey Shelukhin
>
> {code}
> hive> select sum(l_extendedprice * l_discount) as revenue from 
> testing.lineitem where l_shipdate >= '1993-01-01' and l_shipdate < 
> '1994-01-01' ;
> Caused by: 
> org.apache.hadoop.hive.common.io.Allocator$AllocatorOutOfMemoryException: 
> Failed to allocate 492; at 0 out of 1
> at 
> org.apache.hadoop.hive.llap.cache.BuddyAllocator.allocateMultiple(BuddyAllocator.java:176)
> at 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.preReadUncompressedStream(EncodedReaderImpl.java:882)
> at 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.readEncodedColumns(EncodedReaderImpl.java:319)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.performDataRead(OrcEncodedDataReader.java:413)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:194)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:191)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:191)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:74)
> at 
> org.apache.hadoop.hive.common.CallableWithNdc.call(CallableWithNdc.java:37)
> ... 4 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12165) wrong result when hive.optimize.sampling.orderby=true with some aggregate functions

2015-10-15 Thread Chetna Chaudhari (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960158#comment-14960158
 ] 

Chetna Chaudhari commented on HIVE-12165:
-

[~ErwanMAS]: +1 I am able to reproduce this issue.
A workaround is to 'set hive.optimize.sampling.orderby.number=1;'; with this 
setting, the query returned correct results.
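The shape of the wrong result (one row per reducer) is consistent with each 
reducer aggregating only its own range partition. A toy sketch of that behavior 
(illustrative only, not Hive's execution code; the partition boundaries are 
invented):

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class PerPartitionAggregate {
    // Range-partition `rows` values across `reducers` "reducers".
    // The /34 split is an illustrative boundary choice for 100 rows, 3 reducers.
    static Map<Integer, List<Integer>> partition(int rows, int reducers) {
        return IntStream.range(0, rows).boxed()
            .collect(Collectors.groupingBy(v -> Math.min(v / 34, reducers - 1)));
    }

    public static void main(String[] args) {
        // Each "reducer" emits its own count/min/max: three rows instead of
        // one global aggregate, like the wrong result in the report.
        partition(100, 3).forEach((r, vs) ->
            System.out.println(vs.size() + "\t" + Collections.min(vs)
                + "\t" + Collections.max(vs)));
    }
}
```

Forcing a single sample/reducer path (as the workaround above does) collapses 
this back to one global aggregate row.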

> wrong result when hive.optimize.sampling.orderby=true with some aggregate 
> functions
> ---
>
> Key: HIVE-12165
> URL: https://issues.apache.org/jira/browse/HIVE-12165
> Project: Hive
>  Issue Type: Bug
> Environment: hortonworks  2.3
>Reporter: ErwanMAS
>Priority: Critical
>
> This simple query gives a wrong result when I use the parallel order by.
> {noformat}
> select count(*) , count(distinct dummyint ) , min(dummyint),max(dummyint) 
> from foobar_1M ;
> {noformat}
> Current wrong result :
> {noformat}
> c0    c1  c2  c3
> 32740 32740   0   163695
> 113172113172  163700  729555
> 54088 54088   729560  95
> {noformat}
> Right result :
> {noformat}
> c0    c1  c2  c3
> 100   100 0   99
> {noformat}
> The sql script for my test 
> {noformat}
> drop table foobar_1 ;
> create table foobar_1 ( dummyint int  , dummystr string ) ;
> insert into table foobar_1 select count(*),'dummy 0'  from foobar_1 ;
> drop table foobar_1M ;
> create table foobar_1M ( dummyint bigint  , dummystr string ) ;
> insert overwrite table foobar_1M
>select val_int  , concat('dummy ',val_int) from
>  ( select ((((((d_1*10)+d_2)*10+d_3)*10+d_4)*10+d_5)*10+d_6) as 
> val_int from foobar_1
>  lateral view outer explode(split("0,1,2,3,4,5,6,7,8,9",",")) 
> tbl_1 as d_1
>  lateral view outer explode(split("0,1,2,3,4,5,6,7,8,9",",")) 
> tbl_2 as d_2
>  lateral view outer explode(split("0,1,2,3,4,5,6,7,8,9",",")) 
> tbl_3 as d_3
>  lateral view outer explode(split("0,1,2,3,4,5,6,7,8,9",",")) 
> tbl_4 as d_4
>  lateral view outer explode(split("0,1,2,3,4,5,6,7,8,9",",")) 
> tbl_5 as d_5
>  lateral view outer explode(split("0,1,2,3,4,5,6,7,8,9",",")) 
> tbl_6 as d_6  ) as f ;
> set hive.optimize.sampling.orderby.number=1;
> set hive.optimize.sampling.orderby.percent=0.1f;
> set mapreduce.job.reduces=3 ;
> set hive.optimize.sampling.orderby=false;
> select count(*) , count(distinct dummyint ) , min(dummyint),max(dummyint) 
> from foobar_1M ;
> set hive.optimize.sampling.orderby=true;
> select count(*) , count(distinct dummyint ) , min(dummyint),max(dummyint) 
> from foobar_1M ;
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11676) implement metastore API to do file footer PPD

2015-10-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960183#comment-14960183
 ] 

Hive QA commented on HIVE-11676:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12766915/HIVE-11676.04.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 9694 tests executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5670/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5670/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5670/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12766915 - PreCommit-HIVE-TRUNK-Build

> implement metastore API to do file footer PPD
> -
>
> Key: HIVE-11676
> URL: https://issues.apache.org/jira/browse/HIVE-11676
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-11676.01.patch, HIVE-11676.02.patch, 
> HIVE-11676.03.patch, HIVE-11676.04.patch, HIVE-11676.patch
>
>
> Need to pass on the expression/sarg, extract column stats from footer (at 
> write time?) and then apply one to the other.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12181) Change hive.stats.fetch.column.stats default value to true

2015-10-15 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960177#comment-14960177
 ] 

Ashutosh Chauhan commented on HIVE-12181:
-

Currently, the following queries throw exceptions after this change:
{code}
bucketmapjoin11.q
correlationoptimizer1.q
correlationoptimizer10.q
correlationoptimizer14.q
correlationoptimizer15.q
correlationoptimizer2.q
correlationoptimizer3.q
correlationoptimizer6.q
correlationoptimizer7.q
correlationoptimizer8.q
join25.q
join26.q
join27.q
join30.q
join37.q
join39.q
join40.q
join_map_ppr.q
mapjoin1.q
mapjoin_distinct.q
mapjoin_filter_on_outerjoin.q
multiMapJoin2.q
select_transform_hint.q
subquery_exists_having.q
{code}

FYI [~prasanth_j] 

> Change hive.stats.fetch.column.stats default value to true
> --
>
> Key: HIVE-12181
> URL: https://issues.apache.org/jira/browse/HIVE-12181
> Project: Hive
>  Issue Type: Improvement
>  Components: Statistics
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-12181.patch
>
>
> There was a performance concern earlier, but HIVE-7587 has fixed that. We can 
> change the default to true now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10178) DateWritable incorrectly calculates daysSinceEpoch for negative Unix time

2015-10-15 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-10178:
--
Affects Version/s: 0.12.0
   0.13.0
   0.14.0
   1.0.0

> DateWritable incorrectly calculates daysSinceEpoch for negative Unix time
> -
>
> Key: HIVE-10178
> URL: https://issues.apache.org/jira/browse/HIVE-10178
> Project: Hive
>  Issue Type: Bug
>  Components: Types
>Affects Versions: 0.12.0, 0.13.0, 0.14.0, 1.0.0
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
> Fix For: 1.2.0, 1.0.2
>
> Attachments: HIVE-10178.01.patch, HIVE-10178.02.patch, 
> HIVE-10178.03-branch-1.0.patch, HIVE-10178.03.patch
>
>
> For example:
> {code}
> select cast(cast('1966-01-01 00:00:01' as timestamp) as date);
> 1966-01-02
> {code}
> Another example:
> {code}
> select last_day(cast('1966-01-31 00:00:01' as timestamp));
> OK
> 1966-02-28
> {code}
> more details:
> Date: 1966-01-01 00:00:01
> unix time UTC: -126230399
> daysSinceEpoch = -126230399000 / 86400000 = -1460.99...
> int daysSinceEpoch = -1460
> DateWritable having daysSinceEpoch=-1460 is 1966-01-02
> daysSinceEpoch should be -1461 instead  (1966-01-01)
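The truncation error above can be reproduced with plain Java long division, 
which rounds toward zero; {{Math.floorDiv}} gives the intended calendar day (a 
minimal sketch of the arithmetic in the description, not the Hive patch itself):

```java
public class DaysSinceEpochDemo {
    static final long MS_PER_DAY = 86400000L;

    public static void main(String[] args) {
        long millis = -126230399000L; // 1966-01-01 00:00:01 UTC, in epoch ms
        // Integer division truncates toward zero: -1460, i.e. 1966-01-02.
        System.out.println(millis / MS_PER_DAY);
        // Flooring division gives -1461, the correct day (1966-01-01).
        System.out.println(Math.floorDiv(millis, MS_PER_DAY));
    }
}
```

For negative epoch times the two divisions differ by one whole day, which is 
exactly the off-by-one shown in the examples above.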



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10178) DateWritable incorrectly calculates daysSinceEpoch for negative Unix time

2015-10-15 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959201#comment-14959201
 ] 

Xuefu Zhang commented on HIVE-10178:


Could we update "Affects version/s"? Thanks.

> DateWritable incorrectly calculates daysSinceEpoch for negative Unix time
> -
>
> Key: HIVE-10178
> URL: https://issues.apache.org/jira/browse/HIVE-10178
> Project: Hive
>  Issue Type: Bug
>  Components: Types
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
> Fix For: 1.2.0, 1.0.2
>
> Attachments: HIVE-10178.01.patch, HIVE-10178.02.patch, 
> HIVE-10178.03-branch-1.0.patch, HIVE-10178.03.patch
>
>
> For example:
> {code}
> select cast(cast('1966-01-01 00:00:01' as timestamp) as date);
> 1966-01-02
> {code}
> Another example:
> {code}
> select last_day(cast('1966-01-31 00:00:01' as timestamp));
> OK
> 1966-02-28
> {code}
> more details:
> Date: 1966-01-01 00:00:01
> unix time UTC: -126230399
> daysSinceEpoch = -126230399000 / 86400000 = -1460.99...
> int daysSinceEpoch = -1460
> DateWritable having daysSinceEpoch=-1460 is 1966-01-02
> daysSinceEpoch should be -1461 instead  (1966-01-01)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11489) Jenkins PreCommit-HIVE-SPARK-Build fails with TestCliDriver.initializationError

2015-10-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959238#comment-14959238
 ] 

Hive QA commented on HIVE-11489:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12766811/HIVE-11489.3.patch

{color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 6802 tests executed
*Failed tests:*
{noformat}
TestCliDriver-udf_testlength2.q-skewjoin_mapjoin7.q-optional_outer.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.initializationError
org.apache.hadoop.hive.cli.TestHBaseCliDriver.initializationError
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.initializationError
org.apache.hadoop.hive.cli.TestMinimrCliDriver.initializationError
org.apache.hadoop.hive.cli.TestNegativeCliDriver.initializationError
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.initializationError
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/965/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/965/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-965/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 9 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12766811 - PreCommit-HIVE-SPARK-Build

> Jenkins PreCommit-HIVE-SPARK-Build fails with 
> TestCliDriver.initializationError
> ---
>
> Key: HIVE-11489
> URL: https://issues.apache.org/jira/browse/HIVE-11489
> Project: Hive
>  Issue Type: Task
>  Components: Testing Infrastructure
>Reporter: Sergio Peña
>Assignee: Sergio Peña
> Attachments: HIVE-11489.2-spark.patch, HIVE-11489.3.patch
>
>
> The Jenkins job {{PreCommit-HIVE-SPARK-Build}} is failing due to many 
> {{TestCliDriver.initializationError}} test results.
> {noformat}
> Error Message
> Unexpected exception java.io.FileNotFoundException: 
> /data/hive-ptest/working/apache-git-source-source/itests/qtest/target/generated-test-sources/java/org/apache/hadoop/hive/cli/TestCliDriverQFileNames.txt
>  (No such file or directory)
>  at java.io.FileInputStream.open(Native Method)
>  at java.io.FileInputStream.<init>(FileInputStream.java:146)
>  at java.io.FileReader.<init>(FileReader.java:72)
>  at 
> org.apache.hadoop.hive.ql.QTestUtil.addTestsToSuiteFromQfileNames(QTestUtil.java:2019)
>  at org.apache.hadoop.hive.cli.TestCliDriver.suite(TestCliDriver.java:120)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:606)
>  at 
> org.junit.internal.runners.SuiteMethod.testFromSuiteMethod(SuiteMethod.java:35)
>  at org.junit.internal.runners.SuiteMethod.<init>(SuiteMethod.java:24)
>  at 
> org.junit.internal.builders.SuiteMethodBuilder.runnerForClass(SuiteMethodBuilder.java:11)
>  at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>  at 
> org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:26)
>  at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>  at org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:26)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>  at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Stacktrace
> junit.framework.AssertionFailedError: Unexpected exception 
> java.io.FileNotFoundException: 
> /data/hive-ptest/working/apache-git-source-source/itests/qtest/target/generated-test-sources/java/org/apache/hadoop/hive/cli/TestCliDriverQFileNames.txt
>  (No such file or directory)
>   at java.io.FileInputStream.open(Native Method)
>   at 

[jira] [Updated] (HIVE-12188) DoAs does not work properly in non-kerberos secured HS2

2015-10-15 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-12188:
---
Attachment: HIVE-12188.patch

> DoAs does not work properly in non-kerberos secured HS2
> ---
>
> Key: HIVE-12188
> URL: https://issues.apache.org/jira/browse/HIVE-12188
> Project: Hive
>  Issue Type: Bug
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Attachments: HIVE-12188.patch
>
>
> The case with the following settings is valid, but it still does not seem to 
> work correctly in the current HS2:
> ==
> hive.server2.authentication=NONE (or LDAP)
> hive.server2.enable.doAs= true
> hive.metastore.sasl.enabled=true (with HMS Kerberos enabled)
> ==
> Currently HS2 is able to fetch the delegation token to a kerberos secured HMS 
> only when itself is also kerberos secured.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11915) BoneCP returns closed connections from the pool

2015-10-15 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959177#comment-14959177
 ] 

Xuefu Zhang commented on HIVE-11915:


Could we update the "affected version" please?

> BoneCP returns closed connections from the pool
> ---
>
> Key: HIVE-11915
> URL: https://issues.apache.org/jira/browse/HIVE-11915
> Project: Hive
>  Issue Type: Bug
>Reporter: Takahiko Saito
>Assignee: Sergey Shelukhin
> Fix For: 1.3.0, 2.0.0
>
> Attachments: HIVE-11915.01.patch, HIVE-11915.02.patch, 
> HIVE-11915.03.patch, HIVE-11915.WIP.patch, HIVE-11915.patch
>
>
> It's a very old bug in BoneCP and it will never be fixed... There are 
> multiple workarounds on the internet but according to responses they are all 
> unreliable. We should upgrade to HikariCP (which in turn is only supported by 
> DN 4), meanwhile try some shamanic rituals. In this JIRA we will try a 
> relatively weak drum.
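
The unreliable workarounds mentioned above generally share one shape: validate each pooled connection before handing it out, and discard it when validation fails. A minimal, self-contained sketch of that "validate on borrow" idea (illustrative names only; this is neither Hive's nor BoneCP's actual code, and `Conn` merely stands in for `java.sql.Connection`):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Predicate;
import java.util.function.Supplier;

// Generic pool that checks each idle object before returning it, so a
// connection the server already dropped is never handed back to a caller.
class ValidatingPool<T> {
    private final Deque<T> idle = new ArrayDeque<>();
    private final Supplier<T> factory;   // creates a fresh connection
    private final Predicate<T> isValid;  // e.g. conn -> !conn.isClosed()

    ValidatingPool(Supplier<T> factory, Predicate<T> isValid) {
        this.factory = factory;
        this.isValid = isValid;
    }

    synchronized T borrow() {
        while (!idle.isEmpty()) {
            T candidate = idle.pop();
            if (isValid.test(candidate)) {
                return candidate;  // pooled object is still usable
            }
            // stale object: drop it and keep looking
        }
        return factory.get();  // nothing usable pooled: create a fresh one
    }

    synchronized void release(T obj) {
        idle.push(obj);
    }
}

public class PoolDemo {
    static final class Conn {
        boolean closed;  // stands in for Connection.isClosed()
    }

    public static void main(String[] args) {
        ValidatingPool<Conn> pool =
                new ValidatingPool<>(Conn::new, c -> !c.closed);
        Conn first = pool.borrow();
        first.closed = true;          // the server dropped this connection
        pool.release(first);
        Conn second = pool.borrow();  // the stale object is skipped
        System.out.println(first == second);  // prints "false"
    }
}
```

The unreliability noted in the responses comes from the race this sketch cannot close: a connection can die between the validity check and first use, which is why moving to a pool with built-in validation (HikariCP) is the real fix.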





[jira] [Commented] (HIVE-12170) normalize HBase metastore connection configuration

2015-10-15 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959478#comment-14959478
 ] 

Sergey Shelukhin commented on HIVE-12170:
-

Will look at the patch after lunch :) 
The problems arise only with embedded metastore (including embedded in HS2). 
For the test, it appears that the different configs might have been caused by 
the issue fixed in HIVE-12062, where config in testing util is reset after 
HBase minicluster config is set, so all subsequent code uses a different config.
Another scenario is for embedded metastore usage for any service that gets 
different configs, like Tez AM. Tez AM should not rely on default config to 
create metastore and should instead rely on config of the query; I had problems 
with that before due to some static call to metastore where Tez AM would create 
ObjectStore even though it was configured later to connect to remote metastore 
via a query config. 
For HS2, I don't know if we support connecting to multiple metastores. However, 
accessing embedded metastore from multiple threads may cause a thread safety 
problem.

> normalize HBase metastore connection configuration
> --
>
> Key: HIVE-12170
> URL: https://issues.apache.org/jira/browse/HIVE-12170
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HIVE-12170.patch
>
>
> Right now there are two ways to get HBaseReadWrite instance in metastore. 
> Both get a threadlocal instance (is there a good reason for that?).
> 1) One is w/o conf and only works if someone called the (2) before, from any 
> thread.
> 2) The other blindly sets a static conf and then gets an instance with that 
> conf, or if someone already happened to call (1) or (2) from this thread, it 
> returns the existing instance with whatever conf was set before (but still 
> resets the current conf to new conf).
> This doesn't make sense even in an already-thread-safe case (like linear 
> CLI-based tests), and can easily lead to bugs as described; the config 
> propagation logic is not good (example - HIVE-12167); some calls just reset 
> config blindly, so there's no point in setting staticConf, other than for the 
> callers of method (1) above who don't have a conf and would rely on the 
> static (which is bad design).
> Having connections with different configs reliably is not possible, and 
> multi-threaded cases would also break - you could even set conf, have it 
> reset and get instance with somebody else's conf. 
> Static should definitely be removed, maybe threadlocal too (HConnection is 
> thread-safe).
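
For contrast with the static-conf pattern criticized above, one safer shape caches one instance per distinct conf, so no caller can reset anyone else's conf behind their back. A hypothetical sketch (names invented for illustration; this is not the actual `HBaseReadWrite` API):

```java
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;

// The conf is fixed per instance at construction, and distinct confs yield
// distinct instances, so getInstance(conf) can never hand back an instance
// that was built from somebody else's conf.
final class ReadWrite {
    private final Map<String, String> conf;

    ReadWrite(Map<String, String> conf) {
        // Defensive copy: later mutation of the caller's map cannot leak in.
        this.conf = Map.copyOf(Objects.requireNonNull(conf));
    }

    String get(String key) {
        return conf.get(key);
    }
}

final class ReadWriteCache {
    // One instance per distinct conf, instead of one static conf for everyone.
    private static final ConcurrentHashMap<Map<String, String>, ReadWrite>
            INSTANCES = new ConcurrentHashMap<>();

    static ReadWrite getInstance(Map<String, String> conf) {
        // computeIfAbsent is atomic, so two threads racing on the same conf
        // still observe a single shared instance.
        return INSTANCES.computeIfAbsent(Map.copyOf(conf), ReadWrite::new);
    }
}
```

This keeps the thread-safety property the threadlocal was after while dropping the "last conf wins" static, at the cost of never reclaiming instances for confs no longer in use.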





[jira] [Comment Edited] (HIVE-12170) normalize HBase metastore connection configuration

2015-10-15 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959478#comment-14959478
 ] 

Sergey Shelukhin edited comment on HIVE-12170 at 10/15/15 7:30 PM:
---

Will look at the patch after lunch :) 
The problems arise only with embedded metastore (including embedded in HS2). 
For the test, it appears that the different configs might have been caused by 
the issue fixed in HIVE-12062, where config in testing util is reset after 
HBase minicluster config is set, so all subsequent code uses a different config.
Another scenario is for embedded metastore usage for any service that gets 
different configs, like Tez AM. Tez AM should not rely on default config to 
create metastore and should instead rely on config of the query; I had problems 
with that before due to some static call to metastore where Tez AM would create 
ObjectStore even though it was configured later to connect to remote metastore 
via a query config. 
For HS2, I don't know if we support connecting to multiple metastores. However, 
accessing embedded metastore from multiple threads may cause a thread safety 
problem.
Also a static like that seems pretty brittle in an abstract sense, and the API 
get(conf) is misleading, because it will return the instance with potentially 
different conf, and only set up the conf for the next call. 

If we assume the same conf, perhaps we should not reset staticConf if already 
set, and should throw if it's a different conf.



was (Author: sershe):
Will look at the patch after lunch :) 
The problems arise only with embedded metastore (including embedded in HS2). 
For the test, it appears that the different configs might have been caused by 
the issue fixed in HIVE-12062, where config in testing util is reset after 
HBase minicluster config is set, so all subsequent code uses a different config.
Another scenario is for embedded metastore usage for any service that gets 
different configs, like Tez AM. Tez AM should not rely on default config to 
create metastore and should instead rely on config of the query; I had problems 
with that before due to some static call to metastore where Tez AM would create 
ObjectStore even though it was configured later to connect to remote metastore 
via a query config. 
For HS2, I don't know if we support connecting to multiple metastores. However, 
accessing embedded metastore from multiple threads may cause a thread safety 
problem.
Also a static like that seems pretty brittle in an abstract sense, and the API 
get(conf) is misleading, because it will return the instance with potentially 
different conf, and only set up the conf for the next call. 


> normalize HBase metastore connection configuration
> --
>
> Key: HIVE-12170
> URL: https://issues.apache.org/jira/browse/HIVE-12170
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HIVE-12170.patch
>
>
> Right now there are two ways to get HBaseReadWrite instance in metastore. 
> Both get a threadlocal instance (is there a good reason for that?).
> 1) One is w/o conf and only works if someone called the (2) before, from any 
> thread.
> 2) The other blindly sets a static conf and then gets an instance with that 
> conf, or if someone already happened to call (1) or (2) from this thread, it 
> returns the existing instance with whatever conf was set before (but still 
> resets the current conf to new conf).
> This doesn't make sense even in an already-thread-safe case (like linear 
> CLI-based tests), and can easily lead to bugs as described; the config 
> propagation logic is not good (example - HIVE-12167); some calls just reset 
> config blindly, so there's no point in setting staticConf, other than for the 
> callers of method (1) above who don't have a conf and would rely on the 
> static (which is bad design).
> Having connections with different configs reliably is not possible, and 
> multi-threaded cases would also break - you could even set conf, have it 
> reset and get instance with somebody else's conf. 
> Static should definitely be removed, maybe threadlocal too (HConnection is 
> thread-safe).





[jira] [Comment Edited] (HIVE-12170) normalize HBase metastore connection configuration

2015-10-15 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959478#comment-14959478
 ] 

Sergey Shelukhin edited comment on HIVE-12170 at 10/15/15 7:30 PM:
---

Will look at the patch after lunch :) 
The problems arise only with embedded metastore (including embedded in HS2). 
For the test, it appears that the different configs might have been caused by 
the issue fixed in HIVE-12062, where config in testing util is reset after 
HBase minicluster config is set, so all subsequent code uses a different config.
Another scenario is for embedded metastore usage for any service that gets 
different configs, like Tez AM. Tez AM should not rely on default config to 
create metastore and should instead rely on config of the query; I had problems 
with that before due to some static call to metastore where Tez AM would create 
ObjectStore even though it was configured later to connect to remote metastore 
via a query config. 
For HS2, I don't know if we support connecting to multiple metastores. However, 
accessing embedded metastore from multiple threads may cause a thread safety 
problem.
Also a static like that seems pretty brittle in an abstract sense, and the API 
get(conf) is misleading, because it will return the instance with potentially 
different conf, and only set up the conf for the next call. 



was (Author: sershe):
Will look at the patch after lunch :) 
The problems arise only with embedded metastore (including embedded in HS2). 
For the test, it appears that the different configs might have been caused by 
the issue fixed in HIVE-12062, where config in testing util is reset after 
HBase minicluster config is set, so all subsequent code uses a different config.
Another scenario is for embedded metastore usage for any service that gets 
different configs, like Tez AM. Tez AM should not rely on default config to 
create metastore and should instead rely on config of the query; I had problems 
with that before due to some static call to metastore where Tez AM would create 
ObjectStore even though it was configured later to connect to remote metastore 
via a query config. 
For HS2, I don't know if we support connecting to multiple metastores. However, 
accessing embedded metastore from multiple threads may cause a thread safety 
problem.

> normalize HBase metastore connection configuration
> --
>
> Key: HIVE-12170
> URL: https://issues.apache.org/jira/browse/HIVE-12170
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HIVE-12170.patch
>
>
> Right now there are two ways to get HBaseReadWrite instance in metastore. 
> Both get a threadlocal instance (is there a good reason for that?).
> 1) One is w/o conf and only works if someone called the (2) before, from any 
> thread.
> 2) The other blindly sets a static conf and then gets an instance with that 
> conf, or if someone already happened to call (1) or (2) from this thread, it 
> returns the existing instance with whatever conf was set before (but still 
> resets the current conf to new conf).
> This doesn't make sense even in an already-thread-safe case (like linear 
> CLI-based tests), and can easily lead to bugs as described; the config 
> propagation logic is not good (example - HIVE-12167); some calls just reset 
> config blindly, so there's no point in setting staticConf, other than for the 
> callers of method (1) above who don't have a conf and would rely on the 
> static (which is bad design).
> Having connections with different configs reliably is not possible, and 
> multi-threaded cases would also break - you could even set conf, have it 
> reset and get instance with somebody else's conf. 
> Static should definitely be removed, maybe threadlocal too (HConnection is 
> thread-safe).





[jira] [Updated] (HIVE-7693) Invalid column ref error in order by when using column alias in select clause and using having

2015-10-15 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-7693:
--
Attachment: HIVE-7693.03.patch

> Invalid column ref error in order by when using column alias in select clause 
> and using having
> --
>
> Key: HIVE-7693
> URL: https://issues.apache.org/jira/browse/HIVE-7693
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.13.0
>Reporter: Deepesh Khandelwal
>Assignee: Pengcheng Xiong
> Attachments: HIVE-7693.01.patch, HIVE-7693.02.patch, 
> HIVE-7693.03.patch
>
>
> Hive CLI session:
> {noformat}
> hive> create table abc(foo int, bar string);
> OK
> Time taken: 0.633 seconds
> hive> select foo as c0, count(*) as c1 from abc group by foo, bar having bar 
> like '%abc%' order by foo;
> FAILED: SemanticException [Error 10004]: Line 1:93 Invalid table alias or 
> column reference 'foo': (possible column names are: c0, c1)
> {noformat}
> Without the having clause, the query runs fine, for example:
> {code}
> select foo as c0, count(*) as c1 from abc group by foo, bar order by foo;
> {code}





[jira] [Comment Edited] (HIVE-11915) BoneCP returns closed connections from the pool

2015-10-15 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959354#comment-14959354
 ] 

Sergey Shelukhin edited comment on HIVE-11915 at 10/15/15 6:25 PM:
---

Hmm.. not exactly sure what we put there. All versions are probably affected, 
but it's a relatively rare bug. It was reported online since 2013 for users of 
BoneCP. I'll just put the latest released version.


was (Author: sershe):
Hmm.. not exactly sure what we put there. All versions are probably affected, 
but it's a relatively rare bug. It was reported online since 2013 for users 
BoneCP. I'll just put the latest released version.

> BoneCP returns closed connections from the pool
> ---
>
> Key: HIVE-11915
> URL: https://issues.apache.org/jira/browse/HIVE-11915
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
>Reporter: Takahiko Saito
>Assignee: Sergey Shelukhin
> Fix For: 1.3.0, 2.0.0
>
> Attachments: HIVE-11915.01.patch, HIVE-11915.02.patch, 
> HIVE-11915.03.patch, HIVE-11915.WIP.patch, HIVE-11915.patch
>
>
> It's a very old bug in BoneCP and it will never be fixed... There are 
> multiple workarounds on the internet but according to responses they are all 
> unreliable. We should upgrade to HikariCP (which in turn is only supported by 
> DN 4), meanwhile try some shamanic rituals. In this JIRA we will try a 
> relatively weak drum.





[jira] [Commented] (HIVE-11915) BoneCP returns closed connections from the pool

2015-10-15 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959354#comment-14959354
 ] 

Sergey Shelukhin commented on HIVE-11915:
-

Hmm.. not exactly sure what we put there. All versions are probably affected, 
but it's a relatively rare bug. It was reported online since 2013 for users of 
BoneCP. I'll just put the latest released version.

> BoneCP returns closed connections from the pool
> ---
>
> Key: HIVE-11915
> URL: https://issues.apache.org/jira/browse/HIVE-11915
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
>Reporter: Takahiko Saito
>Assignee: Sergey Shelukhin
> Fix For: 1.3.0, 2.0.0
>
> Attachments: HIVE-11915.01.patch, HIVE-11915.02.patch, 
> HIVE-11915.03.patch, HIVE-11915.WIP.patch, HIVE-11915.patch
>
>
> It's a very old bug in BoneCP and it will never be fixed... There are 
> multiple workarounds on the internet but according to responses they are all 
> unreliable. We should upgrade to HikariCP (which in turn is only supported by 
> DN 4), meanwhile try some shamanic rituals. In this JIRA we will try a 
> relatively weak drum.





[jira] [Commented] (HIVE-11710) Beeline embedded mode doesn't output query progress after setting any session property

2015-10-15 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959359#comment-14959359
 ] 

Aihua Xu commented on HIVE-11710:
-

OK. Seems we don't need to flush the string manually, since autoFlush is set to 
true in PrintStream {{PrintStream(OutputStream out, boolean autoFlush, String 
encoding)}}.
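
As a quick check of that autoFlush behavior (a standalone demo, not Beeline's code):

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.io.UnsupportedEncodingException;

// Demo of PrintStream(OutputStream out, boolean autoFlush, String encoding):
// with autoFlush=true, println() pushes its bytes to the underlying stream
// immediately, so no explicit flush() is needed for progress lines to appear.
public class AutoFlushDemo {
    public static void main(String[] args) throws UnsupportedEncodingException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        PrintStream out = new PrintStream(sink, true, "UTF-8");

        out.println("progress: map 50%  reduce 0%");
        // No flush() call here, yet the bytes have already arrived:
        System.out.println(sink.size() > 0);  // prints "true"
    }
}
```

Note that autoFlush only triggers on println(), on write of a newline byte, and on byte-array writes; plain print() calls can still sit in the buffer.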

> Beeline embedded mode doesn't output query progress after setting any session 
> property
> --
>
> Key: HIVE-11710
> URL: https://issues.apache.org/jira/browse/HIVE-11710
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 2.0.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-11710.2.patch, HIVE-11710.3.patch, HIVE-11710.patch
>
>
> Connect to beeline embedded mode {{beeline -u jdbc:hive2://}}. Then set 
> anything in the session like {{set aa=true;}}.
> After that, any query like {{select count(*) from src;}} will only output 
> result but no query progress.






