[jira] [Commented] (HIVE-14355) Schema evolution for ORC in llap is broken for int to string conversion

2016-07-29 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400463#comment-15400463
 ] 

Hive QA commented on HIVE-14355:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12820897/HIVE-14355.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 10418 tests 
executed
*Failed tests:*
{noformat}
TestMsgBusConnection - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_nullable_union
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_avro_non_nullable_union
org.apache.hadoop.hive.metastore.TestHiveMetaStoreTxns.stringifyValidTxns
org.apache.hadoop.hive.metastore.TestHiveMetaStoreTxns.testTxnRange
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/692/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/692/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-692/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 9 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12820897 - PreCommit-HIVE-MASTER-Build

> Schema evolution for ORC in llap is broken for int to string conversion
> ---
>
> Key: HIVE-14355
> URL: https://issues.apache.org/jira/browse/HIVE-14355
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.0, 2.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Critical
> Attachments: HIVE-14355-java-only.patch, HIVE-14355.1.patch, 
> HIVE-14355.2.java-only.patch, HIVE-14355.2.patch, 
> HIVE-14355.3.java-only.patch, HIVE-14355.3.patch
>
>
> When schema is evolved from any integer type to string then following 
> exceptions are thrown in LLAP (Works fine in Tez). I guess this should happen 
> even for other conversions.
> {code}
> hive> create table orc_integer(b bigint) stored as orc;
> hive> insert into orc_integer values(100);
> hive> select count(*) from orc_integer where b=100;
> OK
> 1
> hive> alter table orc_integer change column b b string;
> hive> select count(*) from orc_integer where b=100;
> // FAIL with following exception
> {code}
> {code:title=When vectorization is enabled}
> 2016-07-27T01:48:05,611  INFO [TezTaskRunner ()] 
> vector.VectorReduceSinkOperator: RECORDS_OUT_INTERMEDIATE_Map_1:0,
> 2016-07-27T01:48:05,611 ERROR [TezTaskRunner ()] tez.TezProcessor: 
> java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: 
> Hive Runtime Error while processing row
> at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:95)
> at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:70)
> at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:393)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:185)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:168)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at 
> org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:110)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)

[jira] [Commented] (HIVE-14363) bucketmap inner join query fails due to NullPointerException in some cases

2016-07-29 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400436#comment-15400436
 ] 

Hari Sankar Sivarama Subramaniyan commented on HIVE-14363:
--

None of the errors look related.

> bucketmap inner join query fails due to NullPointerException in some cases
> --
>
> Key: HIVE-14363
> URL: https://issues.apache.org/jira/browse/HIVE-14363
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Jagruti Varia
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-14363.1.patch
>
>
> Bucketmap inner join query between bucketed tables throws following exception 
> when one table contains all the empty buckets while other has all the 
> non-empty buckets.
> {noformat}
> Vertex failed, vertexName=Map 2, vertexId=vertex_1466710232033_0432_4_01, 
> diagnostics=[Task failed, taskId=task_1466710232033_0432_4_01_00, 
> diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( 
> failure ) : 
> attempt_1466710232033_0432_4_01_00_0:java.lang.RuntimeException: 
> java.lang.RuntimeException: Map operator initialization failed
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:211)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:168)
>   at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
>   at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
>   at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
>   at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
>   at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
>   at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: Map operator initialization failed
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:330)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:184)
>   ... 14 more
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.getKeyValueReader(MapRecordProcessor.java:372)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.initializeMapRecordSources(MapRecordProcessor.java:344)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:292)
>   ... 15 more
> ], TaskAttempt 1 failed, info=[Error: Error while running task ( failure ) : 
> attempt_1466710232033_0432_4_01_00_1:java.lang.RuntimeException: 
> java.lang.RuntimeException: Map operator initialization failed
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:211)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:168)
>   at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
>   at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
>   at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
>   at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
>   at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
>   at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: Map 

[jira] [Commented] (HIVE-14366) Conversion of a Non-ACID table to an ACID table produces non-unique primary keys

2016-07-29 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400404#comment-15400404
 ] 

Hive QA commented on HIVE-14366:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12820850/HIVE-14366.02.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/691/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/691/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-691/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.8.0_25 ]]
+ export JAVA_HOME=/usr/java/jdk1.8.0_25
+ JAVA_HOME=/usr/java/jdk1.8.0_25
+ export 
PATH=/usr/java/jdk1.8.0_25/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/java/jdk1.8.0_25/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-MASTER-Build-691/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ cd apache-github-source-source
+ git fetch origin
>From https://github.com/apache/hive
   c8648aa..6b0131b  master -> origin/master
+ git reset --hard HEAD
HEAD is now at c8648aa HIVE-14348: Add tests for alter table exchange partition 
(Vaibhav Gumashta reviewed by Thejas Nair)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded.
  (use "git pull" to update your local branch)
+ git reset --hard origin/master
HEAD is now at 6b0131b Revert "HIVE-14303: CommonJoinOperator.checkAndGenObject 
should return directly at CLOSE state to avoid NPE if ExecReducer.close is 
called twice. (Zhihai Xu, reviewed by Xuefu Zhang)"
+ git merge --ff-only origin/master
Already up-to-date.
+ git gc
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12820850 - PreCommit-HIVE-MASTER-Build

> Conversion of a Non-ACID table to an ACID table produces non-unique primary 
> keys
> 
>
> Key: HIVE-14366
> URL: https://issues.apache.org/jira/browse/HIVE-14366
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Saket Saurabh
>Assignee: Saket Saurabh
>Priority: Blocker
> Attachments: HIVE-14366.01.patch, HIVE-14366.02.patch
>
>
> When a Non-ACID table is converted to an ACID table, the primary key 
> consisting of (original transaction id, bucket_id, row_id) is not generated 
> uniquely. Currently, the row_id is always set to 0 for most rows. This leads 
> to correctness issue for such tables.
> Quickest way to reproduce is to add the following unit test to 
> ql/src/test/org/apache/hadoop/hive/ql/TestTxnCommands2.java
> {code:title=ql/src/test/org/apache/hadoop/hive/ql/TestTxnCommands2.java|borderStyle=solid}
>   @Test
>   public void testOriginalReader() throws Exception {
> FileSystem fs = FileSystem.get(hiveConf);
> FileStatus[] status;
> // 1. Insert five rows to Non-ACID table.
> runStatementOnDriver("insert into " + Table.NONACIDORCTBL + "(a,b) 
> values(1,2),(3,4),(5,6),(7,8),(9,10)");
> // 2. Convert NONACIDORCTBL to ACID table.
> runStatementOnDriver("alter table " + Table.NONACIDORCTBL + " SET 
> TBLPROPERTIES ('transactional'='true')");
> // 3. Perform a major compaction.
> runStatementOnDriver("alter table "+ Table.NONACIDORCTBL + " compact 
> 'MAJOR'");
> runWorker(hiveConf);
> // 4. Perform a delete.
> runStatementOnDriver("delete from " + Table.NONACIDORCTBL + " 

[jira] [Commented] (HIVE-14363) bucketmap inner join query fails due to NullPointerException in some cases

2016-07-29 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400403#comment-15400403
 ] 

Hive QA commented on HIVE-14363:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12820610/HIVE-14363.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 186 failed/errored test(s), 10386 tests 
executed
*Failed tests:*
{noformat}
TestMsgBusConnection - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_smb_mapjoin_14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_nullable_union
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketizedhiveinputformat_auto
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketsortoptimize_insert_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketsortoptimize_insert_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketsortoptimize_insert_7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketsortoptimize_insert_8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_rp_auto_join1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_filters
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_nulls
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_nullsafe
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_join
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_join_partition_key
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_16
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_17
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sort_merge_join_desc_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sort_merge_join_desc_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sort_merge_join_desc_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sort_merge_join_desc_5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sort_merge_join_desc_8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_auto_smb_mapjoin_14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorized_bucketmapjoin1
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_cte_5
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_cte_mat_4
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_cte_mat_5
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_dynamic_partition_pruning
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_hybridgrace_hashjoin_1
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_mrr
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_orc_llap
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_dynpart_hashjoin_1
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_join_hash
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_join_tests
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_joins_explain

[jira] [Reopened] (HIVE-14303) CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to avoid NPE if ExecReducer.close is called twice.

2016-07-29 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun reopened HIVE-14303:
-

> CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to 
> avoid NPE if ExecReducer.close is called twice.
> -
>
> Key: HIVE-14303
> URL: https://issues.apache.org/jira/browse/HIVE-14303
> Project: Hive
>  Issue Type: Bug
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.2.0
>
> Attachments: HIVE-14303.0.patch
>
>
> CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to 
> avoid NPE if ExecReducer.close is called twice. ExecReducer.close implements 
> Closeable interface and ExecReducer.close can be called multiple time. We saw 
> the following NPE which hide the real exception due to this bug.
> {code}
> Error: java.lang.RuntimeException: Hive Runtime Error while closing 
> operators: null
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:296)
> at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
> at 
> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
> at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:718)
> at 
> org.apache.hadoop.hive.ql.exec.JoinOperator.endGroup(JoinOperator.java:256)
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:284)
> ... 8 more
> {code}
> The code from ReduceTask.runOldReducer:
> {code}
>   reducer.close(); //line 453
>   reducer = null;
>   
>   out.close(reporter);
>   out = null;
> } finally {
>   IOUtils.cleanup(LOG, reducer);// line 459
>   closeQuietly(out, reporter);
> }
> {code}
> Based on the above stack trace and code, reducer.close() is called twice 
> because the exception happened when reducer.close() is called for the first 
> time at line 453, the code exit before reducer was set to null. 
> NullPointerException is triggered when reducer.close() is called for the 
> second time in IOUtils.cleanup at line 459. NullPointerException hide the 
> real exception which happened when reducer.close() is called for the first 
> time at line 453.
> The reason for NPE is:
> The first reducer.close called CommonJoinOperator.closeOp which clear 
> {{storage}}
> {code}
> Arrays.fill(storage, null);
> {code}
> the second reduce.close generated NPE due to null {{storage[alias]}} which is 
> set to null by first reducer.close.
> The following reducer log can give more proof:
> {code}
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: 0 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: 0 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: SKEWJOINFOLLOWUPJOBS:0
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 1 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 2 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 3 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: 4 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: FS[4]: records written - 
> 53466
> 2016-07-14 22:25:11,555 ERROR [main] ExecReducer: Hit error while closing 
> operators - failing tree
> 2016-07-14 22:25:11,649 WARN [main] org.apache.hadoop.mapred.YarnChild: 
> Exception running child : java.lang.RuntimeException: Hive Runtime Error 
> while closing operators: null
>   at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:296)
>   at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
>   at 
> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
>   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at 

[jira] [Commented] (HIVE-14303) CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to avoid NPE if ExecReducer.close is called twice.

2016-07-29 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400355#comment-15400355
 ] 

Chao Sun commented on HIVE-14303:
-

Yes, it seems the test failures are related to this patch. [~zxu] can you 
investigate and resubmit? meanwhile I'll revert it. thanks.

> CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to 
> avoid NPE if ExecReducer.close is called twice.
> -
>
> Key: HIVE-14303
> URL: https://issues.apache.org/jira/browse/HIVE-14303
> Project: Hive
>  Issue Type: Bug
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.2.0
>
> Attachments: HIVE-14303.0.patch
>
>
> CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to 
> avoid NPE if ExecReducer.close is called twice. ExecReducer.close implements 
> Closeable interface and ExecReducer.close can be called multiple time. We saw 
> the following NPE which hide the real exception due to this bug.
> {code}
> Error: java.lang.RuntimeException: Hive Runtime Error while closing 
> operators: null
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:296)
> at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
> at 
> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
> at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:718)
> at 
> org.apache.hadoop.hive.ql.exec.JoinOperator.endGroup(JoinOperator.java:256)
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:284)
> ... 8 more
> {code}
> The code from ReduceTask.runOldReducer:
> {code}
>   reducer.close(); //line 453
>   reducer = null;
>   
>   out.close(reporter);
>   out = null;
> } finally {
>   IOUtils.cleanup(LOG, reducer);// line 459
>   closeQuietly(out, reporter);
> }
> {code}
> Based on the above stack trace and code, reducer.close() is called twice 
> because the exception happened when reducer.close() is called for the first 
> time at line 453, the code exit before reducer was set to null. 
> NullPointerException is triggered when reducer.close() is called for the 
> second time in IOUtils.cleanup at line 459. NullPointerException hide the 
> real exception which happened when reducer.close() is called for the first 
> time at line 453.
> The reason for NPE is:
> The first reducer.close called CommonJoinOperator.closeOp which clear 
> {{storage}}
> {code}
> Arrays.fill(storage, null);
> {code}
> the second reduce.close generated NPE due to null {{storage[alias]}} which is 
> set to null by first reducer.close.
> The following reducer log can give more proof:
> {code}
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: 0 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: 0 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: SKEWJOINFOLLOWUPJOBS:0
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 1 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 2 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 3 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: 4 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: FS[4]: records written - 
> 53466
> 2016-07-14 22:25:11,555 ERROR [main] ExecReducer: Hit error while closing 
> operators - failing tree
> 2016-07-14 22:25:11,649 WARN [main] org.apache.hadoop.mapred.YarnChild: 
> Exception running child : java.lang.RuntimeException: Hive Runtime Error 
> while closing operators: null
>   at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:296)
>   at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
>   at 
> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
>   at 

[jira] [Issue Comment Deleted] (HIVE-14303) CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to avoid NPE if ExecReducer.close is called twice.

2016-07-29 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HIVE-14303:

Comment: was deleted

(was: [~ashutoshc] I chose one failed test (auto_smb_mapjoin_14.q) and run it 
with the git commit before this one, and it failed as well - the q.out file is 
different.)

> CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to 
> avoid NPE if ExecReducer.close is called twice.
> -
>
> Key: HIVE-14303
> URL: https://issues.apache.org/jira/browse/HIVE-14303
> Project: Hive
>  Issue Type: Bug
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.2.0
>
> Attachments: HIVE-14303.0.patch
>
>
> CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to 
> avoid NPE if ExecReducer.close is called twice. ExecReducer.close implements 
> Closeable interface and ExecReducer.close can be called multiple time. We saw 
> the following NPE which hide the real exception due to this bug.
> {code}
> Error: java.lang.RuntimeException: Hive Runtime Error while closing 
> operators: null
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:296)
> at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
> at 
> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
> at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:718)
> at 
> org.apache.hadoop.hive.ql.exec.JoinOperator.endGroup(JoinOperator.java:256)
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:284)
> ... 8 more
> {code}
> The code from ReduceTask.runOldReducer:
> {code}
>   reducer.close(); //line 453
>   reducer = null;
>   
>   out.close(reporter);
>   out = null;
> } finally {
>   IOUtils.cleanup(LOG, reducer);// line 459
>   closeQuietly(out, reporter);
> }
> {code}
> Based on the above stack trace and code, reducer.close() is called twice 
> because the exception happened when reducer.close() is called for the first 
> time at line 453, the code exit before reducer was set to null. 
> NullPointerException is triggered when reducer.close() is called for the 
> second time in IOUtils.cleanup at line 459. NullPointerException hide the 
> real exception which happened when reducer.close() is called for the first 
> time at line 453.
> The reason for NPE is:
> The first reducer.close called CommonJoinOperator.closeOp which clear 
> {{storage}}
> {code}
> Arrays.fill(storage, null);
> {code}
> the second reduce.close generated NPE due to null {{storage[alias]}} which is 
> set to null by first reducer.close.
> The following reducer log can give more proof:
> {code}
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: 0 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: 0 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: SKEWJOINFOLLOWUPJOBS:0
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 1 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 2 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 3 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: 4 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: FS[4]: records written - 
> 53466
> 2016-07-14 22:25:11,555 ERROR [main] ExecReducer: Hit error while closing 
> operators - failing tree
> 2016-07-14 22:25:11,649 WARN [main] org.apache.hadoop.mapred.YarnChild: 
> Exception running child : java.lang.RuntimeException: Hive Runtime Error 
> while closing operators: null
>   at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:296)
>   at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
>   at 
> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
>   at 

[jira] [Updated] (HIVE-14382) Improve the Functionality of Reverse FOR Statement

2016-07-29 Thread Akash Sethi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash Sethi updated HIVE-14382:
---
Status: Patch Available  (was: Open)

> Improve the Functionality of Reverse  FOR Statement
> ---
>
> Key: HIVE-14382
> URL: https://issues.apache.org/jira/browse/HIVE-14382
> Project: Hive
>  Issue Type: Improvement
>Reporter: Akash Sethi
>Assignee: Akash Sethi
>Priority: Minor
> Attachments: HIVE-14382.1-branch-2.1.patch, HIVE-14382.1.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> According to SQL Standards, Reverse FOR Statement should be like this:-
> FOR index IN Optional[Reverse] Lower_Bound Upper_Bound
> but in hive it is like this :- 
> FOR index IN Optional[Reverse]  Upper_Bound Lower_Bound
> so i m just trying to improve the functionality for Reverse FOR Statement
> REFERNCES :- 
> https://docs.oracle.com/cloud/latest/db112/LNPLS/for_loop_statement.htm#LNPLS1536



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work stopped] (HIVE-14382) Improve the Functionality of Reverse FOR Statement

2016-07-29 Thread Akash Sethi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-14382 stopped by Akash Sethi.
--
> Improve the Functionality of Reverse  FOR Statement
> ---
>
> Key: HIVE-14382
> URL: https://issues.apache.org/jira/browse/HIVE-14382
> Project: Hive
>  Issue Type: Improvement
>Reporter: Akash Sethi
>Assignee: Akash Sethi
>Priority: Minor
> Attachments: HIVE-14382.1-branch-2.1.patch, HIVE-14382.1.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> According to SQL Standards, Reverse FOR Statement should be like this:-
> FOR index IN Optional[Reverse] Lower_Bound Upper_Bound
> but in hive it is like this :- 
> FOR index IN Optional[Reverse]  Upper_Bound Lower_Bound
> so i m just trying to improve the functionality for Reverse FOR Statement
> REFERNCES :- 
> https://docs.oracle.com/cloud/latest/db112/LNPLS/for_loop_statement.htm#LNPLS1536



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14382) Improve the Functionality of Reverse FOR Statement

2016-07-29 Thread Akash Sethi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash Sethi updated HIVE-14382:
---
Attachment: HIVE-14382.1-branch-2.1.patch

Please Review my patch for branch branch-2.1 

> Improve the Functionality of Reverse  FOR Statement
> ---
>
> Key: HIVE-14382
> URL: https://issues.apache.org/jira/browse/HIVE-14382
> Project: Hive
>  Issue Type: Improvement
>Reporter: Akash Sethi
>Assignee: Akash Sethi
>Priority: Minor
> Attachments: HIVE-14382.1-branch-2.1.patch, HIVE-14382.1.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> According to SQL Standards, Reverse FOR Statement should be like this:-
> FOR index IN Optional[Reverse] Lower_Bound Upper_Bound
> but in hive it is like this :- 
> FOR index IN Optional[Reverse]  Upper_Bound Lower_Bound
> so i m just trying to improve the functionality for Reverse FOR Statement
> REFERNCES :- 
> https://docs.oracle.com/cloud/latest/db112/LNPLS/for_loop_statement.htm#LNPLS1536



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14303) CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to avoid NPE if ExecReducer.close is called twice.

2016-07-29 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400346#comment-15400346
 ] 

Chao Sun commented on HIVE-14303:
-

[~ashutoshc] I chose one failed test (auto_smb_mapjoin_14.q) and run it with 
the git commit before this one, and it failed as well - the q.out file is 
different.

> CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to 
> avoid NPE if ExecReducer.close is called twice.
> -
>
> Key: HIVE-14303
> URL: https://issues.apache.org/jira/browse/HIVE-14303
> Project: Hive
>  Issue Type: Bug
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.2.0
>
> Attachments: HIVE-14303.0.patch
>
>
> CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to 
> avoid NPE if ExecReducer.close is called twice. ExecReducer.close implements 
> Closeable interface and ExecReducer.close can be called multiple time. We saw 
> the following NPE which hide the real exception due to this bug.
> {code}
> Error: java.lang.RuntimeException: Hive Runtime Error while closing 
> operators: null
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:296)
> at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
> at 
> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
> at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:718)
> at 
> org.apache.hadoop.hive.ql.exec.JoinOperator.endGroup(JoinOperator.java:256)
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:284)
> ... 8 more
> {code}
> The code from ReduceTask.runOldReducer:
> {code}
>   reducer.close(); //line 453
>   reducer = null;
>   
>   out.close(reporter);
>   out = null;
> } finally {
>   IOUtils.cleanup(LOG, reducer);// line 459
>   closeQuietly(out, reporter);
> }
> {code}
> Based on the above stack trace and code, reducer.close() is called twice 
> because the exception happened when reducer.close() is called for the first 
> time at line 453, the code exit before reducer was set to null. 
> NullPointerException is triggered when reducer.close() is called for the 
> second time in IOUtils.cleanup at line 459. NullPointerException hide the 
> real exception which happened when reducer.close() is called for the first 
> time at line 453.
> The reason for NPE is:
> The first reducer.close called CommonJoinOperator.closeOp which clear 
> {{storage}}
> {code}
> Arrays.fill(storage, null);
> {code}
> the second reduce.close generated NPE due to null {{storage[alias]}} which is 
> set to null by first reducer.close.
> The following reducer log can give more proof:
> {code}
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: 0 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: 0 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: SKEWJOINFOLLOWUPJOBS:0
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 1 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 2 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 3 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: 4 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: FS[4]: records written - 
> 53466
> 2016-07-14 22:25:11,555 ERROR [main] ExecReducer: Hit error while closing 
> operators - failing tree
> 2016-07-14 22:25:11,649 WARN [main] org.apache.hadoop.mapred.YarnChild: 
> Exception running child : java.lang.RuntimeException: Hive Runtime Error 
> while closing operators: null
>   at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:296)
>   at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
>   at 
> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
>   at 

[jira] [Commented] (HIVE-14382) Improve the Functionality of Reverse FOR Statement

2016-07-29 Thread Akash Sethi (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400344#comment-15400344
 ] 

Akash Sethi commented on HIVE-14382:


i do know it sir but as new user start using the hive they expect sql like 
syntax. don't apply my patch now but probably on the next release this can be 
done.
That's why i added 2 patch one for master and another for branch2.1 

> Improve the Functionality of Reverse  FOR Statement
> ---
>
> Key: HIVE-14382
> URL: https://issues.apache.org/jira/browse/HIVE-14382
> Project: Hive
>  Issue Type: Improvement
>Reporter: Akash Sethi
>Assignee: Akash Sethi
>Priority: Minor
> Attachments: HIVE-14382.1.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> According to SQL Standards, Reverse FOR Statement should be like this:-
> FOR index IN Optional[Reverse] Lower_Bound Upper_Bound
> but in hive it is like this :- 
> FOR index IN Optional[Reverse]  Upper_Bound Lower_Bound
> so i m just trying to improve the functionality for Reverse FOR Statement
> REFERNCES :- 
> https://docs.oracle.com/cloud/latest/db112/LNPLS/for_loop_statement.htm#LNPLS1536



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14382) Improve the Functionality of Reverse FOR Statement

2016-07-29 Thread Akash Sethi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash Sethi updated HIVE-14382:
---
Attachment: HIVE-14382.1.patch

Please Review my Patch regarding reverse for loop functionality

> Improve the Functionality of Reverse  FOR Statement
> ---
>
> Key: HIVE-14382
> URL: https://issues.apache.org/jira/browse/HIVE-14382
> Project: Hive
>  Issue Type: Improvement
>Reporter: Akash Sethi
>Assignee: Akash Sethi
>Priority: Minor
> Attachments: HIVE-14382.1.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> According to SQL Standards, Reverse FOR Statement should be like this:-
> FOR index IN Optional[Reverse] Lower_Bound Upper_Bound
> but in hive it is like this :- 
> FOR index IN Optional[Reverse]  Upper_Bound Lower_Bound
> so i m just trying to improve the functionality for Reverse FOR Statement
> REFERNCES :- 
> https://docs.oracle.com/cloud/latest/db112/LNPLS/for_loop_statement.htm#LNPLS1536



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HIVE-14382) Improve the Functionality of Reverse FOR Statement

2016-07-29 Thread Akash Sethi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-14382 started by Akash Sethi.
--
> Improve the Functionality of Reverse  FOR Statement
> ---
>
> Key: HIVE-14382
> URL: https://issues.apache.org/jira/browse/HIVE-14382
> Project: Hive
>  Issue Type: Improvement
>Reporter: Akash Sethi
>Assignee: Akash Sethi
>Priority: Minor
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> According to SQL Standards, Reverse FOR Statement should be like this:-
> FOR index IN Optional[Reverse] Lower_Bound Upper_Bound
> but in hive it is like this :- 
> FOR index IN Optional[Reverse]  Upper_Bound Lower_Bound
> so i m just trying to improve the functionality for Reverse FOR Statement
> REFERNCES :- 
> https://docs.oracle.com/cloud/latest/db112/LNPLS/for_loop_statement.htm#LNPLS1536



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14357) TestDbTxnManager2#testLocksInSubquery failing in branch-2.1

2016-07-29 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-14357:
--
Affects Version/s: 2.1.1
 Target Version/s: 1.3.0, 2.1.1

> TestDbTxnManager2#testLocksInSubquery failing in branch-2.1
> ---
>
> Key: HIVE-14357
> URL: https://issues.apache.org/jira/browse/HIVE-14357
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.1
>Reporter: Rajat Khandelwal
>Assignee: Sergey Shelukhin
> Attachments: HIVE-14357.patch
>
>
> {noformat}
> checkCmdOnDriver(driver.compileAndRespond("insert into R select * from S 
> where a in (select a from T where b = 1)"));
> txnMgr.openTxn("three");
> txnMgr.acquireLocks(driver.getPlan(), ctx, "three");
> locks = getLocks();
> Assert.assertEquals("Unexpected lock count", 3, locks.size());
> checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "T", null, 
> locks.get(0));
> checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "S", null, 
> locks.get(1));
> checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "R", null, 
> locks.get(2));
> {noformat}
> This test case is failing. The expected order of locks is supposed to be T, 
> S, R. But upon closer inspection, it seems to be R,S,T. 
> I'm not much familiar with what these locks are and why the order is 
> important. Raising this jira so while I try to understand it all. Meanwhile, 
> if somebody can explain here, would be helpful. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14357) TestDbTxnManager2#testLocksInSubquery failing in branch-2.1

2016-07-29 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400332#comment-15400332
 ] 

Eugene Koifman commented on HIVE-14357:
---

+1 pending tests

> TestDbTxnManager2#testLocksInSubquery failing in branch-2.1
> ---
>
> Key: HIVE-14357
> URL: https://issues.apache.org/jira/browse/HIVE-14357
> Project: Hive
>  Issue Type: Bug
>Reporter: Rajat Khandelwal
>Assignee: Sergey Shelukhin
> Attachments: HIVE-14357.patch
>
>
> {noformat}
> checkCmdOnDriver(driver.compileAndRespond("insert into R select * from S 
> where a in (select a from T where b = 1)"));
> txnMgr.openTxn("three");
> txnMgr.acquireLocks(driver.getPlan(), ctx, "three");
> locks = getLocks();
> Assert.assertEquals("Unexpected lock count", 3, locks.size());
> checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "T", null, 
> locks.get(0));
> checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "S", null, 
> locks.get(1));
> checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "R", null, 
> locks.get(2));
> {noformat}
> This test case is failing. The expected order of locks is supposed to be T, 
> S, R. But upon closer inspection, it seems to be R,S,T. 
> I'm not much familiar with what these locks are and why the order is 
> important. Raising this jira so while I try to understand it all. Meanwhile, 
> if somebody can explain here, would be helpful. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14303) CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to avoid NPE if ExecReducer.close is called twice.

2016-07-29 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400297#comment-15400297
 ] 

Ashutosh Chauhan commented on HIVE-14303:
-

Seems like these reported failures were indeed related. See failures in 
subsequent run at https://issues.apache.org/jira/browse/HIVE-14365 

Please verify and revert this patch.

> CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to 
> avoid NPE if ExecReducer.close is called twice.
> -
>
> Key: HIVE-14303
> URL: https://issues.apache.org/jira/browse/HIVE-14303
> Project: Hive
>  Issue Type: Bug
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.2.0
>
> Attachments: HIVE-14303.0.patch
>
>
> CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to 
> avoid NPE if ExecReducer.close is called twice. ExecReducer.close implements 
> Closeable interface and ExecReducer.close can be called multiple time. We saw 
> the following NPE which hide the real exception due to this bug.
> {code}
> Error: java.lang.RuntimeException: Hive Runtime Error while closing 
> operators: null
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:296)
> at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
> at 
> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
> at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:718)
> at 
> org.apache.hadoop.hive.ql.exec.JoinOperator.endGroup(JoinOperator.java:256)
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:284)
> ... 8 more
> {code}
> The code from ReduceTask.runOldReducer:
> {code}
>   reducer.close(); //line 453
>   reducer = null;
>   
>   out.close(reporter);
>   out = null;
> } finally {
>   IOUtils.cleanup(LOG, reducer);// line 459
>   closeQuietly(out, reporter);
> }
> {code}
> Based on the above stack trace and code, reducer.close() is called twice 
> because the exception happened when reducer.close() is called for the first 
> time at line 453, the code exit before reducer was set to null. 
> NullPointerException is triggered when reducer.close() is called for the 
> second time in IOUtils.cleanup at line 459. NullPointerException hide the 
> real exception which happened when reducer.close() is called for the first 
> time at line 453.
> The reason for NPE is:
> The first reducer.close called CommonJoinOperator.closeOp which clear 
> {{storage}}
> {code}
> Arrays.fill(storage, null);
> {code}
> the second reduce.close generated NPE due to null {{storage[alias]}} which is 
> set to null by first reducer.close.
> The following reducer log can give more proof:
> {code}
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: 0 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: 0 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: SKEWJOINFOLLOWUPJOBS:0
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 1 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 2 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 3 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: 4 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: FS[4]: records written - 
> 53466
> 2016-07-14 22:25:11,555 ERROR [main] ExecReducer: Hit error while closing 
> operators - failing tree
> 2016-07-14 22:25:11,649 WARN [main] org.apache.hadoop.mapred.YarnChild: 
> Exception running child : java.lang.RuntimeException: Hive Runtime Error 
> while closing operators: null
>   at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:296)
>   at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
>   at 
> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
> 

[jira] [Commented] (HIVE-14348) Add tests for alter table exchange partition

2016-07-29 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400294#comment-15400294
 ] 

Vaibhav Gumashta commented on HIVE-14348:
-

Committed to 2.1 and master. Will attach a patch for branch-1

> Add tests for alter table exchange partition
> 
>
> Key: HIVE-14348
> URL: https://issues.apache.org/jira/browse/HIVE-14348
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1, 2.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-14348.1.patch, HIVE-14348.2.patch, 
> HIVE-14348.3.patch, HIVE-14348.4.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14378) Data size may be estimated as 0 if no columns are being projected after an operator

2016-07-29 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400290#comment-15400290
 ] 

Ashutosh Chauhan commented on HIVE-14378:
-

[~jcamachorodriguez] Can you please take a look?

> Data size may be estimated as 0 if no columns are being projected after an 
> operator
> ---
>
> Key: HIVE-14378
> URL: https://issues.apache.org/jira/browse/HIVE-14378
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer, Statistics
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-14378.patch
>
>
> in those cases we still emit rows.. but they may not have any columns within 
> it.  We shouldn't estimate 0 data size in such cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14357) TestDbTxnManager2#testLocksInSubquery failing in branch-2.1

2016-07-29 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-14357:

Status: Patch Available  (was: Open)

[~ekoifman] [~wzheng] can you take a look?

> TestDbTxnManager2#testLocksInSubquery failing in branch-2.1
> ---
>
> Key: HIVE-14357
> URL: https://issues.apache.org/jira/browse/HIVE-14357
> Project: Hive
>  Issue Type: Bug
>Reporter: Rajat Khandelwal
>Assignee: Sergey Shelukhin
> Attachments: HIVE-14357.patch
>
>
> {noformat}
> checkCmdOnDriver(driver.compileAndRespond("insert into R select * from S 
> where a in (select a from T where b = 1)"));
> txnMgr.openTxn("three");
> txnMgr.acquireLocks(driver.getPlan(), ctx, "three");
> locks = getLocks();
> Assert.assertEquals("Unexpected lock count", 3, locks.size());
> checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "T", null, 
> locks.get(0));
> checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "S", null, 
> locks.get(1));
> checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "R", null, 
> locks.get(2));
> {noformat}
> This test case is failing. The expected order of locks is supposed to be T, 
> S, R. But upon closer inspection, it seems to be R,S,T. 
> I'm not much familiar with what these locks are and why the order is 
> important. Raising this jira so while I try to understand it all. Meanwhile, 
> if somebody can explain here, would be helpful. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14357) TestDbTxnManager2#testLocksInSubquery failing in branch-2.1

2016-07-29 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-14357:

Attachment: HIVE-14357.patch

Removed the order dependency from test. When it fails, the error looks like 
this:
{noformat}
java.lang.AssertionError: Could't find {SHARED_READ, ACQUIRED, default, XYZY, 
null} in [ShowLocksResponseElement(lockid:2, dbname:default, tablename:xyz, 
state:ACQUIRED, type:SHARED_READ, txnid:0, lastheartbeat:1469837191676, 
acquiredat:1469837191676, user:XYZ, hostname:reznor-mbp-2.local, 
agentInfo:sergey_20160729170623_e1153b39-54d3-4cab-9766-c1349c8efa4a, 
lockIdInternal:1)]{noformat}

> TestDbTxnManager2#testLocksInSubquery failing in branch-2.1
> ---
>
> Key: HIVE-14357
> URL: https://issues.apache.org/jira/browse/HIVE-14357
> Project: Hive
>  Issue Type: Bug
>Reporter: Rajat Khandelwal
>Assignee: Sergey Shelukhin
> Attachments: HIVE-14357.patch
>
>
> {noformat}
> checkCmdOnDriver(driver.compileAndRespond("insert into R select * from S 
> where a in (select a from T where b = 1)"));
> txnMgr.openTxn("three");
> txnMgr.acquireLocks(driver.getPlan(), ctx, "three");
> locks = getLocks();
> Assert.assertEquals("Unexpected lock count", 3, locks.size());
> checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "T", null, 
> locks.get(0));
> checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "S", null, 
> locks.get(1));
> checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "R", null, 
> locks.get(2));
> {noformat}
> This test case is failing. The expected order of locks is supposed to be T, 
> S, R. But upon closer inspection, it seems to be R,S,T. 
> I'm not much familiar with what these locks are and why the order is 
> important. Raising this jira so while I try to understand it all. Meanwhile, 
> if somebody can explain here, would be helpful. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14367) Estimated size for constant nulls is 0

2016-07-29 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-14367:

Status: Patch Available  (was: Open)

> Estimated size for constant nulls is 0
> --
>
> Key: HIVE-14367
> URL: https://issues.apache.org/jira/browse/HIVE-14367
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Affects Versions: 2.1.0, 2.0.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-14367.1.patch, HIVE-14367.1.patch
>
>
> since type is incorrectly assumed as void.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14367) Estimated size for constant nulls is 0

2016-07-29 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-14367:

Attachment: HIVE-14367.1.patch

> Estimated size for constant nulls is 0
> --
>
> Key: HIVE-14367
> URL: https://issues.apache.org/jira/browse/HIVE-14367
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Affects Versions: 2.0.0, 2.1.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-14367.1.patch, HIVE-14367.1.patch
>
>
> since type is incorrectly assumed as void.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14367) Estimated size for constant nulls is 0

2016-07-29 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-14367:

Status: Open  (was: Patch Available)

> Estimated size for constant nulls is 0
> --
>
> Key: HIVE-14367
> URL: https://issues.apache.org/jira/browse/HIVE-14367
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Affects Versions: 2.1.0, 2.0.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-14367.1.patch
>
>
> since type is incorrectly assumed as void.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14381) Handle null value in WindowingTableFunction.WindowingIterator.next()

2016-07-29 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-14381:
-
Attachment: HIVE-14381.2.patch

Thanks [~jcamachorodriguez] for the review. Patch 2 takes {{output.set(j, 
out);}} out from the if block

> Handle null value in WindowingTableFunction.WindowingIterator.next()
> 
>
> Key: HIVE-14381
> URL: https://issues.apache.org/jira/browse/HIVE-14381
> Project: Hive
>  Issue Type: Bug
>  Components: PTF-Windowing
>Affects Versions: 1.3.0, 2.1.0, 2.2.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
> Attachments: HIVE-14381.1.patch, HIVE-14381.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-13257) GroupBy with column alias does not support AVG

2016-07-29 Thread stephen sprague (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400173#comment-15400173
 ] 

stephen sprague edited comment on HIVE-13257 at 7/29/16 11:25 PM:
--

use  "order by 1" as your test case.you'll get the same error.

I should mention this is hive v2.1.0

{code}
[dwrdevnn1:10001] hive> select count(*), date_key FROM 
Raw_logs.fact_tracking_datekey_by_hour group by date_key order by 1 desc;
Fri Jul 29 15:47:06 2016 - Thrift::API::HiveClient2::execute: Error while 
compiling statement: FAILED: SemanticException [Error 10128]: Line 1:7 Not yet 
supported place for UDAF 'count'; HQL was: "select count(*), date_key FROM 
Raw_logs.fact_tracking_datekey_by_hour group by date_key order by 1 desc"
{code}

and yes "hive.groupby.orderby.position.alias=true"

thoughts?



was (Author: spragues):
use  "order by 1" as your test case.you'll get the same error.

{code}
[dwrdevnn1:10001] hive> select count(*), date_key FROM 
Raw_logs.fact_tracking_datekey_by_hour group by date_key order by 1 desc;
Fri Jul 29 15:47:06 2016 - Thrift::API::HiveClient2::execute: Error while 
compiling statement: FAILED: SemanticException [Error 10128]: Line 1:7 Not yet 
supported place for UDAF 'count'; HQL was: "select count(*), date_key FROM 
Raw_logs.fact_tracking_datekey_by_hour group by date_key order by 1 desc"
{code}

and yes "hive.groupby.orderby.position.alias=true"

thoughts?


> GroupBy with column alias does not support AVG
> --
>
> Key: HIVE-13257
> URL: https://issues.apache.org/jira/browse/HIVE-13257
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Prasanth Jayachandran
>
> For the following query, with hive.groupby.orderby.position.alias set to true
> {code:title=Query}
> SELECT Avg(`t0`.`x_measure__0`) AS `avg_calculation_270497503505567749_ok` 
> FROM   (SELECT `store_sales`.`ss_ticket_number` AS `ss_ticket_number`, 
>Sum(`store_sales`.`ss_net_paid`) AS `x_measure__0` 
> FROM   `store_sales` `store_sales` 
>JOIN `item` `item` 
>  ON ( `store_sales`.`ss_item_sk` = `item`.`i_item_sk` ) 
> GROUP  BY `store_sales`.`ss_ticket_number`) `t0` 
> GROUP  BY 1 
> HAVING ( Count(1) > 0 );
> {code}
> it throws the following exception
> {code:title=Exception}
> FAILED: SemanticException [Error 10128]: Line 2:7 Not yet supported place for 
> UDAF 'Avg’
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14348) Add tests for alter table exchange partition

2016-07-29 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400192#comment-15400192
 ] 

Vaibhav Gumashta commented on HIVE-14348:
-

Test failures unrelated. I'll commit shortly and create a patch for branch-1.

> Add tests for alter table exchange partition
> 
>
> Key: HIVE-14348
> URL: https://issues.apache.org/jira/browse/HIVE-14348
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1, 2.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-14348.1.patch, HIVE-14348.2.patch, 
> HIVE-14348.3.patch, HIVE-14348.4.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13257) GroupBy with column alias does not support AVG

2016-07-29 Thread stephen sprague (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400173#comment-15400173
 ] 

stephen sprague commented on HIVE-13257:


use  "order by 1" as your test case.you'll get the same error.

{code}
[dwrdevnn1:10001] hive> select count(*), date_key FROM 
Raw_logs.fact_tracking_datekey_by_hour group by date_key order by 1 desc;
Fri Jul 29 15:47:06 2016 - Thrift::API::HiveClient2::execute: Error while 
compiling statement: FAILED: SemanticException [Error 10128]: Line 1:7 Not yet 
supported place for UDAF 'count'; HQL was: "select count(*), date_key FROM 
Raw_logs.fact_tracking_datekey_by_hour group by date_key order by 1 desc"
{code}

and yes "hive.groupby.orderby.position.alias=true"

thoughts?


> GroupBy with column alias does not support AVG
> --
>
> Key: HIVE-13257
> URL: https://issues.apache.org/jira/browse/HIVE-13257
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Prasanth Jayachandran
>
> For the following query, with hive.groupby.orderby.position.alias set to true
> {code:title=Query}
> SELECT Avg(`t0`.`x_measure__0`) AS `avg_calculation_270497503505567749_ok` 
> FROM   (SELECT `store_sales`.`ss_ticket_number` AS `ss_ticket_number`, 
>Sum(`store_sales`.`ss_net_paid`) AS `x_measure__0` 
> FROM   `store_sales` `store_sales` 
>JOIN `item` `item` 
>  ON ( `store_sales`.`ss_item_sk` = `item`.`i_item_sk` ) 
> GROUP  BY `store_sales`.`ss_ticket_number`) `t0` 
> GROUP  BY 1 
> HAVING ( Count(1) > 0 );
> {code}
> it throws the following exception
> {code:title=Exception}
> FAILED: SemanticException [Error 10128]: Line 2:7 Not yet supported place for 
> UDAF 'Avg’
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14365) Simplify logic for check introduced in HIVE-10022

2016-07-29 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400154#comment-15400154
 ] 

Hive QA commented on HIVE-14365:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12820589/HIVE-14365.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 183 failed/errored test(s), 10377 tests 
executed
*Failed tests:*
{noformat}
TestMsgBusConnection - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_smb_mapjoin_14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_nullable_union
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketizedhiveinputformat_auto
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketsortoptimize_insert_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketsortoptimize_insert_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketsortoptimize_insert_7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketsortoptimize_insert_8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_rp_auto_join1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_filters
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_nulls
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_nullsafe
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_join
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_join_partition_key
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_16
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_17
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sort_merge_join_desc_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sort_merge_join_desc_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sort_merge_join_desc_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sort_merge_join_desc_5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sort_merge_join_desc_8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_auto_smb_mapjoin_14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorized_bucketmapjoin1
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_cte_5
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_cte_mat_4
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_cte_mat_5
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_dynamic_partition_pruning
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_hybridgrace_hashjoin_1
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_mrr
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_orc_llap
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_dynpart_hashjoin_1
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_join_hash
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_join_tests
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_joins_explain
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_self_join

[jira] [Resolved] (HIVE-8513) Vectorize multiple children operators

2016-07-29 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline resolved HIVE-8513.

Resolution: Won't Fix

> Vectorize multiple children operators 
> --
>
> Key: HIVE-8513
> URL: https://issues.apache.org/jira/browse/HIVE-8513
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Matt McCline
>Assignee: Matt McCline
>
> For HIVE-8498 we turned off vectorization if any operator is encountered with 
> multiple children.
> Thus, this query will not be vectorized:
> from orc1 a
> insert overwrite table orc_rn1 select a.* where a.rn < 100
> insert overwrite table orc_rn2 select a.* where a.rn >= 100 and a.rn < 1000
> insert overwrite table orc_rn3 select a.* where a.rn >= 1000;
> But Prashanth [~prasanthj] also points that correlation operators Mux/Demux 
> will also not be vectorized, too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-14357) TestDbTxnManager2#testLocksInSubquery failing in branch-2.1

2016-07-29 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HIVE-14357:
---

Assignee: Sergey Shelukhin  (was: Wei Zheng)

> TestDbTxnManager2#testLocksInSubquery failing in branch-2.1
> ---
>
> Key: HIVE-14357
> URL: https://issues.apache.org/jira/browse/HIVE-14357
> Project: Hive
>  Issue Type: Bug
>Reporter: Rajat Khandelwal
>Assignee: Sergey Shelukhin
>
> {noformat}
> checkCmdOnDriver(driver.compileAndRespond("insert into R select * from S 
> where a in (select a from T where b = 1)"));
> txnMgr.openTxn("three");
> txnMgr.acquireLocks(driver.getPlan(), ctx, "three");
> locks = getLocks();
> Assert.assertEquals("Unexpected lock count", 3, locks.size());
> checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "T", null, 
> locks.get(0));
> checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "S", null, 
> locks.get(1));
> checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "R", null, 
> locks.get(2));
> {noformat}
> This test case is failing. The expected order of locks is supposed to be T, 
> S, R. But upon closer inspection, it seems to be R,S,T. 
> I'm not much familiar with what these locks are and why the order is 
> important. Raising this jira so while I try to understand it all. Meanwhile, 
> if somebody can explain here, would be helpful. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-14357) TestDbTxnManager2#testLocksInSubquery failing in branch-2.1

2016-07-29 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng reassigned HIVE-14357:


Assignee: Wei Zheng

> TestDbTxnManager2#testLocksInSubquery failing in branch-2.1
> ---
>
> Key: HIVE-14357
> URL: https://issues.apache.org/jira/browse/HIVE-14357
> Project: Hive
>  Issue Type: Bug
>Reporter: Rajat Khandelwal
>Assignee: Wei Zheng
>
> {noformat}
> checkCmdOnDriver(driver.compileAndRespond("insert into R select * from S 
> where a in (select a from T where b = 1)"));
> txnMgr.openTxn("three");
> txnMgr.acquireLocks(driver.getPlan(), ctx, "three");
> locks = getLocks();
> Assert.assertEquals("Unexpected lock count", 3, locks.size());
> checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "T", null, 
> locks.get(0));
> checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "S", null, 
> locks.get(1));
> checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "R", null, 
> locks.get(2));
> {noformat}
> This test case is failing. The expected order of locks is supposed to be T, 
> S, R. But upon closer inspection, it seems to be R,S,T. 
> I'm not much familiar with what these locks are and why the order is 
> important. Raising this jira so while I try to understand it all. Meanwhile, 
> if somebody can explain here, would be helpful. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-12826) Vectorization: fix VectorUDAF* suspect isNull checks

2016-07-29 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400095#comment-15400095
 ] 

Matt McCline edited comment on HIVE-12826 at 7/29/16 10:11 PM:
---

This fix is present in branch-2.1 before tags/release-2.1.0-rc3


was (Author: mmccline):
This fix is present in branch-2.1

> Vectorization: fix VectorUDAF* suspect isNull checks
> 
>
> Key: HIVE-12826
> URL: https://issues.apache.org/jira/browse/HIVE-12826
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 1.3.0, 2.0.0, 2.1.0
>Reporter: Gopal V
>Assignee: Gopal V
> Fix For: 2.0.0
>
> Attachments: HIVE-12826.1.patch
>
>
> for isRepeating=true, checking isNull[selected[i]] might return incorrect 
> results (without a heavy array fill of isNull).
> VectorUDAFSum/Min/Max/Avg and SumDecimal impls need to be reviewed for this 
> pattern.
> {code}
> private void iterateHasNullsRepeatingSelectionWithAggregationSelection(
>   VectorAggregationBufferRow[] aggregationBufferSets,
>   int aggregateIndex,
>value,
>   int batchSize,
>   int[] selection,
>   boolean[] isNull) {
>   
>   for (int i=0; i < batchSize; ++i) {
> if (!isNull[selection[i]]) {
>   Aggregation myagg = getCurrentAggregationBuffer(
> aggregationBufferSets, 
> aggregateIndex,
> i);
>   myagg.sumValue(value);
> }
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14374) BeeLine argument, and configuration handling cleanup

2016-07-29 Thread Peter Vary (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400101#comment-15400101
 ] 

Peter Vary commented on HIVE-14374:
---

[~stakiar] Currently only a handful of parameters are documented as a command 
line option, but almost twice as many could be set from the configuration file. 
I don't know how much of it is by design, and how much of this is by error. I 
hope someone with more experience in Hive could help here.

+1 for the commons-cli solution and then we could forget every annotation 
magic. I wasn't brave enough to suggest it, although it is already used in 
beeline compatibility mode.

> BeeLine argument, and configuration handling cleanup
> 
>
> Key: HIVE-14374
> URL: https://issues.apache.org/jira/browse/HIVE-14374
> Project: Hive
>  Issue Type: Improvement
>  Components: Beeline
>Affects Versions: 2.2.0
>Reporter: Peter Vary
>Assignee: Peter Vary
>
> BeeLine uses reflection, to set the BeeLineOpts attributes when parsing 
> command line arguments, and when loading the configuration file.
> This means, that creating a setXXX, getXXX method in BeeLineOpts is a 
> potential risk of exposing an attribute for the user unintentionally. There 
> is a possibility to exclude an attribute from saving the value in the 
> configuration file with the Ignore annotation. This does not restrict the 
> loading or command line setting of these parameters which means there are 
> many undocumented "features" as-is, like setting the lastConnectedUrl, 
> allowMultilineCommand, maxHeight, trimScripts, etc. from command line.
> This part of the code needs a little cleanup.
> I think we should make this exposure more explicit, and be able to 
> differentiate the configurable options depending on the source (command line, 
> and configuration file), so I propose to create a mechanism to tell 
> explicitly which BeeLineOpts attributes are settable by command line, and 
> configuration file, and every other attribute should be inaccessible by the 
> user of the beeline cli.
> One possible solution could be two annotations like these:
> - CommandLineOption - there could be a mandatory text parameter here, so the 
> developer had to provide the help text for it which could be displayed to the 
> user
> - ConfigurationFileOption - no text is required here
> Something like this:
> - This attribute could be provided by command line, and from a configuration 
> file too:
> {noformat}
> @CommandLineOption("automatically save preferences")
> @ConfigurationFileOption
> public void setAutosave(boolean autosave) {
>   this.autosave = autosave;
> }
> public void getAutosave() {
>   return this.autosave;
> }
> {noformat}
> - This attribute could be set through the configuration only
> {noformat}
> @ConfigurationFileOption
> public void setLastConnectedUrl(String lastConnectedUrl) {
>   this.lastConnectedUrl = lastConnectedUrl;

> }
> 

> public String getLastConnectedUrl()
> {

>   return lastConnectedUrl;
> 
}
> 
{noformat}
> - Attribute could be set through command line only - I think this is not too 
> relevant, but possible
> {noformat}
> @CommandLineOption("specific command line option")
> public void setSpecificCommandLineOption(String specificCommandLineOption) {
> 
  this.specificCommandLineOption = specificCommandLineOption;
> 
}
> 

> public String getSpecificCommandLineOption() {
> 
  return specificCommandLineOption;
> 
}
> 
{noformat}
> - Attribute could not be set
> {noformat}
> public static Env getEnv() {
> 
  return env;
> 
}
> 

public static void setEnv(Env envToUse) {
> 
  env = envToUse;
> 
}
> {noformat}
> Accouring to our previous conversations, I think you might be interested in: 
> [~spena], [~vihangk1], [~aihuaxu], [~ngangam], [~ychena], [~xuefuz]
> but anyone is welcome to discuss this.
> What do you think about the proposed solution?
> Any better ideas, or extensions?
> Thanks,
> Peter



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12826) Vectorization: fix VectorUDAF* suspect isNull checks

2016-07-29 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400095#comment-15400095
 ] 

Matt McCline commented on HIVE-12826:
-

This fix is present in branch-2.1

> Vectorization: fix VectorUDAF* suspect isNull checks
> 
>
> Key: HIVE-12826
> URL: https://issues.apache.org/jira/browse/HIVE-12826
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 1.3.0, 2.0.0, 2.1.0
>Reporter: Gopal V
>Assignee: Gopal V
> Fix For: 2.0.0
>
> Attachments: HIVE-12826.1.patch
>
>
> for isRepeating=true, checking isNull[selected[i]] might return incorrect 
> results (without a heavy array fill of isNull).
> VectorUDAFSum/Min/Max/Avg and SumDecimal impls need to be reviewed for this 
> pattern.
> {code}
> private void iterateHasNullsRepeatingSelectionWithAggregationSelection(
>   VectorAggregationBufferRow[] aggregationBufferSets,
>   int aggregateIndex,
>value,
>   int batchSize,
>   int[] selection,
>   boolean[] isNull) {
>   
>   for (int i=0; i < batchSize; ++i) {
> if (!isNull[selection[i]]) {
>   Aggregation myagg = getCurrentAggregationBuffer(
> aggregationBufferSets, 
> aggregateIndex,
> i);
>   myagg.sumValue(value);
> }
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14146) Column comments with "\n" character "corrupts" table metadata

2016-07-29 Thread Peter Vary (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15400036#comment-15400036
 ] 

Peter Vary commented on HIVE-14146:
---

[~aihuaxu] Checked, and found the following:
- Beeline - when using "describe pretty" the \n is padded and displayed - this 
is as intended - the original behavior is not changed
- Cli - when using "describe pretty" the \n is padded and displayed - this is 
as intended - the original behavior is not changed

What I have missed, that in Cli when using "describe formatted" the following 
code sets the output to padded:
{noformat}
package org.apache.hadoop.hive.ql.exec;
[..]
public class DDLTask extends Task implements Serializable {
[..]
  private int describeTable(Hive db, DescTableDesc descTbl) throws 
HiveException {
[..]
  // In case the query is served by HiveServer2, don't pad it with spaces,
  // as HiveServer2 output is consumed by JDBC/ODBC clients.
  boolean isOutputPadded = !SessionState.get().isHiveServerQuery();
{noformat}
So the column comments are printed as a formatted strings with new lines - but 
not the table comment. (same as in pretty mode)

What do you think we should do here?
I personally would prefer to remove this code below (pretty comment 
formatting), so every comment should be handled in the same way (printing \n 
instead of newline):
{noformat}
package org.apache.hadoop.hive.ql.metadata.formatting;
[..]
public final class MetaDataFormatUtils {
[..]
  private static void formatWithIndentation(String colName, String colType, 
String colComment,
[..]
// comment indent processing for multi-line comments
// comments should be indented the same amount on each line
// if the first line comment starts indented by k,
// the following line comments should also be indented by k
String[] commentSegments = colComment.split("\n|\r|\r\n");
tableInfo.append(String.format("%-" + ALIGNMENT + "s", 
commentSegments[0])).append(LINE_DELIM);
int colNameLength = ALIGNMENT > colName.length() ? ALIGNMENT : 
colName.length();
int colTypeLength = ALIGNMENT > colType.length() ? ALIGNMENT : 
colType.length();
for (int i = 1; i < commentSegments.length; i++) {
  tableInfo.append(String.format("%" + colNameLength + "s" + FIELD_DELIM + 
"%"
  + colTypeLength + "s" + FIELD_DELIM + "%s", "", "", 
commentSegments[i])).append(LINE_DELIM);
}
{noformat}

> Column comments with "\n" character "corrupts" table metadata
> -
>
> Key: HIVE-14146
> URL: https://issues.apache.org/jira/browse/HIVE-14146
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 2.2.0
>Reporter: Peter Vary
>Assignee: Peter Vary
> Attachments: HIVE-14146.2.patch, HIVE-14146.3.patch, 
> HIVE-14146.4.patch, HIVE-14146.5.patch, HIVE-14146.6.patch, HIVE-14146.patch
>
>
> Create a table with the following(noting the \n in the COMMENT):
> {noformat}
> CREATE TABLE commtest(first_nm string COMMENT 'Indicates First name\nof an 
> individual’);
> {noformat}
> Describe shows that now the metadata is messed up:
> {noformat}
> beeline> describe commtest;
> +---++---+--+
> | col_name  | data_type  |comment|
> +---++---+--+
> | first_nm | string   | Indicates First name  |
> | of an individual  | NULL   | NULL  |
> +---++---+--+
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14376) Schema evolution tests takes a long time

2016-07-29 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-14376:
-
Assignee: (was: Prasanth Jayachandran)

> Schema evolution tests takes a long time
> 
>
> Key: HIVE-14376
> URL: https://issues.apache.org/jira/browse/HIVE-14376
> Project: Hive
>  Issue Type: Bug
>  Components: Test
>Affects Versions: 2.2.0, 2.1.1
>Reporter: Prasanth Jayachandran
>Priority: Minor
>
> schema_evol_* tests (14 tests) takes 47 mins in tez, 40mins in TestCliDriver. 
> Same set of tests are added to llap as well in HIVE-14355 which will some 
> more time. Most tests uses INSERT into table which can be slow and is not 
> needed. Even some individual tests takes about 5 mins to run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14123) Add beeline configuration option to show database in the prompt

2016-07-29 Thread Peter Vary (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1546#comment-1546
 ] 

Peter Vary commented on HIVE-14123:
---

[~leftylev] Is this the standard method of taking note of the required 
documentation changes?
- Label
- Url of the doc which should be changed


> Add beeline configuration option to show database in the prompt
> ---
>
> Key: HIVE-14123
> URL: https://issues.apache.org/jira/browse/HIVE-14123
> Project: Hive
>  Issue Type: Improvement
>  Components: Beeline, CLI
>Affects Versions: 2.2.0
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Minor
>  Labels: TODOC2.2
> Fix For: 2.2.0
>
> Attachments: HIVE-14123.10.patch, HIVE-14123.2.patch, 
> HIVE-14123.3.patch, HIVE-14123.4.patch, HIVE-14123.5.patch, 
> HIVE-14123.6.patch, HIVE-14123.7.patch, HIVE-14123.8.patch, 
> HIVE-14123.9.patch, HIVE-14123.patch
>
>
> There are several jira issues complaining that, the Beeline does not respect 
> hive.cli.print.current.db.
> This is partially true, since in embedded mode, it uses the 
> hive.cli.print.current.db to change the prompt, since HIVE-10511.
> In beeline mode, I think this function should use a beeline command line 
> option instead, like for the showHeader option emphasizing, that this is a 
> client side option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14386) UGI clone shim also needs to clone credentials

2016-07-29 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-14386:

Attachment: HIVE-14386.patch

[~sseth] can you take a look?

> UGI clone shim also needs to clone credentials
> --
>
> Key: HIVE-14386
> URL: https://issues.apache.org/jira/browse/HIVE-14386
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-14386.patch
>
>
> Discovered while testing HADOOP-13081



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14386) UGI clone shim also needs to clone credentials

2016-07-29 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-14386:

Status: Patch Available  (was: Open)

> UGI clone shim also needs to clone credentials
> --
>
> Key: HIVE-14386
> URL: https://issues.apache.org/jira/browse/HIVE-14386
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-14386.patch
>
>
> Discovered while testing HADOOP-13081



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14340) Add a new hook triggers before query compilation and after query execution

2016-07-29 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399979#comment-15399979
 ] 

Jimmy Xiang commented on HIVE-14340:


Multiple configurations/interfaces should be fine.

> Add a new hook triggers before query compilation and after query execution
> --
>
> Key: HIVE-14340
> URL: https://issues.apache.org/jira/browse/HIVE-14340
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 2.2.0
>Reporter: Chao Sun
>Assignee: Chao Sun
> Attachments: HIVE-14340.0.patch
>
>
> In some cases we may need to have a hook that activates before a query 
> compilation and after its execution. For instance, dynamically generate a UDF 
> specifically for the running query and clean up the resource after the query 
> is done. The current hooks only covers pre & post semantic analysis, pre & 
> post query execution, which doesn't fit the requirement.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14377) LLAP IO: issue with how estimate cache removes unneeded buffers

2016-07-29 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399964#comment-15399964
 ] 

Prasanth Jayachandran commented on HIVE-14377:
--

+1

> LLAP IO: issue with how estimate cache removes unneeded buffers
> ---
>
> Key: HIVE-14377
> URL: https://issues.apache.org/jira/browse/HIVE-14377
> Project: Hive
>  Issue Type: Bug
>Reporter: Gopal V
>Assignee: Sergey Shelukhin
> Attachments: HIVE-14377.01.patch, HIVE-14377.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13560) Adding Omid as connection manager for HBase Metastore

2016-07-29 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399950#comment-15399950
 ] 

Hive QA commented on HIVE-13560:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12820568/HIVE-13560.10.patch

{color:green}SUCCESS:{color} +1 due to 7 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 10379 tests 
executed
*Failed tests:*
{noformat}
TestMsgBusConnection - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_nullable_union
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_avro_non_nullable_union
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testDelayedLocalityNodeCommErrorImmediateAllocation
org.apache.hive.service.cli.session.TestSessionManagerMetrics.org.apache.hive.service.cli.session.TestSessionManagerMetrics
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/687/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/687/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-687/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12820568 - PreCommit-HIVE-MASTER-Build

> Adding Omid as connection manager for HBase Metastore
> -
>
> Key: HIVE-13560
> URL: https://issues.apache.org/jira/browse/HIVE-13560
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Metastore
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Attachments: HIVE-13560.1.patch, HIVE-13560.10.patch, 
> HIVE-13560.11.patch, HIVE-13560.2.patch, HIVE-13560.3.patch, 
> HIVE-13560.4.patch, HIVE-13560.5.patch, HIVE-13560.6.patch, 
> HIVE-13560.7.patch, HIVE-13560.8.patch, HIVE-13560.9.patch
>
>
> Adding Omid as a transaction manager to HBase Metastore. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14303) CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to avoid NPE if ExecReducer.close is called twice.

2016-07-29 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HIVE-14303:

   Resolution: Fixed
Fix Version/s: (was: 2.1.0)
   2.2.0
   Status: Resolved  (was: Patch Available)

> CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to 
> avoid NPE if ExecReducer.close is called twice.
> -
>
> Key: HIVE-14303
> URL: https://issues.apache.org/jira/browse/HIVE-14303
> Project: Hive
>  Issue Type: Bug
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.2.0
>
> Attachments: HIVE-14303.0.patch
>
>
> CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to 
> avoid NPE if ExecReducer.close is called twice. ExecReducer.close implements 
> Closeable interface and ExecReducer.close can be called multiple time. We saw 
> the following NPE which hide the real exception due to this bug.
> {code}
> Error: java.lang.RuntimeException: Hive Runtime Error while closing 
> operators: null
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:296)
> at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
> at 
> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
> at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:718)
> at 
> org.apache.hadoop.hive.ql.exec.JoinOperator.endGroup(JoinOperator.java:256)
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:284)
> ... 8 more
> {code}
> The code from ReduceTask.runOldReducer:
> {code}
>   reducer.close(); //line 453
>   reducer = null;
>   
>   out.close(reporter);
>   out = null;
> } finally {
>   IOUtils.cleanup(LOG, reducer);// line 459
>   closeQuietly(out, reporter);
> }
> {code}
> Based on the above stack trace and code, reducer.close() is called twice 
> because the exception happened when reducer.close() is called for the first 
> time at line 453, the code exit before reducer was set to null. 
> NullPointerException is triggered when reducer.close() is called for the 
> second time in IOUtils.cleanup at line 459. NullPointerException hide the 
> real exception which happened when reducer.close() is called for the first 
> time at line 453.
> The reason for NPE is:
> The first reducer.close called CommonJoinOperator.closeOp which clear 
> {{storage}}
> {code}
> Arrays.fill(storage, null);
> {code}
> the second reduce.close generated NPE due to null {{storage[alias]}} which is 
> set to null by first reducer.close.
> The following reducer log can give more proof:
> {code}
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: 0 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: 0 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: SKEWJOINFOLLOWUPJOBS:0
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 1 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 2 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 3 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: 4 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: FS[4]: records written - 
> 53466
> 2016-07-14 22:25:11,555 ERROR [main] ExecReducer: Hit error while closing 
> operators - failing tree
> 2016-07-14 22:25:11,649 WARN [main] org.apache.hadoop.mapred.YarnChild: 
> Exception running child : java.lang.RuntimeException: Hive Runtime Error 
> while closing operators: null
>   at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:296)
>   at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
>   at 
> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
>   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
>   at 

[jira] [Commented] (HIVE-14303) CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to avoid NPE if ExecReducer.close is called twice.

2016-07-29 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399943#comment-15399943
 ] 

Chao Sun commented on HIVE-14303:
-

Committed to master. Thanks [~zxu] for the patch and [~xuefuz] for the review!

> CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to 
> avoid NPE if ExecReducer.close is called twice.
> -
>
> Key: HIVE-14303
> URL: https://issues.apache.org/jira/browse/HIVE-14303
> Project: Hive
>  Issue Type: Bug
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.2.0
>
> Attachments: HIVE-14303.0.patch
>
>
> CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to 
> avoid NPE if ExecReducer.close is called twice. ExecReducer.close implements 
> Closeable interface and ExecReducer.close can be called multiple time. We saw 
> the following NPE which hide the real exception due to this bug.
> {code}
> Error: java.lang.RuntimeException: Hive Runtime Error while closing 
> operators: null
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:296)
> at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
> at 
> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
> at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:718)
> at 
> org.apache.hadoop.hive.ql.exec.JoinOperator.endGroup(JoinOperator.java:256)
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:284)
> ... 8 more
> {code}
> The code from ReduceTask.runOldReducer:
> {code}
>   reducer.close(); //line 453
>   reducer = null;
>   
>   out.close(reporter);
>   out = null;
> } finally {
>   IOUtils.cleanup(LOG, reducer);// line 459
>   closeQuietly(out, reporter);
> }
> {code}
> Based on the above stack trace and code, reducer.close() is called twice 
> because the exception happened when reducer.close() is called for the first 
> time at line 453, the code exit before reducer was set to null. 
> NullPointerException is triggered when reducer.close() is called for the 
> second time in IOUtils.cleanup at line 459. NullPointerException hide the 
> real exception which happened when reducer.close() is called for the first 
> time at line 453.
> The reason for NPE is:
> The first reducer.close called CommonJoinOperator.closeOp which clear 
> {{storage}}
> {code}
> Arrays.fill(storage, null);
> {code}
> the second reduce.close generated NPE due to null {{storage[alias]}} which is 
> set to null by first reducer.close.
> The following reducer log can give more proof:
> {code}
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: 0 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: 0 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: SKEWJOINFOLLOWUPJOBS:0
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 1 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 2 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 3 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: 4 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: FS[4]: records written - 
> 53466
> 2016-07-14 22:25:11,555 ERROR [main] ExecReducer: Hit error while closing 
> operators - failing tree
> 2016-07-14 22:25:11,649 WARN [main] org.apache.hadoop.mapred.YarnChild: 
> Exception running child : java.lang.RuntimeException: Hive Runtime Error 
> while closing operators: null
>   at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:296)
>   at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
>   at 
> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
>   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
>   at 

[jira] [Updated] (HIVE-14322) Postgres db issues after Datanucleus 4.x upgrade

2016-07-29 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-14322:

Attachment: HIVE-14322.04.patch

And again. Looks like it never got picked up.

> Postgres db issues after Datanucleus 4.x upgrade
> 
>
> Key: HIVE-14322
> URL: https://issues.apache.org/jira/browse/HIVE-14322
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.0.0, 2.1.0, 2.0.1
>Reporter: Thejas M Nair
>Assignee: Sergey Shelukhin
> Attachments: HIVE-14322.02.patch, HIVE-14322.03.patch, 
> HIVE-14322.04.patch, HIVE-14322.1.patch
>
>
> With the upgrade to  datanucleus 4.x versions in HIVE-6113, hive does not 
> work properly with postgres.
> The nullable fields in the database have string "NULL::character varying" 
> instead of real NULL values. This causes various issues.
> One example is -
> {code}
> hive> create table t(i int);
> OK
> Time taken: 1.9 seconds
> hive> create view v as select * from t;
> OK
> Time taken: 0.542 seconds
> hive> select * from v;
> FAILED: SemanticException Unable to fetch table v. 
> java.net.URISyntaxException: Relative path in absolute URI: 
> NULL::character%20varying
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14355) Schema evolution for ORC in llap is broken for int to string conversion

2016-07-29 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399874#comment-15399874
 ] 

Sergey Shelukhin commented on HIVE-14355:
-

+1

> Schema evolution for ORC in llap is broken for int to string conversion
> ---
>
> Key: HIVE-14355
> URL: https://issues.apache.org/jira/browse/HIVE-14355
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.0, 2.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Critical
> Attachments: HIVE-14355-java-only.patch, HIVE-14355.1.patch, 
> HIVE-14355.2.java-only.patch, HIVE-14355.2.patch, 
> HIVE-14355.3.java-only.patch, HIVE-14355.3.patch
>
>
> When schema is evolved from any integer type to string then following 
> exceptions are thrown in LLAP (Works fine in Tez). I guess this should happen 
> even for other conversions.
> {code}
> hive> create table orc_integer(b bigint) stored as orc;
> hive> insert into orc_integer values(100);
> hive> select count(*) from orc_integer where b=100;
> OK
> 1
> hive> alter table orc_integer change column b b string;
> hive> select count(*) from orc_integer where b=100;
> // FAIL with following exception
> {code}
> {code:title=When vectorization is enabled}
> 2016-07-27T01:48:05,611  INFO [TezTaskRunner ()] 
> vector.VectorReduceSinkOperator: RECORDS_OUT_INTERMEDIATE_Map_1:0,
> 2016-07-27T01:48:05,611 ERROR [TezTaskRunner ()] tez.TezProcessor: 
> java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: 
> Hive Runtime Error while processing row
> at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:95)
> at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:70)
> at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:393)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:185)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:168)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at 
> org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:110)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing row
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:866)
> at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:86)
> ... 18 more
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.ql.exec.vector.LongColumnVector cannot be cast to 
> org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector
> at 
> org.apache.hadoop.hive.ql.exec.vector.expressions.gen.FilterStringGroupColEqualStringGroupScalarBase.evaluate(FilterStringGroupColEqualStringGroupScalarBase.java:42)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.process(VectorFilterOperator.java:110)
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:879)
> at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:130)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:774)
> ... 19 more
> {code}
> {code:title=When vectorization is disabled}
> 2016-07-27T01:52:43,328  INFO [TezTaskRunner 
> (1469608604787_0002_26_00_00_0)] exec.ReduceSinkOperator: Using tag = -1
> 2016-07-27T01:52:43,328  INFO [TezTaskRunner 
> (1469608604787_0002_26_00_00_0)] exec.OperatorUtils: Setting 

[jira] [Commented] (HIVE-14303) CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to avoid NPE if ExecReducer.close is called twice.

2016-07-29 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399859#comment-15399859
 ] 

zhihai xu commented on HIVE-14303:
--

Thanks for the review [~csun]! All these test failures are not related to my 
patch!

> CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to 
> avoid NPE if ExecReducer.close is called twice.
> -
>
> Key: HIVE-14303
> URL: https://issues.apache.org/jira/browse/HIVE-14303
> Project: Hive
>  Issue Type: Bug
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.1.0
>
> Attachments: HIVE-14303.0.patch
>
>
> CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to 
> avoid NPE if ExecReducer.close is called twice. ExecReducer.close implements 
> Closeable interface and ExecReducer.close can be called multiple time. We saw 
> the following NPE which hide the real exception due to this bug.
> {code}
> Error: java.lang.RuntimeException: Hive Runtime Error while closing 
> operators: null
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:296)
> at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
> at 
> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
> at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:718)
> at 
> org.apache.hadoop.hive.ql.exec.JoinOperator.endGroup(JoinOperator.java:256)
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:284)
> ... 8 more
> {code}
> The code from ReduceTask.runOldReducer:
> {code}
>   reducer.close(); //line 453
>   reducer = null;
>   
>   out.close(reporter);
>   out = null;
> } finally {
>   IOUtils.cleanup(LOG, reducer);// line 459
>   closeQuietly(out, reporter);
> }
> {code}
> Based on the above stack trace and code, reducer.close() is called twice 
> because the exception happened when reducer.close() is called for the first 
> time at line 453, the code exit before reducer was set to null. 
> NullPointerException is triggered when reducer.close() is called for the 
> second time in IOUtils.cleanup at line 459. NullPointerException hide the 
> real exception which happened when reducer.close() is called for the first 
> time at line 453.
> The reason for NPE is:
> The first reducer.close called CommonJoinOperator.closeOp which clear 
> {{storage}}
> {code}
> Arrays.fill(storage, null);
> {code}
> the second reduce.close generated NPE due to null {{storage[alias]}} which is 
> set to null by first reducer.close.
> The following reducer log can give more proof:
> {code}
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: 0 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: 0 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: SKEWJOINFOLLOWUPJOBS:0
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 1 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 2 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 3 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: 4 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: FS[4]: records written - 
> 53466
> 2016-07-14 22:25:11,555 ERROR [main] ExecReducer: Hit error while closing 
> operators - failing tree
> 2016-07-14 22:25:11,649 WARN [main] org.apache.hadoop.mapred.YarnChild: 
> Exception running child : java.lang.RuntimeException: Hive Runtime Error 
> while closing operators: null
>   at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:296)
>   at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
>   at 
> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
>   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
>   at 

[jira] [Commented] (HIVE-14340) Add a new hook triggers before query compilation and after query execution

2016-07-29 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399857#comment-15399857
 ] 

Chao Sun commented on HIVE-14340:
-

Another reason is that user may need to use the same hook object for 
compilation and execution, to maintain some internal state. Neither semantic 
analysis hook nor exec hook satisfy this requirement.
Actually my patch also failed on this since it triggers in different 
functions.. I'll upload a new patch.

> Add a new hook triggers before query compilation and after query execution
> --
>
> Key: HIVE-14340
> URL: https://issues.apache.org/jira/browse/HIVE-14340
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 2.2.0
>Reporter: Chao Sun
>Assignee: Chao Sun
> Attachments: HIVE-14340.0.patch
>
>
> In some cases we may need to have a hook that activates before a query 
> compilation and after its execution. For instance, dynamically generate a UDF 
> specifically for the running query and clean up the resource after the query 
> is done. The current hooks only covers pre & post semantic analysis, pre & 
> post query execution, which doesn't fit the requirement.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14380) Queries on tables with remote HDFS paths fail in "encryption" checks.

2016-07-29 Thread Mithun Radhakrishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399853#comment-15399853
 ] 

Mithun Radhakrishnan commented on HIVE-14380:
-

Thanks for reviewing, [~spena]. :] Also, yikes. I'm not sure how 2 JIRAs got 
raised. :/

> Queries on tables with remote HDFS paths fail in "encryption" checks.
> -
>
> Key: HIVE-14380
> URL: https://issues.apache.org/jira/browse/HIVE-14380
> Project: Hive
>  Issue Type: Bug
>  Components: Encryption
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
> Attachments: HIVE-14380.1.patch
>
>
> If a table has table/partition locations set to remote HDFS paths, querying 
> them will cause the following IAException:
> {noformat}
> 2016-07-26 01:16:27,471 ERROR parse.CalcitePlanner 
> (SemanticAnalyzer.java:getMetaData(1867)) - 
> org.apache.hadoop.hive.ql.metadata.HiveException: Unable to determine if 
> hdfs://foo.ygrid.yahoo.com:8020/projects/my_db/my_table is encrypted: 
> java.lang.IllegalArgumentException: Wrong FS: 
> hdfs://foo.ygrid.yahoo.com:8020/projects/my_db/my_table, expected: 
> hdfs://bar.ygrid.yahoo.com:8020
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.isPathEncrypted(SemanticAnalyzer.java:2204)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getStrongestEncryptedTablePath(SemanticAnalyzer.java:2274)
> ...
> {noformat}
> This is because of the following code in {{SessionState}}:
> {code:title=SessionState.java|borderStyle=solid}
>  public HadoopShims.HdfsEncryptionShim getHdfsEncryptionShim() throws 
> HiveException {
> if (hdfsEncryptionShim == null) {
>   try {
> FileSystem fs = FileSystem.get(sessionConf);
> if ("hdfs".equals(fs.getUri().getScheme())) {
>   hdfsEncryptionShim = 
> ShimLoader.getHadoopShims().createHdfsEncryptionShim(fs, sessionConf);
> } else {
>   LOG.debug("Could not get hdfsEncryptionShim, it is only applicable 
> to hdfs filesystem.");
> }
>   } catch (Exception e) {
> throw new HiveException(e);
>   }
> }
> return hdfsEncryptionShim;
>   }
> {code}
> When the {{FileSystem}} instance is created, using the {{sessionConf}} 
> implies that the current HDFS is going to be used. This call should instead 
> fetch the {{FileSystem}} instance corresponding to the path being checked.
> A fix is forthcoming...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14340) Add a new hook triggers before query compilation and after query execution

2016-07-29 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399847#comment-15399847
 ] 

Chao Sun commented on HIVE-14340:
-

{quote}
Query hook is a little ambiguous.
{quote}
Hmm... how about {{hive.query.lifetime.hooks}}? I'm open to suggestions. :)

{quote}
Since post execution hooks are already supported, does this mean we just need 
to add pre compilation hook? Isn't pre semantic analysis good for this purpose?
{quote}
Yeah, I've hesitated on this - not sure if its a good idea to add this new 
hook.. One reason is that I'm not sure how to get the query command for the 
post-exec case, and another is that the user needs to implement two interfaces 
and add multiple configurations.

> Add a new hook triggers before query compilation and after query execution
> --
>
> Key: HIVE-14340
> URL: https://issues.apache.org/jira/browse/HIVE-14340
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 2.2.0
>Reporter: Chao Sun
>Assignee: Chao Sun
> Attachments: HIVE-14340.0.patch
>
>
> In some cases we may need to have a hook that activates before a query 
> compilation and after its execution. For instance, dynamically generate a UDF 
> specifically for the running query and clean up the resource after the query 
> is done. The current hooks only covers pre & post semantic analysis, pre & 
> post query execution, which doesn't fit the requirement.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14303) CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to avoid NPE if ExecReducer.close is called twice.

2016-07-29 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399844#comment-15399844
 ] 

Chao Sun commented on HIVE-14303:
-

Going to commit this. [~zxu] are the test failures related?

> CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to 
> avoid NPE if ExecReducer.close is called twice.
> -
>
> Key: HIVE-14303
> URL: https://issues.apache.org/jira/browse/HIVE-14303
> Project: Hive
>  Issue Type: Bug
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.1.0
>
> Attachments: HIVE-14303.0.patch
>
>
> CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to 
> avoid NPE if ExecReducer.close is called twice. ExecReducer.close implements 
> Closeable interface and ExecReducer.close can be called multiple time. We saw 
> the following NPE which hide the real exception due to this bug.
> {code}
> Error: java.lang.RuntimeException: Hive Runtime Error while closing 
> operators: null
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:296)
> at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
> at 
> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
> at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:718)
> at 
> org.apache.hadoop.hive.ql.exec.JoinOperator.endGroup(JoinOperator.java:256)
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:284)
> ... 8 more
> {code}
> The code from ReduceTask.runOldReducer:
> {code}
>   reducer.close(); //line 453
>   reducer = null;
>   
>   out.close(reporter);
>   out = null;
> } finally {
>   IOUtils.cleanup(LOG, reducer);// line 459
>   closeQuietly(out, reporter);
> }
> {code}
> Based on the above stack trace and code, reducer.close() is called twice 
> because the exception happened when reducer.close() is called for the first 
> time at line 453, the code exit before reducer was set to null. 
> NullPointerException is triggered when reducer.close() is called for the 
> second time in IOUtils.cleanup at line 459. NullPointerException hide the 
> real exception which happened when reducer.close() is called for the first 
> time at line 453.
> The reason for NPE is:
> The first reducer.close called CommonJoinOperator.closeOp which clear 
> {{storage}}
> {code}
> Arrays.fill(storage, null);
> {code}
> the second reduce.close generated NPE due to null {{storage[alias]}} which is 
> set to null by first reducer.close.
> The following reducer log can give more proof:
> {code}
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: 0 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: 0 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: SKEWJOINFOLLOWUPJOBS:0
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 1 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 2 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 3 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: 4 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: FS[4]: records written - 
> 53466
> 2016-07-14 22:25:11,555 ERROR [main] ExecReducer: Hit error while closing 
> operators - failing tree
> 2016-07-14 22:25:11,649 WARN [main] org.apache.hadoop.mapred.YarnChild: 
> Exception running child : java.lang.RuntimeException: Hive Runtime Error 
> while closing operators: null
>   at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:296)
>   at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
>   at 
> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
>   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
>   at 

[jira] [Commented] (HIVE-14355) Schema evolution for ORC in llap is broken for int to string conversion

2016-07-29 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399840#comment-15399840
 ] 

Prasanth Jayachandran commented on HIVE-14355:
--

[~sershe] Can you take a look again plz?

> Schema evolution for ORC in llap is broken for int to string conversion
> ---
>
> Key: HIVE-14355
> URL: https://issues.apache.org/jira/browse/HIVE-14355
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.0, 2.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Critical
> Attachments: HIVE-14355-java-only.patch, HIVE-14355.1.patch, 
> HIVE-14355.2.java-only.patch, HIVE-14355.2.patch, 
> HIVE-14355.3.java-only.patch, HIVE-14355.3.patch
>
>
> When schema is evolved from any integer type to string then following 
> exceptions are thrown in LLAP (Works fine in Tez). I guess this should happen 
> even for other conversions.
> {code}
> hive> create table orc_integer(b bigint) stored as orc;
> hive> insert into orc_integer values(100);
> hive> select count(*) from orc_integer where b=100;
> OK
> 1
> hive> alter table orc_integer change column b b string;
> hive> select count(*) from orc_integer where b=100;
> // FAIL with following exception
> {code}
> {code:title=When vectorization is enabled}
> 2016-07-27T01:48:05,611  INFO [TezTaskRunner ()] 
> vector.VectorReduceSinkOperator: RECORDS_OUT_INTERMEDIATE_Map_1:0,
> 2016-07-27T01:48:05,611 ERROR [TezTaskRunner ()] tez.TezProcessor: 
> java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: 
> Hive Runtime Error while processing row
> at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:95)
> at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:70)
> at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:393)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:185)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:168)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at 
> org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:110)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing row
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:866)
> at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:86)
> ... 18 more
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.ql.exec.vector.LongColumnVector cannot be cast to 
> org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector
> at 
> org.apache.hadoop.hive.ql.exec.vector.expressions.gen.FilterStringGroupColEqualStringGroupScalarBase.evaluate(FilterStringGroupColEqualStringGroupScalarBase.java:42)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.process(VectorFilterOperator.java:110)
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:879)
> at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:130)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:774)
> ... 19 more
> {code}
> {code:title=When vectorization is disabled}
> 2016-07-27T01:52:43,328  INFO [TezTaskRunner 
> (1469608604787_0002_26_00_00_0)] exec.ReduceSinkOperator: Using tag = -1
> 2016-07-27T01:52:43,328  INFO [TezTaskRunner 
> 

[jira] [Updated] (HIVE-14366) Conversion of a Non-ACID table to an ACID table produces non-unique primary keys

2016-07-29 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-14366:
--
Priority: Blocker  (was: Critical)

> Conversion of a Non-ACID table to an ACID table produces non-unique primary 
> keys
> 
>
> Key: HIVE-14366
> URL: https://issues.apache.org/jira/browse/HIVE-14366
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Saket Saurabh
>Assignee: Saket Saurabh
>Priority: Blocker
> Attachments: HIVE-14366.01.patch, HIVE-14366.02.patch
>
>
> When a Non-ACID table is converted to an ACID table, the primary key 
> consisting of (original transaction id, bucket_id, row_id) is not generated 
> uniquely. Currently, the row_id is always set to 0 for most rows. This leads 
> to correctness issue for such tables.
> Quickest way to reproduce is to add the following unit test to 
> ql/src/test/org/apache/hadoop/hive/ql/TestTxnCommands2.java
> {code:title=ql/src/test/org/apache/hadoop/hive/ql/TestTxnCommands2.java|borderStyle=solid}
>   @Test
>   public void testOriginalReader() throws Exception {
> FileSystem fs = FileSystem.get(hiveConf);
> FileStatus[] status;
> // 1. Insert five rows to Non-ACID table.
> runStatementOnDriver("insert into " + Table.NONACIDORCTBL + "(a,b) 
> values(1,2),(3,4),(5,6),(7,8),(9,10)");
> // 2. Convert NONACIDORCTBL to ACID table.
> runStatementOnDriver("alter table " + Table.NONACIDORCTBL + " SET 
> TBLPROPERTIES ('transactional'='true')");
> // 3. Perform a major compaction.
> runStatementOnDriver("alter table "+ Table.NONACIDORCTBL + " compact 
> 'MAJOR'");
> runWorker(hiveConf);
> // 4. Perform a delete.
> runStatementOnDriver("delete from " + Table.NONACIDORCTBL + " where a = 
> 1");
> // 5. Now do a projection should have (3,4) (5,6),(7,8),(9,10) only since 
> (1,2) has been deleted.
> List rs = runStatementOnDriver("select a,b from " + 
> Table.NONACIDORCTBL + " order by a,b");
> int[][] resultData = new int[][] {{3,4}, {5,6}, {7,8}, {9,10}};
> Assert.assertEquals(stringifyValues(resultData), rs);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13991) Union All on view fail with no valid permission on underneath table

2016-07-29 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399830#comment-15399830
 ] 

Eugene Koifman commented on HIVE-13991:
---

back ported to branch-1 
https://github.com/apache/hive/commit/123f088a3a029c4b01db3e7cc0769a5836e1f6c6

> Union All on view fail with no valid permission on underneath table
> ---
>
> Key: HIVE-13991
> URL: https://issues.apache.org/jira/browse/HIVE-13991
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Yongzhi Chen
>Assignee: Yongzhi Chen
> Fix For: 1.3.0, 2.2.0, 2.1.1
>
> Attachments: HIVE-13991.1.patch, HIVE-13991.2.patch
>
>
> When sentry is enabled. 
> create view V as select * from T;
> When the user has read permission on view V, but does not have read 
> permission on table T,
> select * from V union all select * from V 
> failed with:
> {noformat}
> 0: jdbc:hive2://> select * from s07view union all select * from 
> s07view limit 1;
> Error: Error while compiling statement: FAILED: SemanticException No valid 
> privileges
>  Required privileges for this query: 
> Server=server1->Db=default->Table=sample_07->action=select; 
> (state=42000,code=4)
> {noformat} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13991) Union All on view fail with no valid permission on underneath table

2016-07-29 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-13991:
--
Fix Version/s: 1.3.0

> Union All on view fail with no valid permission on underneath table
> ---
>
> Key: HIVE-13991
> URL: https://issues.apache.org/jira/browse/HIVE-13991
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Yongzhi Chen
>Assignee: Yongzhi Chen
> Fix For: 1.3.0, 2.2.0, 2.1.1
>
> Attachments: HIVE-13991.1.patch, HIVE-13991.2.patch
>
>
> When sentry is enabled. 
> create view V as select * from T;
> When the user has read permission on view V, but does not have read 
> permission on table T,
> select * from V union all select * from V 
> failed with:
> {noformat}
> 0: jdbc:hive2://> select * from s07view union all select * from 
> s07view limit 1;
> Error: Error while compiling statement: FAILED: SemanticException No valid 
> privileges
>  Required privileges for this query: 
> Server=server1->Db=default->Table=sample_07->action=select; 
> (state=42000,code=4)
> {noformat} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14383) SparkClientImpl should pass principal and keytab to spark-submit instead of calling kinit explicitely

2016-07-29 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399826#comment-15399826
 ] 

Xuefu Zhang commented on HIVE-14383:


I think the old version of Spark doesn't do kinit for the client, and that's 
why kinit is called explicitly. If the bug is fixed in new version of Spark, 
this is certainly a welcome fix for Hive on Spark.

Other than that, code looks good to me. +1

> SparkClientImpl should pass principal and keytab to spark-submit instead of 
> calling kinit explicitely
> -
>
> Key: HIVE-14383
> URL: https://issues.apache.org/jira/browse/HIVE-14383
> Project: Hive
>  Issue Type: Improvement
>  Components: Spark
>Affects Versions: 2.1.0
>Reporter: Mubashir Kazia
>Assignee: Chaoyu Tang
> Attachments: HIVE-14383.patch
>
>
> In 
> https://github.com/apache/hive/blob/master/spark-client/src/main/java/org/apache/hive/spark/client/SparkClientImpl.java#L331,
>  kinit is manually called before spark-submit is called. This is problematic 
> if the spark session outlives the delegation token expiry period or the 
> kerberos ticket expiry period. A better way to handle this would be to pass 
> the keytab file location and principal name as parameters to the spark-submit 
> command when kerberos auth is in use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14383) SparkClientImpl should pass principal and keytab to spark-submit instead of calling kinit explicitely

2016-07-29 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-14383:
---
Status: Patch Available  (was: Open)

Did some manual tests in kerberized cluster and the patch works fine.
[~xuefuz], [~mkazia], [~csun] Could you review the patch? Thanks

> SparkClientImpl should pass principal and keytab to spark-submit instead of 
> calling kinit explicitely
> -
>
> Key: HIVE-14383
> URL: https://issues.apache.org/jira/browse/HIVE-14383
> Project: Hive
>  Issue Type: Improvement
>  Components: Spark
>Affects Versions: 2.1.0
>Reporter: Mubashir Kazia
>Assignee: Chaoyu Tang
> Attachments: HIVE-14383.patch
>
>
> In 
> https://github.com/apache/hive/blob/master/spark-client/src/main/java/org/apache/hive/spark/client/SparkClientImpl.java#L331,
>  kinit is manually called before spark-submit is called. This is problematic 
> if the spark session outlives the delegation token expiry period or the 
> kerberos ticket expiry period. A better way to handle this would be to pass 
> the keytab file location and principal name as parameters to the spark-submit 
> command when kerberos auth is in use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14376) Schema evolution tests takes a long time

2016-07-29 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399812#comment-15399812
 ] 

Prasanth Jayachandran commented on HIVE-14376:
--

Actually it gets worse there are 13 more tests for schema evolution on text. 
Replacing INSERT INTO queries is extremely laborious task. Most q files are 
>800 LOC. All of the select queries also have order by.

> Schema evolution tests takes a long time
> 
>
> Key: HIVE-14376
> URL: https://issues.apache.org/jira/browse/HIVE-14376
> Project: Hive
>  Issue Type: Bug
>  Components: Test
>Affects Versions: 2.2.0, 2.1.1
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Minor
>
> schema_evol_* tests (14 tests) takes 47 mins in tez, 40mins in TestCliDriver. 
> Same set of tests are added to llap as well in HIVE-14355 which will some 
> more time. Most tests uses INSERT into table which can be slow and is not 
> needed. Even some individual tests takes about 5 mins to run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13560) Adding Omid as connection manager for HBase Metastore

2016-07-29 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-13560:
--
Attachment: HIVE-13560.11.patch

> Adding Omid as connection manager for HBase Metastore
> -
>
> Key: HIVE-13560
> URL: https://issues.apache.org/jira/browse/HIVE-13560
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Metastore
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Attachments: HIVE-13560.1.patch, HIVE-13560.10.patch, 
> HIVE-13560.11.patch, HIVE-13560.2.patch, HIVE-13560.3.patch, 
> HIVE-13560.4.patch, HIVE-13560.5.patch, HIVE-13560.6.patch, 
> HIVE-13560.7.patch, HIVE-13560.8.patch, HIVE-13560.9.patch
>
>
> Adding Omid as a transaction manager to HBase Metastore. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14383) SparkClientImpl should pass principal and keytab to spark-submit instead of calling kinit explicitely

2016-07-29 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-14383:
---
Attachment: HIVE-14383.patch

> SparkClientImpl should pass principal and keytab to spark-submit instead of 
> calling kinit explicitely
> -
>
> Key: HIVE-14383
> URL: https://issues.apache.org/jira/browse/HIVE-14383
> Project: Hive
>  Issue Type: Improvement
>  Components: Spark
>Affects Versions: 2.1.0
>Reporter: Mubashir Kazia
>Assignee: Chaoyu Tang
> Attachments: HIVE-14383.patch
>
>
> In 
> https://github.com/apache/hive/blob/master/spark-client/src/main/java/org/apache/hive/spark/client/SparkClientImpl.java#L331,
>  kinit is manually called before spark-submit is called. This is problematic 
> if the spark session outlives the delegation token expiry period or the 
> kerberos ticket expiry period. A better way to handle this would be to pass 
> the keytab file location and principal name as parameters to the spark-submit 
> command when kerberos auth is in use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-14383) SparkClientImpl should pass principal and keytab to spark-submit instead of calling kinit explicitely

2016-07-29 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang reassigned HIVE-14383:
--

Assignee: Chaoyu Tang

> SparkClientImpl should pass principal and keytab to spark-submit instead of 
> calling kinit explicitely
> -
>
> Key: HIVE-14383
> URL: https://issues.apache.org/jira/browse/HIVE-14383
> Project: Hive
>  Issue Type: Improvement
>  Components: Spark
>Affects Versions: 2.1.0
>Reporter: Mubashir Kazia
>Assignee: Chaoyu Tang
>
> In 
> https://github.com/apache/hive/blob/master/spark-client/src/main/java/org/apache/hive/spark/client/SparkClientImpl.java#L331,
>  kinit is manually called before spark-submit is called. This is problematic 
> if the spark session outlives the delegation token expiry period or the 
> kerberos ticket expiry period. A better way to handle this would be to pass 
> the keytab file location and principal name as parameters to the spark-submit 
> command when kerberos auth is in use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14361) Empty method in TestClientCommandHookFactory

2016-07-29 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399794#comment-15399794
 ] 

Hive QA commented on HIVE-14361:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12820545/HIVE-14361.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 10380 tests 
executed
*Failed tests:*
{noformat}
TestMsgBusConnection - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_nullable_union
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.org.apache.hadoop.hive.cli.TestMiniTezCliDriver
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynpart_sort_optimization2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_explainuser_3
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_orc_merge7
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_union6
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_outer_join3
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_avro_non_nullable_union
org.apache.hive.spark.client.TestSparkClient.testJobSubmission
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/686/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/686/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-686/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 14 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12820545 - PreCommit-HIVE-MASTER-Build

> Empty method in TestClientCommandHookFactory
> 
>
> Key: HIVE-14361
> URL: https://issues.apache.org/jira/browse/HIVE-14361
> Project: Hive
>  Issue Type: Bug
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Trivial
> Attachments: HIVE-14361.patch
>
>
> Remove the empty method left in TestClientCommandHookFactory



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14340) Add a new hook triggers before query compilation and after query execution

2016-07-29 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399753#comment-15399753
 ] 

Jimmy Xiang commented on HIVE-14340:


Query hook is a little ambiguous.

Since post execution hooks are already supported, does this mean we just need 
to add pre compilation hook? Isn't pre semantic analysis good for this purpose?


> Add a new hook triggers before query compilation and after query execution
> --
>
> Key: HIVE-14340
> URL: https://issues.apache.org/jira/browse/HIVE-14340
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 2.2.0
>Reporter: Chao Sun
>Assignee: Chao Sun
> Attachments: HIVE-14340.0.patch
>
>
> In some cases we may need to have a hook that activates before a query 
> compilation and after its execution. For instance, dynamically generate a UDF 
> specifically for the running query and clean up the resource after the query 
> is done. The current hooks only covers pre & post semantic analysis, pre & 
> post query execution, which doesn't fit the requirement.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14374) BeeLine argument, and configuration handling cleanup

2016-07-29 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399734#comment-15399734
 ] 

Sahil Takiar commented on HIVE-14374:
-

A lot of the reflection code is also in {{org.apache.hive.beeline.Reflector}}. 
I agree this code requires some cleanup. I believe [~vihangk1] has brought up 
the issue that it makes the code a lot less readable.

[~pvary] are there a lot of scenarios where a configuration parameter should 
only be set via the command, but not in the configuration file (or vice versa)? 
If not I would say it may be simpler to just have one annotation 
{{ConfigurationKey}} that indicates this setter method should be use as a 
configuration key. The advantage here is that users don't need to track which 
configuration keys can be configured from which location (e.g. command line or 
properties file).

If we are going to invest time in cleaning up this code, it may be worth 
considering whether we should just use a standard Java Command Line Options 
Parser (commons-cli, jopt, jcommander, etc.). A simple pattern could be to just 
parse in the BeeLine configuration file, and then parse in the command line 
options. If a configuration key is set in both places, the command line option 
takes precedence.

> BeeLine argument, and configuration handling cleanup
> 
>
> Key: HIVE-14374
> URL: https://issues.apache.org/jira/browse/HIVE-14374
> Project: Hive
>  Issue Type: Improvement
>  Components: Beeline
>Affects Versions: 2.2.0
>Reporter: Peter Vary
>Assignee: Peter Vary
>
> BeeLine uses reflection, to set the BeeLineOpts attributes when parsing 
> command line arguments, and when loading the configuration file.
> This means, that creating a setXXX, getXXX method in BeeLineOpts is a 
> potential risk of exposing an attribute for the user unintentionally. There 
> is a possibility to exclude an attribute from saving the value in the 
> configuration file with the Ignore annotation. This does not restrict the 
> loading or command line setting of these parameters which means there are 
> many undocumented "features" as-is, like setting the lastConnectedUrl, 
> allowMultilineCommand, maxHeight, trimScripts, etc. from command line.
> This part of the code needs a little cleanup.
> I think we should make this exposure more explicit, and be able to 
> differentiate the configurable options depending on the source (command line, 
> and configuration file), so I propose to create a mechanism to tell 
> explicitly which BeeLineOpts attributes are settable by command line, and 
> configuration file, and every other attribute should be inaccessible by the 
> user of the beeline cli.
> One possible solution could be two annotations like these:
> - CommandLineOption - there could be a mandatory text parameter here, so the 
> developer had to provide the help text for it which could be displayed to the 
> user
> - ConfigurationFileOption - no text is required here
> Something like this:
> - This attribute could be provided by command line, and from a configuration 
> file too:
> {noformat}
> @CommandLineOption("automatically save preferences")
> @ConfigurationFileOption
> public void setAutosave(boolean autosave) {
>   this.autosave = autosave;
> }
> public void getAutosave() {
>   return this.autosave;
> }
> {noformat}
> - This attribute could be set through the configuration only
> {noformat}
> @ConfigurationFileOption
> public void setLastConnectedUrl(String lastConnectedUrl) {
>   this.lastConnectedUrl = lastConnectedUrl;

> }
> 

> public String getLastConnectedUrl()
> {

>   return lastConnectedUrl;
> 
}
> 
{noformat}
> - Attribute could be set through command line only - I think this is not too 
> relevant, but possible
> {noformat}
> @CommandLineOption("specific command line option")
> public void setSpecificCommandLineOption(String specificCommandLineOption) {
> 
  this.specificCommandLineOption = specificCommandLineOption;
> 
}
> 

> public String getSpecificCommandLineOption() {
> 
  return specificCommandLineOption;
> 
}
> 
{noformat}
> - Attribute could not be set
> {noformat}
> public static Env getEnv() {
> 
  return env;
> 
}
> 

public static void setEnv(Env envToUse) {
> 
  env = envToUse;
> 
}
> {noformat}
> Accouring to our previous conversations, I think you might be interested in: 
> [~spena], [~vihangk1], [~aihuaxu], [~ngangam], [~ychena], [~xuefuz]
> but anyone is welcome to discuss this.
> What do you think about the proposed solution?
> Any better ideas, or extensions?
> Thanks,
> Peter



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14381) Handle null value in WindowingTableFunction.WindowingIterator.next()

2016-07-29 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399732#comment-15399732
 ] 

Jesus Camacho Rodriguez commented on HIVE-14381:


+1, I was checking the code, and the change makes sense, although we should 
still call {{output.set(j, out);}} in every case, as stated above.

We make a similar conversion to NULL value in two other places; it seems like 
it was forgotten to do it in the {{next()}} method too.

> Handle null value in WindowingTableFunction.WindowingIterator.next()
> 
>
> Key: HIVE-14381
> URL: https://issues.apache.org/jira/browse/HIVE-14381
> Project: Hive
>  Issue Type: Bug
>  Components: PTF-Windowing
>Affects Versions: 1.3.0, 2.1.0, 2.2.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
> Attachments: HIVE-14381.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14343) HiveDriverRunHookContext's command is null in HS2 mode

2016-07-29 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HIVE-14343:

Attachment: HIVE-14343.1.patch

> HiveDriverRunHookContext's command is null in HS2 mode
> --
>
> Key: HIVE-14343
> URL: https://issues.apache.org/jira/browse/HIVE-14343
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 2.2.0
>Reporter: Chao Sun
>Assignee: Chao Sun
> Attachments: HIVE-14343.0.patch, HIVE-14343.1.patch
>
>
> Looking at the {{Driver#runInternal(String command, boolean 
> alreadyCompiled)}}:
> {code}
> HiveDriverRunHookContext hookContext = new 
> HiveDriverRunHookContextImpl(conf, command);
> // Get all the driver run hooks and pre-execute them.
> List driverRunHooks;
> {code}
> The context is initialized with the {{command}} passed in to the method. 
> However, this command is always null if {{alreadyCompiled}} is true, which is 
> the case for HS2 mode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14170) Beeline IncrementalRows should buffer rows and incrementally re-calculate width if TableOutputFormat is used

2016-07-29 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399700#comment-15399700
 ] 

Sahil Takiar commented on HIVE-14170:
-

Hey [~taoli-hwx], thanks for taking a look. I plan to set incremental to true 
by default, but in another JIRA HIVE-7224. Once the changes in this JIRA have 
been changed, I will merge HIVE-7224.

> Beeline IncrementalRows should buffer rows and incrementally re-calculate 
> width if TableOutputFormat is used
> 
>
> Key: HIVE-14170
> URL: https://issues.apache.org/jira/browse/HIVE-14170
> Project: Hive
>  Issue Type: Sub-task
>  Components: Beeline
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
> Attachments: HIVE-14170.1.patch, HIVE-14170.2.patch, 
> HIVE-14170.3.patch, HIVE-14170.4.patch
>
>
> If {{--incremental}} is specified in Beeline, rows are meant to be printed 
> out immediately. However, if {{TableOutputFormat}} is used with this option 
> the formatting can look really off.
> The reason is that {{IncrementalRows}} does not do a global calculation of 
> the optimal width size for {{TableOutputFormat}} (it can't because it only 
> sees one row at a time). The output of {{BufferedRows}} looks much better 
> because it can do this global calculation.
> If {{--incremental}} is used, and {{TableOutputFormat}} is used, the width 
> should be re-calculated every "x" rows ("x" can be configurable and by 
> default it can be 1000).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14117) HS2 UI: List of recent queries shows most recent query last

2016-07-29 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-14117:
--
Attachment: HIVE-14117.1.patch

Re-upload for QA run.

> HS2 UI: List of recent queries shows most recent query last
> ---
>
> Key: HIVE-14117
> URL: https://issues.apache.org/jira/browse/HIVE-14117
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-14117.1.patch, HIVE-14117.1.patch
>
>
> It's more useful to see the latest one first in your "last n queries" view.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14117) HS2 UI: List of recent queries shows most recent query last

2016-07-29 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-14117:
--
Status: Patch Available  (was: Open)

> HS2 UI: List of recent queries shows most recent query last
> ---
>
> Key: HIVE-14117
> URL: https://issues.apache.org/jira/browse/HIVE-14117
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-14117.1.patch, HIVE-14117.1.patch
>
>
> It's more useful to see the latest one first in your "last n queries" view.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14117) HS2 UI: List of recent queries shows most recent query last

2016-07-29 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-14117:
--
Status: Open  (was: Patch Available)

> HS2 UI: List of recent queries shows most recent query last
> ---
>
> Key: HIVE-14117
> URL: https://issues.apache.org/jira/browse/HIVE-14117
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-14117.1.patch
>
>
> It's more useful to see the latest one first in your "last n queries" view.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14366) Conversion of a Non-ACID table to an ACID table produces non-unique primary keys

2016-07-29 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-14366:
--
Assignee: Saket Saurabh  (was: Eugene Koifman)

> Conversion of a Non-ACID table to an ACID table produces non-unique primary 
> keys
> 
>
> Key: HIVE-14366
> URL: https://issues.apache.org/jira/browse/HIVE-14366
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Saket Saurabh
>Assignee: Saket Saurabh
>Priority: Critical
> Attachments: HIVE-14366.01.patch, HIVE-14366.02.patch
>
>
> When a Non-ACID table is converted to an ACID table, the primary key 
> consisting of (original transaction id, bucket_id, row_id) is not generated 
> uniquely. Currently, the row_id is always set to 0 for most rows. This leads 
> to correctness issue for such tables.
> Quickest way to reproduce is to add the following unit test to 
> ql/src/test/org/apache/hadoop/hive/ql/TestTxnCommands2.java
> {code:title=ql/src/test/org/apache/hadoop/hive/ql/TestTxnCommands2.java|borderStyle=solid}
>   @Test
>   public void testOriginalReader() throws Exception {
> FileSystem fs = FileSystem.get(hiveConf);
> FileStatus[] status;
> // 1. Insert five rows to Non-ACID table.
> runStatementOnDriver("insert into " + Table.NONACIDORCTBL + "(a,b) 
> values(1,2),(3,4),(5,6),(7,8),(9,10)");
> // 2. Convert NONACIDORCTBL to ACID table.
> runStatementOnDriver("alter table " + Table.NONACIDORCTBL + " SET 
> TBLPROPERTIES ('transactional'='true')");
> // 3. Perform a major compaction.
> runStatementOnDriver("alter table "+ Table.NONACIDORCTBL + " compact 
> 'MAJOR'");
> runWorker(hiveConf);
> // 4. Perform a delete.
> runStatementOnDriver("delete from " + Table.NONACIDORCTBL + " where a = 
> 1");
> // 5. Now do a projection should have (3,4) (5,6),(7,8),(9,10) only since 
> (1,2) has been deleted.
> List rs = runStatementOnDriver("select a,b from " + 
> Table.NONACIDORCTBL + " order by a,b");
> int[][] resultData = new int[][] {{3,4}, {5,6}, {7,8}, {9,10}};
> Assert.assertEquals(stringifyValues(resultData), rs);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14340) Add a new hook triggers before query compilation and after query execution

2016-07-29 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399630#comment-15399630
 ] 

Chao Sun commented on HIVE-14340:
-

I don't think the test failures are related. [~xuefuz], [~jxiang]: can you give 
a review when you have time? thanks.

> Add a new hook triggers before query compilation and after query execution
> --
>
> Key: HIVE-14340
> URL: https://issues.apache.org/jira/browse/HIVE-14340
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 2.2.0
>Reporter: Chao Sun
>Assignee: Chao Sun
> Attachments: HIVE-14340.0.patch
>
>
> In some cases we may need to have a hook that activates before a query 
> compilation and after its execution. For instance, dynamically generate a UDF 
> specifically for the running query and clean up the resource after the query 
> is done. The current hooks only covers pre & post semantic analysis, pre & 
> post query execution, which doesn't fit the requirement.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14146) Column comments with "\n" character "corrupts" table metadata

2016-07-29 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399610#comment-15399610
 ] 

Aihua Xu commented on HIVE-14146:
-

>From the test case result. seems we still have some comments not escaped. 
>[~pvary] Can you check? The one in code review seems to be correct.

> Column comments with "\n" character "corrupts" table metadata
> -
>
> Key: HIVE-14146
> URL: https://issues.apache.org/jira/browse/HIVE-14146
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 2.2.0
>Reporter: Peter Vary
>Assignee: Peter Vary
> Attachments: HIVE-14146.2.patch, HIVE-14146.3.patch, 
> HIVE-14146.4.patch, HIVE-14146.5.patch, HIVE-14146.6.patch, HIVE-14146.patch
>
>
> Create a table with the following(noting the \n in the COMMENT):
> {noformat}
> CREATE TABLE commtest(first_nm string COMMENT 'Indicates First name\nof an 
> individual’);
> {noformat}
> Describe shows that now the metadata is messed up:
> {noformat}
> beeline> describe commtest;
> +---++---+--+
> | col_name  | data_type  |comment|
> +---++---+--+
> | first_nm | string   | Indicates First name  |
> | of an individual  | NULL   | NULL  |
> +---++---+--+
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14380) Queries on tables with remote HDFS paths fail in "encryption" checks.

2016-07-29 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-14380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399600#comment-15399600
 ] 

Sergio Peña commented on HIVE-14380:


Cool [~mithun], thanks for taking a look at this. 
I reviewed the patch and it looks good. +1

Let's see what HiveQA says.

> Queries on tables with remote HDFS paths fail in "encryption" checks.
> -
>
> Key: HIVE-14380
> URL: https://issues.apache.org/jira/browse/HIVE-14380
> Project: Hive
>  Issue Type: Bug
>  Components: Encryption
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
> Attachments: HIVE-14380.1.patch
>
>
> If a table has table/partition locations set to remote HDFS paths, querying 
> them will cause the following IAException:
> {noformat}
> 2016-07-26 01:16:27,471 ERROR parse.CalcitePlanner 
> (SemanticAnalyzer.java:getMetaData(1867)) - 
> org.apache.hadoop.hive.ql.metadata.HiveException: Unable to determine if 
> hdfs://foo.ygrid.yahoo.com:8020/projects/my_db/my_table is encrypted: 
> java.lang.IllegalArgumentException: Wrong FS: 
> hdfs://foo.ygrid.yahoo.com:8020/projects/my_db/my_table, expected: 
> hdfs://bar.ygrid.yahoo.com:8020
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.isPathEncrypted(SemanticAnalyzer.java:2204)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getStrongestEncryptedTablePath(SemanticAnalyzer.java:2274)
> ...
> {noformat}
> This is because of the following code in {{SessionState}}:
> {code:title=SessionState.java|borderStyle=solid}
>  public HadoopShims.HdfsEncryptionShim getHdfsEncryptionShim() throws 
> HiveException {
> if (hdfsEncryptionShim == null) {
>   try {
> FileSystem fs = FileSystem.get(sessionConf);
> if ("hdfs".equals(fs.getUri().getScheme())) {
>   hdfsEncryptionShim = 
> ShimLoader.getHadoopShims().createHdfsEncryptionShim(fs, sessionConf);
> } else {
>   LOG.debug("Could not get hdfsEncryptionShim, it is only applicable 
> to hdfs filesystem.");
> }
>   } catch (Exception e) {
> throw new HiveException(e);
>   }
> }
> return hdfsEncryptionShim;
>   }
> {code}
> When the {{FileSystem}} instance is created, using the {{sessionConf}} 
> implies that the current HDFS is going to be used. This call should instead 
> fetch the {{FileSystem}} instance corresponding to the path being checked.
> A fix is forthcoming...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14150) Hive does not compile against Hadoop-2.9.0-SNAPSHOT

2016-07-29 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HIVE-14150:
-
Status: Patch Available  (was: Open)

> Hive does not compile against Hadoop-2.9.0-SNAPSHOT
> ---
>
> Key: HIVE-14150
> URL: https://issues.apache.org/jira/browse/HIVE-14150
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Siddharth Seth
>Assignee: Akira Ajisaka
>Priority: Critical
> Attachments: HIVE-14150.01.patch
>
>
> JvmPauseMonitor, JvmMetrics used in LLAP are private classes in Hadoop - and 
> have changed in 2.9.
> Hive has it's own version of JvmPauseMonitor. Need to see what can be done 
> with JvmMetrics



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14150) Hive does not compile against Hadoop-2.9.0-SNAPSHOT

2016-07-29 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HIVE-14150:
-
Attachment: HIVE-14150.01.patch

Attaching a patch to use the JvmPauseMonitor of Hive. In addition, I've created 
Hive's JvmMetrics and other related classes.
The JvmMetrics of Hadoop is {{@Private}}, so I'm thinking it's better to 
implement Hive's own JvmMetrics.

> Hive does not compile against Hadoop-2.9.0-SNAPSHOT
> ---
>
> Key: HIVE-14150
> URL: https://issues.apache.org/jira/browse/HIVE-14150
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Siddharth Seth
>Assignee: Akira Ajisaka
>Priority: Critical
> Attachments: HIVE-14150.01.patch
>
>
> JvmPauseMonitor, JvmMetrics used in LLAP are private classes in Hadoop - and 
> have changed in 2.9.
> Hive has it's own version of JvmPauseMonitor. Need to see what can be done 
> with JvmMetrics



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14329) fix flapping qtests - because of output string ordering

2016-07-29 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399565#comment-15399565
 ] 

Hive QA commented on HIVE-14329:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12820719/HIVE-14329.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 60 failed/errored test(s), 10366 tests 
executed
*Failed tests:*
{noformat}
TestMiniTezCliDriver-tez_self_join.q-filter_join_breaktask.q-vector_decimal_precision.q-and-12-more
 - did not produce a TEST-*.xml file
TestMsgBusConnection - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_nullable_union
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_columnStatsUpdateForStatsOptimizer_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_alter_list_bucketing_table1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_skewed_table1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_extrapolate_part_stats_full
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_extrapolate_part_stats_partial
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_extrapolate_part_stats_partial_ndv
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_infer_bucket_sort_grouping_operators
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_infer_bucket_sort_list_bucket
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_lb_fs_stats
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_query_multiskew_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_query_multiskew_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_query_multiskew_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_query_oneskew_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_query_oneskew_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_query_oneskew_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_list_bucket_dml_10
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_alter_merge_stats_orc
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_column_names_with_leading_and_trailing_spaces
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_ctas
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_deleteAnalyze
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynpart_sort_optimization2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_orc_analyze
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_nonvec_fetchwork_part
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_nonvec_fetchwork_table
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_nonvec_mapwork_part
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_nonvec_mapwork_table
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_vec_mapwork_part
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_vec_mapwork_table
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_stats
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_text_nonvec_mapwork_part
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_text_nonvec_mapwork_table
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_text_vec_mapwork_part
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_text_vec_mapwork_table
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_text_vecrow_mapwork_part

[jira] [Assigned] (HIVE-14150) Hive does not compile against Hadoop-2.9.0-SNAPSHOT

2016-07-29 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HIVE-14150:


Assignee: Akira Ajisaka

> Hive does not compile against Hadoop-2.9.0-SNAPSHOT
> ---
>
> Key: HIVE-14150
> URL: https://issues.apache.org/jira/browse/HIVE-14150
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Siddharth Seth
>Assignee: Akira Ajisaka
>Priority: Critical
>
> JvmPauseMonitor, JvmMetrics used in LLAP are private classes in Hadoop - and 
> have changed in 2.9.
> Hive has it's own version of JvmPauseMonitor. Need to see what can be done 
> with JvmMetrics



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14374) BeeLine argument, and configuration handling cleanup

2016-07-29 Thread Peter Vary (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399546#comment-15399546
 ] 

Peter Vary commented on HIVE-14374:
---

{noformat}
  public boolean set(String key, String value, boolean quiet) {
try {
  beeLine.getReflector().invoke(this, "set" + key, new Object[] {value});
  return true;
} catch (Exception e) {
  if (!quiet) {
beeLine.error(beeLine.loc("error-setting", new Object[] {key, e}));
  }
  return false;
}
}
{noformat}

> BeeLine argument, and configuration handling cleanup
> 
>
> Key: HIVE-14374
> URL: https://issues.apache.org/jira/browse/HIVE-14374
> Project: Hive
>  Issue Type: Improvement
>  Components: Beeline
>Affects Versions: 2.2.0
>Reporter: Peter Vary
>Assignee: Peter Vary
>
> BeeLine uses reflection, to set the BeeLineOpts attributes when parsing 
> command line arguments, and when loading the configuration file.
> This means, that creating a setXXX, getXXX method in BeeLineOpts is a 
> potential risk of exposing an attribute for the user unintentionally. There 
> is a possibility to exclude an attribute from saving the value in the 
> configuration file with the Ignore annotation. This does not restrict the 
> loading or command line setting of these parameters which means there are 
> many undocumented "features" as-is, like setting the lastConnectedUrl, 
> allowMultilineCommand, maxHeight, trimScripts, etc. from command line.
> This part of the code needs a little cleanup.
> I think we should make this exposure more explicit, and be able to 
> differentiate the configurable options depending on the source (command line, 
> and configuration file), so I propose to create a mechanism to tell 
> explicitly which BeeLineOpts attributes are settable by command line, and 
> configuration file, and every other attribute should be inaccessible by the 
> user of the beeline cli.
> One possible solution could be two annotations like these:
> - CommandLineOption - there could be a mandatory text parameter here, so the 
> developer had to provide the help text for it which could be displayed to the 
> user
> - ConfigurationFileOption - no text is required here
> Something like this:
> - This attribute could be provided by command line, and from a configuration 
> file too:
> {noformat}
> @CommandLineOption("automatically save preferences")
> @ConfigurationFileOption
> public void setAutosave(boolean autosave) {
>   this.autosave = autosave;
> }
> public void getAutosave() {
>   return this.autosave;
> }
> {noformat}
> - This attribute could be set through the configuration only
> {noformat}
> @ConfigurationFileOption
> public void setLastConnectedUrl(String lastConnectedUrl) {
>   this.lastConnectedUrl = lastConnectedUrl;

> }
> 

> public String getLastConnectedUrl()
> {

>   return lastConnectedUrl;
> 
}
> 
{noformat}
> - Attribute could be set through command line only - I think this is not too 
> relevant, but possible
> {noformat}
> @CommandLineOption("specific command line option")
> public void setSpecificCommandLineOption(String specificCommandLineOption) {
> 
  this.specificCommandLineOption = specificCommandLineOption;
> 
}
> 

> public String getSpecificCommandLineOption() {
> 
  return specificCommandLineOption;
> 
}
> 
{noformat}
> - Attribute could not be set
> {noformat}
> public static Env getEnv() {
> 
  return env;
> 
}
> 

public static void setEnv(Env envToUse) {
> 
  env = envToUse;
> 
}
> {noformat}
> Accouring to our previous conversations, I think you might be interested in: 
> [~spena], [~vihangk1], [~aihuaxu], [~ngangam], [~ychena], [~xuefuz]
> but anyone is welcome to discuss this.
> What do you think about the proposed solution?
> Any better ideas, or extensions?
> Thanks,
> Peter



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14367) Estimated size for constant nulls is 0

2016-07-29 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399548#comment-15399548
 ] 

Jesus Camacho Rodriguez commented on HIVE-14367:


+1 pending the change in RB and passing QA

> Estimated size for constant nulls is 0
> --
>
> Key: HIVE-14367
> URL: https://issues.apache.org/jira/browse/HIVE-14367
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Affects Versions: 2.0.0, 2.1.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-14367.1.patch
>
>
> since type is incorrectly assumed as void.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14374) BeeLine argument, and configuration handling cleanup

2016-07-29 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399545#comment-15399545
 ] 

Sergio Peña commented on HIVE-14374:


How is reflection used on BeeLineOpts? I was looking in the code, but I did not 
find it.

> BeeLine argument, and configuration handling cleanup
> 
>
> Key: HIVE-14374
> URL: https://issues.apache.org/jira/browse/HIVE-14374
> Project: Hive
>  Issue Type: Improvement
>  Components: Beeline
>Affects Versions: 2.2.0
>Reporter: Peter Vary
>Assignee: Peter Vary
>
> BeeLine uses reflection, to set the BeeLineOpts attributes when parsing 
> command line arguments, and when loading the configuration file.
> This means, that creating a setXXX, getXXX method in BeeLineOpts is a 
> potential risk of exposing an attribute for the user unintentionally. There 
> is a possibility to exclude an attribute from saving the value in the 
> configuration file with the Ignore annotation. This does not restrict the 
> loading or command line setting of these parameters which means there are 
> many undocumented "features" as-is, like setting the lastConnectedUrl, 
> allowMultilineCommand, maxHeight, trimScripts, etc. from command line.
> This part of the code needs a little cleanup.
> I think we should make this exposure more explicit, and be able to 
> differentiate the configurable options depending on the source (command line, 
> and configuration file), so I propose to create a mechanism to tell 
> explicitly which BeeLineOpts attributes are settable by command line, and 
> configuration file, and every other attribute should be inaccessible by the 
> user of the beeline cli.
> One possible solution could be two annotations like these:
> - CommandLineOption - there could be a mandatory text parameter here, so the 
> developer had to provide the help text for it which could be displayed to the 
> user
> - ConfigurationFileOption - no text is required here
> Something like this:
> - This attribute could be provided by command line, and from a configuration 
> file too:
> {noformat}
> @CommandLineOption("automatically save preferences")
> @ConfigurationFileOption
> public void setAutosave(boolean autosave) {
>   this.autosave = autosave;
> }
> public void getAutosave() {
>   return this.autosave;
> }
> {noformat}
> - This attribute could be set through the configuration only
> {noformat}
> @ConfigurationFileOption
> public void setLastConnectedUrl(String lastConnectedUrl) {
>   this.lastConnectedUrl = lastConnectedUrl;

> }
> 

> public String getLastConnectedUrl()
> {

>   return lastConnectedUrl;
> 
}
> 
{noformat}
> - Attribute could be set through command line only - I think this is not too 
> relevant, but possible
> {noformat}
> @CommandLineOption("specific command line option")
> public void setSpecificCommandLineOption(String specificCommandLineOption) {
> 
  this.specificCommandLineOption = specificCommandLineOption;
> 
}
> 

> public String getSpecificCommandLineOption() {
> 
  return specificCommandLineOption;
> 
}
> 
{noformat}
> - Attribute could not be set
> {noformat}
> public static Env getEnv() {
> 
  return env;
> 
}
> 

public static void setEnv(Env envToUse) {
> 
  env = envToUse;
> 
}
> {noformat}
> Accouring to our previous conversations, I think you might be interested in: 
> [~spena], [~vihangk1], [~aihuaxu], [~ngangam], [~ychena], [~xuefuz]
> but anyone is welcome to discuss this.
> What do you think about the proposed solution?
> Any better ideas, or extensions?
> Thanks,
> Peter



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14146) Column comments with "\n" character "corrupts" table metadata

2016-07-29 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-14146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399532#comment-15399532
 ] 

Sergio Peña commented on HIVE-14146:


Latest patch looks good.
+1

Wait for HiveQa before commit.

> Column comments with "\n" character "corrupts" table metadata
> -
>
> Key: HIVE-14146
> URL: https://issues.apache.org/jira/browse/HIVE-14146
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 2.2.0
>Reporter: Peter Vary
>Assignee: Peter Vary
> Attachments: HIVE-14146.2.patch, HIVE-14146.3.patch, 
> HIVE-14146.4.patch, HIVE-14146.5.patch, HIVE-14146.6.patch, HIVE-14146.patch
>
>
> Create a table with the following(noting the \n in the COMMENT):
> {noformat}
> CREATE TABLE commtest(first_nm string COMMENT 'Indicates First name\nof an 
> individual’);
> {noformat}
> Describe shows that now the metadata is messed up:
> {noformat}
> beeline> describe commtest;
> +---++---+--+
> | col_name  | data_type  |comment|
> +---++---+--+
> | first_nm | string   | Indicates First name  |
> | of an individual  | NULL   | NULL  |
> +---++---+--+
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14146) Column comments with "\n" character "corrupts" table metadata

2016-07-29 Thread Peter Vary (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-14146:
--
Attachment: HIVE-14146.6.patch

Latest patch, see review

> Column comments with "\n" character "corrupts" table metadata
> -
>
> Key: HIVE-14146
> URL: https://issues.apache.org/jira/browse/HIVE-14146
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 2.2.0
>Reporter: Peter Vary
>Assignee: Peter Vary
> Attachments: HIVE-14146.2.patch, HIVE-14146.3.patch, 
> HIVE-14146.4.patch, HIVE-14146.5.patch, HIVE-14146.6.patch, HIVE-14146.patch
>
>
> Create a table with the following(noting the \n in the COMMENT):
> {noformat}
> CREATE TABLE commtest(first_nm string COMMENT 'Indicates First name\nof an 
> individual’);
> {noformat}
> Describe shows that now the metadata is messed up:
> {noformat}
> beeline> describe commtest;
> +---++---+--+
> | col_name  | data_type  |comment|
> +---++---+--+
> | first_nm | string   | Indicates First name  |
> | of an individual  | NULL   | NULL  |
> +---++---+--+
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14294) HiveSchemaConverter for Parquet doesn't translate TINYINT and SMALLINT into proper Parquet types

2016-07-29 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-14294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-14294:
---
   Resolution: Fixed
Fix Version/s: 2.2.0
   Status: Resolved  (was: Patch Available)

Thanks [~gszadovszky] for your contribution.
I committed this to 2.2

> HiveSchemaConverter for Parquet doesn't translate TINYINT and SMALLINT into 
> proper Parquet types
> 
>
> Key: HIVE-14294
> URL: https://issues.apache.org/jira/browse/HIVE-14294
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1, 2.1.0
>Reporter: Cheng Lian
>Assignee: Gabor Szadovszky
> Fix For: 2.2.0
>
> Attachments: HIVE-14294.patch
>
>
> To reproduce this issue, run the following DDL:
> {code:sql}
> CREATE TABLE foo STORED AS PARQUET AS SELECT CAST(1 AS TINYINT);
> {code}
> And then check the schema of the written Parquet file:
> {noformat}
> $ parquet-schema $WAREHOUSE_PATH/foo/00_0
> message hive_schema {
>   optional int32 _c0;
> }
> {noformat}
> When translating Hive types into Parquet types, {{TINYINT}} and {{SMALLINT}} 
> should be translated into the {{int32 (INT_8)}} and {{int32 (INT_16)}} 
> respectively. However, {{HiveSchemaConverter}} converts all of {{TINYINT}}, 
> {{SMALLINT}}, and {{INT}} into Parquet {{int32}}. This causes problem when 
> accessing Parquet files generated by Hive in other systems since type 
> information gets wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12646) beeline and HIVE CLI do not parse ; in quote properly

2016-07-29 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-12646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399411#comment-15399411
 ] 

Sergio Peña commented on HIVE-12646:


It does not need documentation. This fixes a bug with ; only.
Thanks for asking.

> beeline and HIVE CLI do not parse ; in quote properly
> -
>
> Key: HIVE-12646
> URL: https://issues.apache.org/jira/browse/HIVE-12646
> Project: Hive
>  Issue Type: Bug
>  Components: CLI, Clients
>Reporter: Yongzhi Chen
>Assignee: Sahil Takiar
> Fix For: 2.2.0
>
> Attachments: HIVE-12646.2.patch, HIVE-12646.3.patch, 
> HIVE-12646.4.patch, HIVE-12646.5.patch, HIVE-12646.patch
>
>
> Beeline and Cli have to escape ; in the quote while most other shell scripts 
> need not. For example:
> in Beeline:
> {noformat}
> 0: jdbc:hive2://localhost:1> select ';' from tlb1;
> select ';' from tlb1;
> 15/12/10 10:45:26 DEBUG TSaslTransport: writing data length: 115
> 15/12/10 10:45:26 DEBUG TSaslTransport: CLIENT: reading data length: 3403
> Error: Error while compiling statement: FAILED: ParseException line 1:8 
> cannot recognize input near '' '
> {noformat}
> while in mysql shell:
> {noformat}
> mysql> SELECT CONCAT(';', 'foo') FROM test limit 3;
> ++
> | ;foo   |
> | ;foo   |
> | ;foo   |
> ++
> 3 rows in set (0.00 sec)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14359) Hive on Spark might fail in HS2 with LDAP authentication in a kerberized cluster

2016-07-29 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-14359:
---
   Resolution: Fixed
Fix Version/s: 2.1.1
   2.2.0
   Status: Resolved  (was: Patch Available)

Committed to 2.2.0 and 2.1.1. Thanks [~xuefuz] for review.

> Hive on Spark might fail in HS2 with LDAP authentication in a kerberized 
> cluster
> 
>
> Key: HIVE-14359
> URL: https://issues.apache.org/jira/browse/HIVE-14359
> Project: Hive
>  Issue Type: Bug
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Fix For: 2.2.0, 2.1.1
>
> Attachments: HIVE-14359.patch
>
>
> When HS2 is used as a gateway for the LDAP users to access and run the 
> queries in kerberized cluster, it's authentication mode is configured as LDAP 
> and at this time, HoS might fail by the same reason as HIVE-10594. 
> hive.server2.authentication is not a proper property to determine if a 
> cluster is kerberized, instead hadoop.security.authentication should be used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14359) Hive on Spark might fail in HS2 with LDAP authentication in a kerberized cluster

2016-07-29 Thread Chaoyu Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399317#comment-15399317
 ] 

Chaoyu Tang commented on HIVE-14359:


The failed tests are aged and not related to this patch.

> Hive on Spark might fail in HS2 with LDAP authentication in a kerberized 
> cluster
> 
>
> Key: HIVE-14359
> URL: https://issues.apache.org/jira/browse/HIVE-14359
> Project: Hive
>  Issue Type: Bug
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Attachments: HIVE-14359.patch
>
>
> When HS2 is used as a gateway for the LDAP users to access and run the 
> queries in kerberized cluster, it's authentication mode is configured as LDAP 
> and at this time, HoS might fail by the same reason as HIVE-10594. 
> hive.server2.authentication is not a proper property to determine if a 
> cluster is kerberized, instead hadoop.security.authentication should be used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14340) Add a new hook triggers before query compilation and after query execution

2016-07-29 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399295#comment-15399295
 ] 

Hive QA commented on HIVE-14340:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12820523/HIVE-14340.0.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 10381 tests 
executed
*Failed tests:*
{noformat}
TestMsgBusConnection - did not produce a TEST-*.xml file
TestQueryHook - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_nullable_union
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_avro_non_nullable_union
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testDelayedLocalityNodeCommErrorImmediateAllocation
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/683/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/683/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-683/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 9 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12820523 - PreCommit-HIVE-MASTER-Build

> Add a new hook triggers before query compilation and after query execution
> --
>
> Key: HIVE-14340
> URL: https://issues.apache.org/jira/browse/HIVE-14340
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 2.2.0
>Reporter: Chao Sun
>Assignee: Chao Sun
> Attachments: HIVE-14340.0.patch
>
>
> In some cases we may need to have a hook that activates before a query 
> compilation and after its execution. For instance, dynamically generate a UDF 
> specifically for the running query and clean up the resource after the query 
> is done. The current hooks only covers pre & post semantic analysis, pre & 
> post query execution, which doesn't fit the requirement.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14382) Improve the Functionality of Reverse FOR Statement

2016-07-29 Thread Dmitry Tolpeko (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399292#comment-15399292
 ] 

Dmitry Tolpeko commented on HIVE-14382:
---

Akash, right now this functionality is part of Hive HPL/SQL component, see 
docs: http://hplsql.org/for-range

> Improve the Functionality of Reverse  FOR Statement
> ---
>
> Key: HIVE-14382
> URL: https://issues.apache.org/jira/browse/HIVE-14382
> Project: Hive
>  Issue Type: Improvement
>Reporter: Akash Sethi
>Assignee: Akash Sethi
>Priority: Minor
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> According to SQL Standards, Reverse FOR Statement should be like this:-
> FOR index IN Optional[Reverse] Lower_Bound Upper_Bound
> but in hive it is like this :- 
> FOR index IN Optional[Reverse]  Upper_Bound Lower_Bound
> so i m just trying to improve the functionality for Reverse FOR Statement
> REFERNCES :- 
> https://docs.oracle.com/cloud/latest/db112/LNPLS/for_loop_statement.htm#LNPLS1536



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HIVE-14382) Improve the Functionality of Reverse FOR Statement

2016-07-29 Thread Akash Sethi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-14382 started by Akash Sethi.
--
> Improve the Functionality of Reverse  FOR Statement
> ---
>
> Key: HIVE-14382
> URL: https://issues.apache.org/jira/browse/HIVE-14382
> Project: Hive
>  Issue Type: Improvement
>Reporter: Akash Sethi
>Assignee: Akash Sethi
>Priority: Minor
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> According to SQL Standards, Reverse FOR Statement should be like this:-
> FOR index IN Optional[Reverse] Lower_Bound Upper_Bound
> but in hive it is like this :- 
> FOR index IN Optional[Reverse]  Upper_Bound Lower_Bound
> so i m just trying to improve the functionality for Reverse FOR Statement
> REFERNCES :- 
> https://docs.oracle.com/cloud/latest/db112/LNPLS/for_loop_statement.htm#LNPLS1536



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work stopped] (HIVE-14382) Improve the Functionality of Reverse FOR Statement

2016-07-29 Thread Akash Sethi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-14382 stopped by Akash Sethi.
--
> Improve the Functionality of Reverse  FOR Statement
> ---
>
> Key: HIVE-14382
> URL: https://issues.apache.org/jira/browse/HIVE-14382
> Project: Hive
>  Issue Type: Improvement
>Reporter: Akash Sethi
>Assignee: Akash Sethi
>Priority: Minor
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> According to SQL Standards, Reverse FOR Statement should be like this:-
> FOR index IN Optional[Reverse] Lower_Bound Upper_Bound
> but in hive it is like this :- 
> FOR index IN Optional[Reverse]  Upper_Bound Lower_Bound
> so i m just trying to improve the functionality for Reverse FOR Statement
> REFERNCES :- 
> https://docs.oracle.com/cloud/latest/db112/LNPLS/for_loop_statement.htm#LNPLS1536



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14335) TaskDisplay's return value is not getting deserialized properly

2016-07-29 Thread Amareshwari Sriramadasu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399185#comment-15399185
 ] 

Amareshwari Sriramadasu commented on HIVE-14335:


If there are no objections, I would like to commit to branch-2.1 

cc [~szehon] [~jcamachorodriguez]

> TaskDisplay's return value is not getting deserialized properly
> ---
>
> Key: HIVE-14335
> URL: https://issues.apache.org/jira/browse/HIVE-14335
> Project: Hive
>  Issue Type: Bug
>Reporter: Rajat Khandelwal
>Assignee: Rajat Khandelwal
> Fix For: 2.2.0
>
> Attachments: HIVE-14335.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12954) NPE with str_to_map on null strings

2016-07-29 Thread Marta Kuczora (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marta Kuczora updated HIVE-12954:
-
Attachment: HIVE-12954.2.patch

Fixed the review findings in the second patch:
- removed additional spaces
- added the @Test annotations to the unit tests

> NPE with str_to_map on null strings
> ---
>
> Key: HIVE-12954
> URL: https://issues.apache.org/jira/browse/HIVE-12954
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0, 1.0.0
>Reporter: Charles Pritchard
>Assignee: Marta Kuczora
> Attachments: HIVE-12954.2.patch, HIVE-12954.patch
>
>
> Running str_to_map on a null string will return a NullPointerException.
> Workaround is to use coalesce.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12954) NPE with str_to_map on null strings

2016-07-29 Thread Marta Kuczora (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marta Kuczora updated HIVE-12954:
-
Status: Patch Available  (was: In Progress)

> NPE with str_to_map on null strings
> ---
>
> Key: HIVE-12954
> URL: https://issues.apache.org/jira/browse/HIVE-12954
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.0.0, 0.14.0
>Reporter: Charles Pritchard
>Assignee: Marta Kuczora
> Attachments: HIVE-12954.2.patch, HIVE-12954.patch
>
>
> Running str_to_map on a null string will return a NullPointerException.
> Workaround is to use coalesce.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12954) NPE with str_to_map on null strings

2016-07-29 Thread Marta Kuczora (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marta Kuczora updated HIVE-12954:
-
Status: In Progress  (was: Patch Available)

> NPE with str_to_map on null strings
> ---
>
> Key: HIVE-12954
> URL: https://issues.apache.org/jira/browse/HIVE-12954
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.0.0, 0.14.0
>Reporter: Charles Pritchard
>Assignee: Marta Kuczora
> Attachments: HIVE-12954.patch
>
>
> Running str_to_map on a null string will return a NullPointerException.
> Workaround is to use coalesce.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12954) NPE with str_to_map on null strings

2016-07-29 Thread Marta Kuczora (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marta Kuczora updated HIVE-12954:
-
Attachment: (was: HIVE-12954.2.patch)

> NPE with str_to_map on null strings
> ---
>
> Key: HIVE-12954
> URL: https://issues.apache.org/jira/browse/HIVE-12954
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0, 1.0.0
>Reporter: Charles Pritchard
>Assignee: Marta Kuczora
> Attachments: HIVE-12954.patch
>
>
> Running str_to_map on a null string will return a NullPointerException.
> Workaround is to use coalesce.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12954) NPE with str_to_map on null strings

2016-07-29 Thread Marta Kuczora (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marta Kuczora updated HIVE-12954:
-
Attachment: HIVE-12954.2.patch

> NPE with str_to_map on null strings
> ---
>
> Key: HIVE-12954
> URL: https://issues.apache.org/jira/browse/HIVE-12954
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0, 1.0.0
>Reporter: Charles Pritchard
>Assignee: Marta Kuczora
> Attachments: HIVE-12954.2.patch, HIVE-12954.patch
>
>
> Running str_to_map on a null string will return a NullPointerException.
> Workaround is to use coalesce.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14341) Altered skewed location is not respected for list bucketing

2016-07-29 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399119#comment-15399119
 ] 

Hive QA commented on HIVE-14341:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12820522/HIVE-14341.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 28 failed/errored test(s), 10348 tests 
executed
*Failed tests:*
{noformat}
TestMiniTezCliDriver-script_pipe.q-orc_ppd_schema_evol_2a.q-join1.q-and-12-more 
- did not produce a TEST-*.xml file
TestMiniTezCliDriver-vector_decimal_aggregate.q-vector_decimal_10_0.q-orc_vectorization_ppd.q-and-12-more
 - did not produce a TEST-*.xml file
TestMsgBusConnection - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_nullable_union
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_infer_bucket_sort_list_bucket
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_lb_fs_stats
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_query_multiskew_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_query_multiskew_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_query_multiskew_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_list_bucket_dml_10
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_list_bucket_dml_10
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_avro_non_nullable_union
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_list_bucket_dml_2
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/682/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/682/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-682/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 28 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12820522 - PreCommit-HIVE-MASTER-Build

> Altered skewed location is not respected for list bucketing
> ---
>
> Key: HIVE-14341
> URL: https://issues.apache.org/jira/browse/HIVE-14341
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 2.0.1
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-14341.1.patch
>
>
> CREATE TABLE list_bucket_single (key STRING, value STRING)
>   SKEWED BY (key) ON (1,5,6) STORED AS DIRECTORIES;
> alter table list_bucket_single set skewed location 
> (''1"="/user/hive/warehouse/hdfs_skewed/new1");
> While when you insert a row to key 1, the location falls back to the default 
> one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14336) Make usage of VectorUDFAdaptor configurable

2016-07-29 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399027#comment-15399027
 ] 

Lefty Leverenz commented on HIVE-14336:
---

Doc note:  Aarrgh, typos in the parameter description -- "cooresponding" should 
be "corresponding" and "choosen" should be "chosen" (sigh).  Anyway, this adds 
the configuration parameter *hive.vectorized.adaptor.usage.mode* to 
HiveConf.java so it will need to be documented in the wiki for release 2.2.0.  
If the patch is applied to any other branches, please fix the typos.

* [Configuration Properties -- Vectorization | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-Vectorization]

Adding a TODOC2.2 label.

> Make usage of VectorUDFAdaptor configurable
> ---
>
> Key: HIVE-14336
> URL: https://issues.apache.org/jira/browse/HIVE-14336
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 2.2.0
>
> Attachments: HIVE-14336.01.patch
>
>
> Add a Hive configuration variable:
> {code}
> hive.vectorized.adaptor.usage.mode = {none, chosen, all}
> {code}
> for configuring whether to attempt vectorization using the VectorUDFAdaptor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


  1   2   >