[jira] [Resolved] (HIVE-13935) Changing char column of textfile table to string/varchar leaves white space.

2016-06-02 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline resolved HIVE-13935.
-
Resolution: Won't Fix

> Changing char column of textfile table to string/varchar leaves white space.
> 
>
> Key: HIVE-13935
> URL: https://issues.apache.org/jira/browse/HIVE-13935
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
>Reporter: Takahiko Saito
>
> {noformat}
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> create table test_text (c 
> char(16)) stored as textfile;
> No rows affected (0.091 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> insert into table test_text 
> values ('horton works ');
> INFO  : Session is already open
> INFO  : Dag name: insert into table test_text values ('ho...')(Stage-1)
> INFO  :
> INFO  : Status: Running (Executing on YARN cluster with App id 
> application_1464222003837_0399)
> INFO  : Map 1: 0/1
> INFO  : Map 1: 0(+1)/1
> INFO  : Map 1: 1/1
> INFO  : Loading data to table default.test_text from 
> hdfs://os-r6-ifsmes-hiveserver2-11-5.openstacklocal:8020/apps/hive/warehouse/test_text/.hive-staging_hive_2016-05-26_17-45-29_669_2888061873550824337-1/-ext-1
> INFO  : Table default.test_text stats: [numFiles=1, numRows=1, totalSize=17, 
> rawDataSize=16]
> No rows affected (6.849 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> select * from test_text;
> +---+--+
> |test_text.c|
> +---+--+
> | horton works  |
> +---+--+
> 1 row selected (0.098 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> describe test_text;
> +---++--+--+
> | col_name  | data_type  | comment  |
> +---++--+--+
> | c | char(16)   |  |
> +---++--+--+
> 1 row selected (0.127 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> alter table test_text change 
> column c c string;
> No rows affected (0.145 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> describe test_text;
> +---++--+--+
> | col_name  | data_type  | comment  |
> +---++--+--+
> | c | string |  |
> +---++--+--+
> 1 row selected (0.127 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> select * from test_text;
> +---+--+
> |test_text.c|
> +---+--+
> | horton works  |
> +---+--+
> 1 row selected (0.066 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13935) Changing char column of textfile table to string/varchar leaves white space.

2016-06-02 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313593#comment-15313593
 ] 

Matt McCline commented on HIVE-13935:
-

Well, well.

You might be amazed to know this issue *cannot be fixed*!

It only happens for non-partitioned tables.  With a partitioned table the original 
schema is preserved as the partition schema, but a plain table keeps no such 
record.  So, there is no way to know that the original table was written as CHAR, 
and thus no way to know that the STRING value needs its trailing white space 
trimmed.  And, we cannot indiscriminately trim all STRING fields.
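
For anyone hitting this, the practical workaround is to strip the CHAR padding explicitly, either per query or once by rewriting the data after the type change. A minimal JDBC sketch against the test_text table from the report, assuming a reachable HiveServer2 (the connection URL and credentials below are placeholders):

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TrimCharPadding {
  public static void main(String[] args) throws Exception {
    // Placeholder URL and credentials; point this at your own HiveServer2.
    try (Connection conn = DriverManager.getConnection(
             "jdbc:hive2://localhost:10000/default", "hive", "");
         Statement stmt = conn.createStatement()) {
      // The stored text still carries the CHAR(16) padding, so trim it per query...
      try (ResultSet rs = stmt.executeQuery("SELECT rtrim(c) FROM test_text")) {
        while (rs.next()) {
          System.out.println("[" + rs.getString(1) + "]"); // brackets make trailing spaces visible
        }
      }
      // ...or rewrite the data once so the padding is gone from the files as well.
      stmt.execute("INSERT OVERWRITE TABLE test_text SELECT rtrim(c) FROM test_text");
    }
  }
}
{code}

Trimming on read leaves the files untouched; the INSERT OVERWRITE rewrites the table so the padding is gone for good.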

> Changing char column of textfile table to string/varchar leaves white space.
> 
>
> Key: HIVE-13935
> URL: https://issues.apache.org/jira/browse/HIVE-13935
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
>Reporter: Takahiko Saito
>
> {noformat}
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> create table test_text (c 
> char(16)) stored as textfile;
> No rows affected (0.091 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> insert into table test_text 
> values ('horton works ');
> INFO  : Session is already open
> INFO  : Dag name: insert into table test_text values ('ho...')(Stage-1)
> INFO  :
> INFO  : Status: Running (Executing on YARN cluster with App id 
> application_1464222003837_0399)
> INFO  : Map 1: 0/1
> INFO  : Map 1: 0(+1)/1
> INFO  : Map 1: 1/1
> INFO  : Loading data to table default.test_text from 
> hdfs://os-r6-ifsmes-hiveserver2-11-5.openstacklocal:8020/apps/hive/warehouse/test_text/.hive-staging_hive_2016-05-26_17-45-29_669_2888061873550824337-1/-ext-1
> INFO  : Table default.test_text stats: [numFiles=1, numRows=1, totalSize=17, 
> rawDataSize=16]
> No rows affected (6.849 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> select * from test_text;
> +---+--+
> |test_text.c|
> +---+--+
> | horton works  |
> +---+--+
> 1 row selected (0.098 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> describe test_text;
> +---++--+--+
> | col_name  | data_type  | comment  |
> +---++--+--+
> | c | char(16)   |  |
> +---++--+--+
> 1 row selected (0.127 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> alter table test_text change 
> column c c string;
> No rows affected (0.145 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> describe test_text;
> +---++--+--+
> | col_name  | data_type  | comment  |
> +---++--+--+
> | c | string |  |
> +---++--+--+
> 1 row selected (0.127 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> select * from test_text;
> +---+--+
> |test_text.c|
> +---+--+
> | horton works  |
> +---+--+
> 1 row selected (0.066 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13882) When hive.server2.async.exec.async.compile is turned on, from JDBC we will get "The query did not generate a result set"

2016-06-02 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313586#comment-15313586
 ] 

Vaibhav Gumashta commented on HIVE-13882:
-

Reverted from 2.1. Will commit again once itest build issue is fixed.

> When hive.server2.async.exec.async.compile is turned on, from JDBC we will 
> get "The query did not generate a result set" 
> -
>
> Key: HIVE-13882
> URL: https://issues.apache.org/jira/browse/HIVE-13882
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 2.1.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-13882.1.patch, HIVE-13882.2.patch
>
>
>  The following would fail with  "The query did not generate a result set"
> stmt.execute("SET hive.driver.parallel.compilation=true");
> stmt.execute("SET hive.server2.async.exec.async.compile=true");
> ResultSet res =  stmt.executeQuery("SELECT * FROM " + tableName);
> res.next();



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13882) When hive.server2.async.exec.async.compile is turned on, from JDBC we will get "The query did not generate a result set"

2016-06-02 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313583#comment-15313583
 ] 

Vaibhav Gumashta commented on HIVE-13882:
-

I'll revert it from 2.1 since it's due to be released soon. [~aihuaxu], it looks 
like there is a build issue on master that [~ekoifman] is pointing to. Thanks.

> When hive.server2.async.exec.async.compile is turned on, from JDBC we will 
> get "The query did not generate a result set" 
> -
>
> Key: HIVE-13882
> URL: https://issues.apache.org/jira/browse/HIVE-13882
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 2.1.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-13882.1.patch, HIVE-13882.2.patch
>
>
>  The following would fail with  "The query did not generate a result set"
> stmt.execute("SET hive.driver.parallel.compilation=true");
> stmt.execute("SET hive.server2.async.exec.async.compile=true");
> ResultSet res =  stmt.executeQuery("SELECT * FROM " + tableName);
> res.next();



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13862) org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getNumPartitionsViaSqlFilter falls back to ORM

2016-06-02 Thread Amareshwari Sriramadasu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amareshwari Sriramadasu updated HIVE-13862:
---
Target Version/s: 2.1.1  (was: 2.1.0)
   Fix Version/s: (was: 2.1.0)
  2.1.1

I see the release-2.1.0-rc1 tag has already been created. [~jcamachorodriguez], I 
changed the fix version to 2.1.1 - please change it if there is another RC for 
2.1.0. Thanks.

> org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getNumPartitionsViaSqlFilter
>  falls back to ORM 
> ---
>
> Key: HIVE-13862
> URL: https://issues.apache.org/jira/browse/HIVE-13862
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Amareshwari Sriramadasu
>Assignee: Rajat Khandelwal
> Fix For: 2.1.1
>
> Attachments: HIVE-13862.1.patch, HIVE-13862.patch
>
>
> We are seeing following exception and calls fall back to ORM which make it 
> costly :
> {noformat}
>  WARN  org.apache.hadoop.hive.metastore.ObjectStore - Direct SQL failed, 
> falling back to ORM
> java.lang.ClassCastException: 
> org.datanucleus.store.rdbms.query.ForwardQueryResult cannot be cast to 
> java.lang.Number
> at 
> org.apache.hadoop.hive.metastore.MetaStoreDirectSql.extractSqlInt(MetaStoreDirectSql.java:892)
>  ~[hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getNumPartitionsViaSqlFilterInternal(MetaStoreDirectSql.java:855)
>  ~[hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getNumPartitionsViaSqlFilter(MetaStoreDirectSql.java:405)
>  ~[hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore$5.getSqlResult(ObjectStore.java:2763)
>  ~[hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore$5.getSqlResult(ObjectStore.java:2755)
>  ~[hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2606)
>  ~[hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.getNumPartitionsByFilterInternal(ObjectStore.java:2770)
>  [hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.getNumPartitionsByFilter(ObjectStore.java:2746)
>  [hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT]
> {noformat}
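
The ClassCastException above comes from treating the direct-SQL result as if it were already a Number, while DataNucleus can hand back a list-like wrapper (ForwardQueryResult). A hedged, standalone sketch of the kind of defensive unwrapping the stack trace points at - illustrative only, not the HIVE-13862 patch, and extractSqlIntDefensively is a made-up helper name:

{code}
import java.util.Collections;
import java.util.List;

public final class SqlResultUtil {
  private SqlResultUtil() {}

  /** Hypothetical helper: unwrap a direct-SQL scalar that may arrive either as a
   *  Number or as a single-element query-result list before casting it. */
  static int extractSqlIntDefensively(Object value) {
    if (value instanceof List) {
      List<?> result = (List<?>) value;   // e.g. a DataNucleus ForwardQueryResult
      if (result.isEmpty()) {
        throw new IllegalStateException("Empty result for scalar SQL query");
      }
      value = result.get(0);              // take the single row's value
    }
    if (value instanceof Number) {
      return ((Number) value).intValue();
    }
    throw new ClassCastException("Expected a numeric result but got: "
        + (value == null ? "null" : value.getClass().getName()));
  }

  public static void main(String[] args) {
    System.out.println(extractSqlIntDefensively(Collections.singletonList(42L))); // 42
    System.out.println(extractSqlIntDefensively(7));                              // 7
  }
}
{code}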



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13866) flatten callstack for directSQL errors

2016-06-02 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313571#comment-15313571
 ] 

Ashutosh Chauhan commented on HIVE-13866:
-

Can you paste the before & after log messages for comparison?

> flatten callstack for directSQL errors
> --
>
> Key: HIVE-13866
> URL: https://issues.apache.org/jira/browse/HIVE-13866
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13866.patch
>
>
> These errors look like final errors and confuse people. The callstack may be 
> useful if it's some datanucleus/db issue, but it needs to be flattened and 
> logged with a warning that this is not a final query error and that there's a 
> fallback
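
As a rough illustration of what flattening could look like (not the attached patch; the class and method names below are invented for the sketch), the chain of causes can be collapsed into a single WARN line that also states the fallback explicitly:

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class DirectSqlErrorLogging {
  private static final Logger LOG = LoggerFactory.getLogger(DirectSqlErrorLogging.class);

  /** Collapse a Throwable chain into one line: "Msg1; caused by: Msg2; ...". */
  static String flatten(Throwable t) {
    StringBuilder sb = new StringBuilder();
    for (Throwable cur = t; cur != null; cur = cur.getCause()) {
      if (sb.length() > 0) sb.append("; caused by: ");
      sb.append(cur.getClass().getSimpleName()).append(": ").append(cur.getMessage());
    }
    return sb.toString();
  }

  static void warnAndFallBack(Throwable t) {
    // One WARN line instead of a full stack, with an explicit note that this is not fatal.
    LOG.warn("Direct SQL failed, falling back to ORM (not a final query error): {}", flatten(t));
  }

  public static void main(String[] args) {
    warnAndFallBack(new RuntimeException("direct SQL query failed",
        new IllegalStateException("ForwardQueryResult cannot be cast to java.lang.Number")));
  }
}
{code}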



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13862) org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getNumPartitionsViaSqlFilter falls back to ORM

2016-06-02 Thread Amareshwari Sriramadasu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amareshwari Sriramadasu updated HIVE-13862:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 2.1.0
Target Version/s: 2.1.0  (was: 2.2.0)
  Status: Resolved  (was: Patch Available)

Committed. Thanks [~prongs].

[~jcamachorodriguez], I have merged this into master and branch-2.1. Please 
include it if you are putting up one more RC; if not, please change the fix 
version to 2.1.1. Thanks!

> org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getNumPartitionsViaSqlFilter
>  falls back to ORM 
> ---
>
> Key: HIVE-13862
> URL: https://issues.apache.org/jira/browse/HIVE-13862
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Amareshwari Sriramadasu
>Assignee: Rajat Khandelwal
> Fix For: 2.1.0
>
> Attachments: HIVE-13862.1.patch, HIVE-13862.patch
>
>
> We are seeing following exception and calls fall back to ORM which make it 
> costly :
> {noformat}
>  WARN  org.apache.hadoop.hive.metastore.ObjectStore - Direct SQL failed, 
> falling back to ORM
> java.lang.ClassCastException: 
> org.datanucleus.store.rdbms.query.ForwardQueryResult cannot be cast to 
> java.lang.Number
> at 
> org.apache.hadoop.hive.metastore.MetaStoreDirectSql.extractSqlInt(MetaStoreDirectSql.java:892)
>  ~[hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getNumPartitionsViaSqlFilterInternal(MetaStoreDirectSql.java:855)
>  ~[hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getNumPartitionsViaSqlFilter(MetaStoreDirectSql.java:405)
>  ~[hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore$5.getSqlResult(ObjectStore.java:2763)
>  ~[hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore$5.getSqlResult(ObjectStore.java:2755)
>  ~[hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2606)
>  ~[hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.getNumPartitionsByFilterInternal(ObjectStore.java:2770)
>  [hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT]
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.getNumPartitionsByFilter(ObjectStore.java:2746)
>  [hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13905) optimize ColumnStatsTask::constructColumnStatsFromPackedRows to have lesser number of getTable calls

2016-06-02 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-13905:
---
Affects Version/s: 2.0.0

> optimize ColumnStatsTask::constructColumnStatsFromPackedRows to have lesser 
> number of getTable calls
> 
>
> Key: HIVE-13905
> URL: https://issues.apache.org/jira/browse/HIVE-13905
> Project: Hive
>  Issue Type: Sub-task
>  Components: Query Planning
>Affects Versions: 2.0.0
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Fix For: 2.1.0
>
> Attachments: HIVE-13905.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13905) optimize ColumnStatsTask::constructColumnStatsFromPackedRows to have lesser number of getTable calls

2016-06-02 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-13905:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> optimize ColumnStatsTask::constructColumnStatsFromPackedRows to have lesser 
> number of getTable calls
> 
>
> Key: HIVE-13905
> URL: https://issues.apache.org/jira/browse/HIVE-13905
> Project: Hive
>  Issue Type: Sub-task
>  Components: Query Planning
>Affects Versions: 2.0.0
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-13905.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13905) optimize ColumnStatsTask::constructColumnStatsFromPackedRows to have lesser number of getTable calls

2016-06-02 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-13905:

Fix Version/s: (was: 2.2.0)
   2.1.0

> optimize ColumnStatsTask::constructColumnStatsFromPackedRows to have lesser 
> number of getTable calls
> 
>
> Key: HIVE-13905
> URL: https://issues.apache.org/jira/browse/HIVE-13905
> Project: Hive
>  Issue Type: Sub-task
>  Components: Query Planning
>Affects Versions: 2.0.0
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Fix For: 2.1.0
>
> Attachments: HIVE-13905.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13905) optimize ColumnStatsTask::constructColumnStatsFromPackedRows to have lesser number of getTable calls

2016-06-02 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-13905:
---
Fix Version/s: 2.2.0

> optimize ColumnStatsTask::constructColumnStatsFromPackedRows to have lesser 
> number of getTable calls
> 
>
> Key: HIVE-13905
> URL: https://issues.apache.org/jira/browse/HIVE-13905
> Project: Hive
>  Issue Type: Sub-task
>  Components: Query Planning
>Affects Versions: 2.0.0
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Fix For: 2.1.0
>
> Attachments: HIVE-13905.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13905) optimize ColumnStatsTask::constructColumnStatsFromPackedRows to have lesser number of getTable calls

2016-06-02 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313568#comment-15313568
 ] 

Pengcheng Xiong commented on HIVE-13905:


Failed tests are unrelated. Pushed to master. Thanks [~rajesh.balamohan] for 
the patch!

> optimize ColumnStatsTask::constructColumnStatsFromPackedRows to have lesser 
> number of getTable calls
> 
>
> Key: HIVE-13905
> URL: https://issues.apache.org/jira/browse/HIVE-13905
> Project: Hive
>  Issue Type: Sub-task
>  Components: Query Planning
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-13905.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13905) optimize ColumnStatsTask::constructColumnStatsFromPackedRows to have lesser number of getTable calls

2016-06-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313551#comment-15313551
 ] 

Hive QA commented on HIVE-13905:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12807457/HIVE-13905.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 10197 tests 
executed
*Failed tests:*
{noformat}
TestJdbcWithMiniHA - did not produce a TEST-*.xml file
TestJdbcWithMiniMr - did not produce a TEST-*.xml file
TestOperationLoggingAPIWithTez - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
{noformat}

Test results: 
http://ec2-54-177-240-2.us-west-1.compute.amazonaws.com/job/PreCommit-HIVE-MASTER-Build/505/testReport
Console output: 
http://ec2-54-177-240-2.us-west-1.compute.amazonaws.com/job/PreCommit-HIVE-MASTER-Build/505/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-505/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 9 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12807457 - PreCommit-HIVE-MASTER-Build

> optimize ColumnStatsTask::constructColumnStatsFromPackedRows to have lesser 
> number of getTable calls
> 
>
> Key: HIVE-13905
> URL: https://issues.apache.org/jira/browse/HIVE-13905
> Project: Hive
>  Issue Type: Sub-task
>  Components: Query Planning
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-13905.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HIVE-13882) When hive.server2.async.exec.async.compile is turned on, from JDBC we will get "The query did not generate a result set"

2016-06-02 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman reopened HIVE-13882:
---

This broke the build under itests/ in both 2.1 and 2.2.

> When hive.server2.async.exec.async.compile is turned on, from JDBC we will 
> get "The query did not generate a result set" 
> -
>
> Key: HIVE-13882
> URL: https://issues.apache.org/jira/browse/HIVE-13882
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 2.1.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-13882.1.patch, HIVE-13882.2.patch
>
>
>  The following would fail with  "The query did not generate a result set"
> stmt.execute("SET hive.driver.parallel.compilation=true");
> stmt.execute("SET hive.server2.async.exec.async.compile=true");
> ResultSet res =  stmt.executeQuery("SELECT * FROM " + tableName);
> res.next();



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13929) org.apache.hadoop.hive.metastore.api.DataOperationType class not found error when a job is submitted by hive

2016-06-02 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-13929:
--
Summary: org.apache.hadoop.hive.metastore.api.DataOperationType class not 
found error when a job is submitted by hive  (was: 
"org.apache.hadoop.hive.metastore.api.DataOperationType" class not found error 
when a job is submitted by hive)

> org.apache.hadoop.hive.metastore.api.DataOperationType class not found error 
> when a job is submitted by hive
> 
>
> Key: HIVE-13929
> URL: https://issues.apache.org/jira/browse/HIVE-13929
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.3.0, 2.1.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Blocker
> Attachments: HIVE-13929.patch
>
>
> {noformat}
> 0: jdbc:hive2://os-r6-atlas-ha-re-re-2.openst> create table source1 (abc 
> String);
> No rows affected (0.268 seconds)
> 0: jdbc:hive2://os-r6-atlas-ha-re-re-2.openst> create table ctas_src as 
> select * from source1;
> INFO  : Tez session hasn't been created yet. Opening session
> INFO  : Dag name: create table ctas_src as select * ...source1(Stage-1)
> INFO  :
> INFO  : Status: Running (Executing on YARN cluster with App id 
> application_1464692782033_0005)
> INFO  : Map 1: -/-
> ERROR : Status: Failed
> ERROR : Vertex failed, vertexName=Map 1, 
> vertexId=vertex_1464692782033_0005_1_00, diagnostics=[Vertex 
> vertex_1464692782033_0005_1_00 [Map 1] killed/failed due to:INIT_FAILURE, 
> Fail to create InputInitializerManager, 
> org.apache.tez.dag.api.TezReflectionException: Unable to instantiate class 
> with 1 arguments: org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator
> at 
> org.apache.tez.common.ReflectionUtils.getNewInstance(ReflectionUtils.java:70)
> at 
> org.apache.tez.common.ReflectionUtils.createClazzInstance(ReflectionUtils.java:89)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$1.run(RootInputInitializerManager.java:151)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$1.run(RootInputInitializerManager.java:148)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.createInitializer(RootInputInitializerManager.java:148)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.runInputInitializers(RootInputInitializerManager.java:121)
> at 
> org.apache.tez.dag.app.dag.impl.VertexImpl.setupInputInitializerManager(VertexImpl.java:4607)
> at org.apache.tez.dag.app.dag.impl.VertexImpl.access$4400(VertexImpl.java:202)
> at 
> org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.handleInitEvent(VertexImpl.java:3423)
> at 
> org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.transition(VertexImpl.java:3372)
> at 
> org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.transition(VertexImpl.java:3353)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at org.apache.tez.state.StateMachineTez.doTransition(StateMachineTez.java:57)
> at org.apache.tez.dag.app.dag.impl.VertexImpl.handle(VertexImpl.java:1925)
> at org.apache.tez.dag.app.dag.impl.VertexImpl.handle(VertexImpl.java:201)
> at 
> org.apache.tez.dag.app.DAGAppMaster$VertexEventDispatcher.handle(DAGAppMaster.java:2053)
> at 
> org.apache.tez.dag.app.DAGAppMaster$VertexEventDispatcher.handle(DAGAppMaster.java:2039)
> at org.apache.tez.common.AsyncDispatcher.dispatch(AsyncDispatcher.java:183)
> at org.apache.tez.common.AsyncDispatcher$1.run(AsyncDispatcher.java:114)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.tez.common.ReflectionUtils.getNewInstance(ReflectionUtils.java:68)
> ... 25 more
> Caused by: java.lang.NoClassDefFoundError: 
> org/apache/hadoop/hive/metastore/api/DataOperationType
> at 
> 

[jira] [Commented] (HIVE-9777) LLAP: Add an option to disable uberization

2016-06-02 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313496#comment-15313496
 ] 

Lefty Leverenz commented on HIVE-9777:
--

Doc done:  *hive.llap.auto.allow.uber* is documented in the LLAP section of 
Configuration Properties.

* [Configuration Properties -- hive.llap.auto.allow.uber | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.llap.auto.allow.uber]

HIVE-13281 changes its default value to false in release 2.1.0.

> LLAP: Add an option to disable uberization
> --
>
> Key: HIVE-9777
> URL: https://issues.apache.org/jira/browse/HIVE-9777
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Gunther Hagleitner
> Fix For: llap
>
> Attachments: HIVE-9777.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13281) Update some default configs for LLAP - disable default uber enabled

2016-06-02 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313489#comment-15313489
 ] 

Lefty Leverenz commented on HIVE-13281:
---

Doc note:  This changes the default value of *hive.llap.auto.allow.uber* in 
2.1.0, so the wiki needs to be updated.

* [Configuration Properties -- hive.llap.auto.allow.uber | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.llap.auto.allow.uber]

Added a TODOC2.1 label.

> Update some default configs for LLAP - disable default uber enabled
> ---
>
> Key: HIVE-13281
> URL: https://issues.apache.org/jira/browse/HIVE-13281
> Project: Hive
>  Issue Type: Task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-13281.03.patch, HIVE-13281.03.patch, 
> HIVE-13281.1.patch, HIVE-13281.2.patch
>
>
> Disable uber mode.
> Enable llap.io by default



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13281) Update some default configs for LLAP - disable default uber enabled

2016-06-02 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-13281:
--
Labels: TODOC2.1  (was: )

> Update some default configs for LLAP - disable default uber enabled
> ---
>
> Key: HIVE-13281
> URL: https://issues.apache.org/jira/browse/HIVE-13281
> Project: Hive
>  Issue Type: Task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-13281.03.patch, HIVE-13281.03.patch, 
> HIVE-13281.1.patch, HIVE-13281.2.patch
>
>
> Disable uber mode.
> Enable llap.io by default



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13904) Ignore case when retrieving ColumnInfo from RowResolver

2016-06-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313475#comment-15313475
 ] 

Hive QA commented on HIVE-13904:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12807459/HIVE-13904.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 17 failed/errored test(s), 10182 tests 
executed
*Failed tests:*
{noformat}
TestJdbcWithMiniHA - did not produce a TEST-*.xml file
TestJdbcWithMiniMr - did not produce a TEST-*.xml file
TestMiniTezCliDriver-auto_join30.q-script_pipe.q-vector_decimal_10_0.q-and-12-more
 - did not produce a TEST-*.xml file
TestOperationLoggingAPIWithTez - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_rp_udaf_percentile_approx_23
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_duplicate_key
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udaf_percentile_approx_23
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_string_concat
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_windowing_distinct
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_subquery_exists
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_string_concat
org.apache.hadoop.hive.cli.TestPerfCliDriver.testPerfCliDriver_query43
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_subquery_exists
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vector_string_concat
{noformat}

Test results: 
http://ec2-54-177-240-2.us-west-1.compute.amazonaws.com/job/PreCommit-HIVE-MASTER-Build/504/testReport
Console output: 
http://ec2-54-177-240-2.us-west-1.compute.amazonaws.com/job/PreCommit-HIVE-MASTER-Build/504/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-504/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 17 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12807459 - PreCommit-HIVE-MASTER-Build

> Ignore case when retrieving ColumnInfo from RowResolver
> ---
>
> Key: HIVE-13904
> URL: https://issues.apache.org/jira/browse/HIVE-13904
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Affects Versions: 2.1.0, 2.0.1, 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-13904.patch
>
>
> To reproduce:
> {noformat}
> -- upper case in subq
> explain
> select * from src b
> where exists
>   (select a.key from src a
>   where b.VALUE = a.VALUE
>   );
> {noformat}
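
The repro fails because the ColumnInfo lookup is case-sensitive, so b.VALUE in the subquery does not resolve to the entry stored under the lower-case alias. A standalone sketch of case-insensitive resolution - the class below is a hypothetical stand-in, not Hive's actual RowResolver code:

{code}
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

public class CaseInsensitiveResolver {
  // Keys are normalized to lower case on both put and get.
  private final Map<String, String> columns = new HashMap<>();

  void put(String alias, String info) {
    columns.put(alias.toLowerCase(Locale.ROOT), info);
  }

  String get(String alias) {
    return columns.get(alias.toLowerCase(Locale.ROOT));
  }

  public static void main(String[] args) {
    CaseInsensitiveResolver rr = new CaseInsensitiveResolver();
    rr.put("value", "src.value: string");
    // With normalization, "VALUE" in the subquery resolves to the same entry.
    System.out.println(rr.get("VALUE")); // prints src.value: string
  }
}
{code}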



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13929) "org.apache.hadoop.hive.metastore.api.DataOperationType" class not found error when a job is submitted by hive

2016-06-02 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313464#comment-15313464
 ] 

Ashutosh Chauhan commented on HIVE-13929:
-

+1

> "org.apache.hadoop.hive.metastore.api.DataOperationType" class not found 
> error when a job is submitted by hive
> --
>
> Key: HIVE-13929
> URL: https://issues.apache.org/jira/browse/HIVE-13929
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.3.0, 2.1.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Blocker
> Attachments: HIVE-13929.patch
>
>
> {noformat}
> 0: jdbc:hive2://os-r6-atlas-ha-re-re-2.openst> create table source1 (abc 
> String);
> No rows affected (0.268 seconds)
> 0: jdbc:hive2://os-r6-atlas-ha-re-re-2.openst> create table ctas_src as 
> select * from source1;
> INFO  : Tez session hasn't been created yet. Opening session
> INFO  : Dag name: create table ctas_src as select * ...source1(Stage-1)
> INFO  :
> INFO  : Status: Running (Executing on YARN cluster with App id 
> application_1464692782033_0005)
> INFO  : Map 1: -/-
> ERROR : Status: Failed
> ERROR : Vertex failed, vertexName=Map 1, 
> vertexId=vertex_1464692782033_0005_1_00, diagnostics=[Vertex 
> vertex_1464692782033_0005_1_00 [Map 1] killed/failed due to:INIT_FAILURE, 
> Fail to create InputInitializerManager, 
> org.apache.tez.dag.api.TezReflectionException: Unable to instantiate class 
> with 1 arguments: org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator
> at 
> org.apache.tez.common.ReflectionUtils.getNewInstance(ReflectionUtils.java:70)
> at 
> org.apache.tez.common.ReflectionUtils.createClazzInstance(ReflectionUtils.java:89)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$1.run(RootInputInitializerManager.java:151)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$1.run(RootInputInitializerManager.java:148)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.createInitializer(RootInputInitializerManager.java:148)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager.runInputInitializers(RootInputInitializerManager.java:121)
> at 
> org.apache.tez.dag.app.dag.impl.VertexImpl.setupInputInitializerManager(VertexImpl.java:4607)
> at org.apache.tez.dag.app.dag.impl.VertexImpl.access$4400(VertexImpl.java:202)
> at 
> org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.handleInitEvent(VertexImpl.java:3423)
> at 
> org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.transition(VertexImpl.java:3372)
> at 
> org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.transition(VertexImpl.java:3353)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at org.apache.tez.state.StateMachineTez.doTransition(StateMachineTez.java:57)
> at org.apache.tez.dag.app.dag.impl.VertexImpl.handle(VertexImpl.java:1925)
> at org.apache.tez.dag.app.dag.impl.VertexImpl.handle(VertexImpl.java:201)
> at 
> org.apache.tez.dag.app.DAGAppMaster$VertexEventDispatcher.handle(DAGAppMaster.java:2053)
> at 
> org.apache.tez.dag.app.DAGAppMaster$VertexEventDispatcher.handle(DAGAppMaster.java:2039)
> at org.apache.tez.common.AsyncDispatcher.dispatch(AsyncDispatcher.java:183)
> at org.apache.tez.common.AsyncDispatcher$1.run(AsyncDispatcher.java:114)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.tez.common.ReflectionUtils.getNewInstance(ReflectionUtils.java:68)
> ... 25 more
> Caused by: java.lang.NoClassDefFoundError: 
> org/apache/hadoop/hive/metastore/api/DataOperationType
> at 
> org.apache.hadoop.hive.ql.io.AcidUtils$Operation.(AcidUtils.java:196)
> at org.apache.hadoop.hive.ql.plan.FileSinkDesc.(FileSinkDesc.java:93)
> at 
> org.apache.hadoop.hive.ql.plan.FileSinkDescConstructorAccess.newInstance(Unknown
>  

[jira] [Updated] (HIVE-13866) flatten callstack for directSQL errors

2016-06-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13866:

Status: Patch Available  (was: Open)

> flatten callstack for directSQL errors
> --
>
> Key: HIVE-13866
> URL: https://issues.apache.org/jira/browse/HIVE-13866
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13866.patch
>
>
> These errors look like final errors and confuse people. The callstack may be 
> useful if it's some datanucleus/db issue, but it needs to be flattened and 
> logged with a warning that this is not a final query error and that there's a 
> fallback



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13866) flatten callstack for directSQL errors

2016-06-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13866:

Attachment: HIVE-13866.patch

[~ashutoshc] can you take a look?

> flatten callstack for directSQL errors
> --
>
> Key: HIVE-13866
> URL: https://issues.apache.org/jira/browse/HIVE-13866
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13866.patch
>
>
> These errors look like final errors and confuse people. The callstack may be 
> useful if it's some datanucleus/db issue, but it needs to be flattened and 
> logged with a warning that this is not a final query error and that there's a 
> fallback



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13760) Add a HIVE_QUERY_TIMEOUT configuration to kill a query if a query is running for more than the configured timeout value.

2016-06-02 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313457#comment-15313457
 ] 

zhihai xu commented on HIVE-13760:
--

Thanks for the review, [~csun]! That is a good suggestion. I attached a patch 
which adds more description for this new config: if both are set, the smaller 
one will be taken. Please review it, thanks!
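
On the client side there is already a standard JDBC knob for this; a small sketch, assuming the Hive driver honours java.sql.Statement#setQueryTimeout (the connection URL and table name below are placeholders). Per the comment above, when both a client-side timeout and the new server-side config are set, the smaller one wins:

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class QueryTimeoutExample {
  public static void main(String[] args) throws Exception {
    // Placeholder connection URL; adjust for your HiveServer2.
    try (Connection conn = DriverManager.getConnection(
             "jdbc:hive2://localhost:10000/default", "hive", "");
         Statement stmt = conn.createStatement()) {
      // Standard JDBC per-statement timeout, in seconds. How it interacts with
      // the server-side timeout from this JIRA (smaller value wins, per the
      // comment above) depends on the final patch.
      stmt.setQueryTimeout(60);
      try (ResultSet rs = stmt.executeQuery("SELECT count(*) FROM some_table")) {
        while (rs.next()) {
          System.out.println(rs.getLong(1));
        }
      }
    }
  }
}
{code}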

> Add a HIVE_QUERY_TIMEOUT configuration to kill a query if a query is running 
> for more than the configured timeout value.
> 
>
> Key: HIVE-13760
> URL: https://issues.apache.org/jira/browse/HIVE-13760
> Project: Hive
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 2.0.0
>Reporter: zhihai xu
>Assignee: zhihai xu
> Attachments: HIVE-13760.000.patch, HIVE-13760.001.patch
>
>
> Add a HIVE_QUERY_TIMEOUT configuration to kill a query if a query is running 
> for more than the configured timeout value. The default value will be -1 , 
> which means no timeout. This will be useful for  user to manage queries with 
> SLA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13760) Add a HIVE_QUERY_TIMEOUT configuration to kill a query if a query is running for more than the configured timeout value.

2016-06-02 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated HIVE-13760:
-
Attachment: HIVE-13760.001.patch

> Add a HIVE_QUERY_TIMEOUT configuration to kill a query if a query is running 
> for more than the configured timeout value.
> 
>
> Key: HIVE-13760
> URL: https://issues.apache.org/jira/browse/HIVE-13760
> Project: Hive
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 2.0.0
>Reporter: zhihai xu
>Assignee: zhihai xu
> Attachments: HIVE-13760.000.patch, HIVE-13760.001.patch
>
>
> Add a HIVE_QUERY_TIMEOUT configuration to kill a query if a query is running 
> for more than the configured timeout value. The default value will be -1 , 
> which means no timeout. This will be useful for  user to manage queries with 
> SLA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-4239) Remove lock on compilation stage

2016-06-02 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313453#comment-15313453
 ] 

Lefty Leverenz commented on HIVE-4239:
--

HIVE-13882 changes the description of *hive.driver.parallel.compilation* in 
release 2.1.0.

> Remove lock on compilation stage
> 
>
> Key: HIVE-4239
> URL: https://issues.apache.org/jira/browse/HIVE-4239
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, Query Processor
>Reporter: Carl Steinbach
>Assignee: Sergey Shelukhin
>  Labels: TODOC2.0
> Fix For: 2.0.0
>
> Attachments: HIVE-4239.01.patch, HIVE-4239.02.patch, 
> HIVE-4239.03.patch, HIVE-4239.04.patch, HIVE-4239.05.patch, 
> HIVE-4239.06.patch, HIVE-4239.07.patch, HIVE-4239.08.patch, HIVE-4239.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13882) When hive.server2.async.exec.async.compile is turned on, from JDBC we will get "The query did not generate a result set"

2016-06-02 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313448#comment-15313448
 ] 

Lefty Leverenz commented on HIVE-13882:
---

Doc note:  This changes the description of *hive.driver.parallel.compilation*, 
which was introduced by HIVE-4239 in 2.0.0.  It needs to be documented in the 
HiveServer2 section of Configuration Properties.

* [Configuration Properties -- HiveServer2 | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-HiveServer2]

Added a TODOC2.1 label.

> When hive.server2.async.exec.async.compile is turned on, from JDBC we will 
> get "The query did not generate a result set" 
> -
>
> Key: HIVE-13882
> URL: https://issues.apache.org/jira/browse/HIVE-13882
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 2.1.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-13882.1.patch, HIVE-13882.2.patch
>
>
>  The following would fail with  "The query did not generate a result set"
> stmt.execute("SET hive.driver.parallel.compilation=true");
> stmt.execute("SET hive.server2.async.exec.async.compile=true");
> ResultSet res =  stmt.executeQuery("SELECT * FROM " + tableName);
> res.next();



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13882) When hive.server2.async.exec.async.compile is turned on, from JDBC we will get "The query did not generate a result set"

2016-06-02 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-13882:
--
Labels: TODOC2.1  (was: )

> When hive.server2.async.exec.async.compile is turned on, from JDBC we will 
> get "The query did not generate a result set" 
> -
>
> Key: HIVE-13882
> URL: https://issues.apache.org/jira/browse/HIVE-13882
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 2.1.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-13882.1.patch, HIVE-13882.2.patch
>
>
>  The following would fail with  "The query did not generate a result set"
> stmt.execute("SET hive.driver.parallel.compilation=true");
> stmt.execute("SET hive.server2.async.exec.async.compile=true");
> ResultSet res =  stmt.executeQuery("SELECT * FROM " + tableName);
> res.next();



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13759) LlapTaskUmbilicalExternalClient should be closed by the record reader

2016-06-02 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-13759:
--
Attachment: HIVE-13759.2.patch

new patch

> LlapTaskUmbilicalExternalClient should be closed by the record reader
> -
>
> Key: HIVE-13759
> URL: https://issues.apache.org/jira/browse/HIVE-13759
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-13759.1.patch, HIVE-13759.2.patch
>
>
> The umbilical external client (and the server socket it creates) doesn't look 
> like it's getting closed.
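
The general shape of the fix the summary asks for - the reader owns the client's lifetime and closes it - can be sketched in a few lines; the class names below are hypothetical stand-ins, not the actual LLAP classes:

{code}
import java.io.Closeable;
import java.io.IOException;

/** Hypothetical stand-in for a record reader that owns the umbilical client. */
public class ExternalClientRecordReader implements Closeable {
  private final Closeable umbilicalClient; // e.g. the client holding the server socket

  public ExternalClientRecordReader(Closeable umbilicalClient) {
    this.umbilicalClient = umbilicalClient;
  }

  @Override
  public void close() throws IOException {
    // Closing the reader releases the client (and its server socket) as well,
    // so callers that only see the reader cannot leak it.
    umbilicalClient.close();
  }

  public static void main(String[] args) throws IOException {
    Closeable client = () -> System.out.println("umbilical client closed");
    try (ExternalClientRecordReader reader = new ExternalClientRecordReader(client)) {
      // read rows here ...
    } // try-with-resources closes the reader, which closes the client
  }
}
{code}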



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13935) Changing char column of textfile table to string/varchar leaves white space.

2016-06-02 Thread Takahiko Saito (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313423#comment-15313423
 ] 

Takahiko Saito commented on HIVE-13935:
---

Thanks for the suggestion, [~gopalv]. Yes, the file in HDFS contains the extra 
spaces:
{noformat}
0: jdbc:hive2://ts-0531-1.openstacklocal:2181> dfs -cat 
hdfs://ts-0531-5.openstacklocal:8020/apps/hive/warehouse/test_text/00_0
0: jdbc:hive2://ts-0531-1.openstacklocal:2181> ;
+---+--+
|DFS Output |
+---+--+
| horton works  |
+---+--+
{noformat}
Cc: [~mmccline]

> Changing char column of textfile table to string/varchar leaves white space.
> 
>
> Key: HIVE-13935
> URL: https://issues.apache.org/jira/browse/HIVE-13935
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
>Reporter: Takahiko Saito
>
> {noformat}
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> create table test_text (c 
> char(16)) stored as textfile;
> No rows affected (0.091 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> insert into table test_text 
> values ('horton works ');
> INFO  : Session is already open
> INFO  : Dag name: insert into table test_text values ('ho...')(Stage-1)
> INFO  :
> INFO  : Status: Running (Executing on YARN cluster with App id 
> application_1464222003837_0399)
> INFO  : Map 1: 0/1
> INFO  : Map 1: 0(+1)/1
> INFO  : Map 1: 1/1
> INFO  : Loading data to table default.test_text from 
> hdfs://os-r6-ifsmes-hiveserver2-11-5.openstacklocal:8020/apps/hive/warehouse/test_text/.hive-staging_hive_2016-05-26_17-45-29_669_2888061873550824337-1/-ext-1
> INFO  : Table default.test_text stats: [numFiles=1, numRows=1, totalSize=17, 
> rawDataSize=16]
> No rows affected (6.849 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> select * from test_text;
> +---+--+
> |test_text.c|
> +---+--+
> | horton works  |
> +---+--+
> 1 row selected (0.098 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> describe test_text;
> +---++--+--+
> | col_name  | data_type  | comment  |
> +---++--+--+
> | c | char(16)   |  |
> +---++--+--+
> 1 row selected (0.127 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> alter table test_text change 
> column c c string;
> No rows affected (0.145 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> describe test_text;
> +---++--+--+
> | col_name  | data_type  | comment  |
> +---++--+--+
> | c | string |  |
> +---++--+--+
> 1 row selected (0.127 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> select * from test_text;
> +---+--+
> |test_text.c|
> +---+--+
> | horton works  |
> +---+--+
> 1 row selected (0.066 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13617) LLAP: support non-vectorized execution in IO

2016-06-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13617:

Attachment: HIVE-13617.04.patch

Rebased the patch... also resubmitting for HiveQA, since it appears it has 
forgotten about the patch; and so did I.

> LLAP: support non-vectorized execution in IO
> 
>
> Key: HIVE-13617
> URL: https://issues.apache.org/jira/browse/HIVE-13617
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13617-wo-11417.patch, HIVE-13617-wo-11417.patch, 
> HIVE-13617.01.patch, HIVE-13617.03.patch, HIVE-13617.04.patch, 
> HIVE-13617.patch, HIVE-13617.patch, HIVE-15396-with-oi.patch
>
>
> Two approaches - a separate decoding path, into rows instead of VRBs; or 
> decoding VRBs into rows on a higher level (the original LlapInputFormat). I 
> think the latter might be better - it's not a hugely important path, and perf 
> in non-vectorized case is not the best anyway, so it's better to make do with 
> much less new code and architectural disruption. 
> Some ORC patches in progress introduce an easy to reuse (or so I hope, 
> anyway) VRB-to-row conversion, so we should just use that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13901) Hivemetastore add partitions can be slow depending on filesystems

2016-06-02 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313410#comment-15313410
 ] 

Sergey Shelukhin commented on HIVE-13901:
-

Left some comments on RB.

> Hivemetastore add partitions can be slow depending on filesystems
> -
>
> Key: HIVE-13901
> URL: https://issues.apache.org/jira/browse/HIVE-13901
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-13901.1.patch, HIVE-13901.2.patch
>
>
> Depending on FS, creating external tables & adding partitions can be 
> expensive (e.g msck which adds all partitions).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13935) Changing char column of textfile table to string/varchar leaves white space.

2016-06-02 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313408#comment-15313408
 ] 

Gopal V commented on HIVE-13935:


[~taksaito]: this is possibly a WONTFIX scenario - char() columns are encoded 
in TextFile by padding them with extra spaces. You can probably verify that the 
HDFS file actually contains the extra spaces.

> Changing char column of textfile table to string/varchar leaves white space.
> 
>
> Key: HIVE-13935
> URL: https://issues.apache.org/jira/browse/HIVE-13935
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
>Reporter: Takahiko Saito
>
> {noformat}
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> create table test_text (c 
> char(16)) stored as textfile;
> No rows affected (0.091 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> insert into table test_text 
> values ('horton works ');
> INFO  : Session is already open
> INFO  : Dag name: insert into table test_text values ('ho...')(Stage-1)
> INFO  :
> INFO  : Status: Running (Executing on YARN cluster with App id 
> application_1464222003837_0399)
> INFO  : Map 1: 0/1
> INFO  : Map 1: 0(+1)/1
> INFO  : Map 1: 1/1
> INFO  : Loading data to table default.test_text from 
> hdfs://os-r6-ifsmes-hiveserver2-11-5.openstacklocal:8020/apps/hive/warehouse/test_text/.hive-staging_hive_2016-05-26_17-45-29_669_2888061873550824337-1/-ext-1
> INFO  : Table default.test_text stats: [numFiles=1, numRows=1, totalSize=17, 
> rawDataSize=16]
> No rows affected (6.849 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> select * from test_text;
> +---+--+
> |test_text.c|
> +---+--+
> | horton works  |
> +---+--+
> 1 row selected (0.098 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> describe test_text;
> +---++--+--+
> | col_name  | data_type  | comment  |
> +---++--+--+
> | c | char(16)   |  |
> +---++--+--+
> 1 row selected (0.127 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> alter table test_text change 
> column c c string;
> No rows affected (0.145 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> describe test_text;
> +---++--+--+
> | col_name  | data_type  | comment  |
> +---++--+--+
> | c | string |  |
> +---++--+--+
> 1 row selected (0.127 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> select * from test_text;
> +---+--+
> |test_text.c|
> +---+--+
> | horton works  |
> +---+--+
> 1 row selected (0.066 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13935) Changing char column of textfile table to string/varchar leaves white space.

2016-06-02 Thread Takahiko Saito (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takahiko Saito updated HIVE-13935:
--
Summary: Changing char column of textfile table to string/varchar leaves 
white space.  (was: Changing char column of orc table to string/var char drops 
white space.)

> Changing char column of textfile table to string/varchar leaves white space.
> 
>
> Key: HIVE-13935
> URL: https://issues.apache.org/jira/browse/HIVE-13935
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
>Reporter: Takahiko Saito
>
> {noformat}
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> create table test_text (c 
> char(16)) stored as textfile;
> No rows affected (0.091 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> insert into table test_text 
> values ('horton works ');
> INFO  : Session is already open
> INFO  : Dag name: insert into table test_text values ('ho...')(Stage-1)
> INFO  :
> INFO  : Status: Running (Executing on YARN cluster with App id 
> application_1464222003837_0399)
> INFO  : Map 1: 0/1
> INFO  : Map 1: 0(+1)/1
> INFO  : Map 1: 1/1
> INFO  : Loading data to table default.test_text from 
> hdfs://os-r6-ifsmes-hiveserver2-11-5.openstacklocal:8020/apps/hive/warehouse/test_text/.hive-staging_hive_2016-05-26_17-45-29_669_2888061873550824337-1/-ext-1
> INFO  : Table default.test_text stats: [numFiles=1, numRows=1, totalSize=17, 
> rawDataSize=16]
> No rows affected (6.849 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> select * from test_text;
> +---+--+
> |test_text.c|
> +---+--+
> | horton works  |
> +---+--+
> 1 row selected (0.098 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> describe test_text;
> +---++--+--+
> | col_name  | data_type  | comment  |
> +---++--+--+
> | c | char(16)   |  |
> +---++--+--+
> 1 row selected (0.127 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> alter table test_text change 
> column c c string;
> No rows affected (0.145 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> describe test_text;
> +---++--+--+
> | col_name  | data_type  | comment  |
> +---++--+--+
> | c | string |  |
> +---++--+--+
> 1 row selected (0.127 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> select * from test_text;
> +---+--+
> |test_text.c|
> +---+--+
> | horton works  |
> +---+--+
> 1 row selected (0.066 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-13866) flatten callstack for directSQL errors

2016-06-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HIVE-13866:
---

Assignee: Sergey Shelukhin

> flatten callstack for directSQL errors
> --
>
> Key: HIVE-13866
> URL: https://issues.apache.org/jira/browse/HIVE-13866
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>
> These errors look like final errors and confuse people. The callstack may be 
> useful if it's some datanucleus/db issue, but it needs to be flattened and 
> logged with a warning that this is not a final query error and that there's a 
> fallback.
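
A minimal sketch of the kind of flattening described above, assuming a hypothetical
helper inside the metastore's directSQL fallback path (the method and logger names
are illustrative, not the actual Hive code):

{code}
// Illustrative only: collapse a directSQL exception chain into a single line
// and log it as a warning, making clear that a JDO/ORM fallback will be tried.
private static String flattenCause(Throwable t) {
  StringBuilder sb = new StringBuilder();
  for (Throwable cur = t; cur != null; cur = cur.getCause()) {
    if (sb.length() > 0) {
      sb.append(" <- ");
    }
    sb.append(cur.getClass().getSimpleName()).append(": ").append(cur.getMessage());
  }
  return sb.toString();
}

// Hypothetical usage inside the fallback path:
// LOG.warn("DirectSQL failed, falling back to ORM: " + flattenCause(ex));
{code}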



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13901) Hivemetastore add partitions can be slow depending on filesystems

2016-06-02 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HIVE-13901:

Attachment: HIVE-13901.2.patch

HiveConf changes were missed in the earlier patch.

> Hivemetastore add partitions can be slow depending on filesystems
> -
>
> Key: HIVE-13901
> URL: https://issues.apache.org/jira/browse/HIVE-13901
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-13901.1.patch, HIVE-13901.2.patch
>
>
> Depending on the FS, creating external tables & adding partitions can be 
> expensive (e.g. msck, which adds all partitions).
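
One way such per-partition filesystem round trips can be amortized is to probe the
directories in parallel; the sketch below is only an illustration of that idea (pool
size, imports from java.util.concurrent and org.apache.hadoop.fs, and the calling
context are all assumptions, not the attached patch):

{code}
// Illustrative sketch only: probe partition directories in parallel so a slow
// filesystem (e.g. an object store) does not cost one serial round trip per partition.
static List<Path> existingPartitionDirs(FileSystem fs, List<Path> candidates) throws Exception {
  ExecutorService pool = Executors.newFixedThreadPool(16);   // pool size is an assumption
  try {
    List<Future<Boolean>> checks = new ArrayList<>();
    for (Path p : candidates) {
      checks.add(pool.submit(() -> fs.exists(p)));           // Callable, may throw IOException
    }
    List<Path> existing = new ArrayList<>();
    for (int i = 0; i < candidates.size(); i++) {
      if (checks.get(i).get()) {
        existing.add(candidates.get(i));
      }
    }
    return existing;
  } finally {
    pool.shutdown();
  }
}
{code}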



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13913) LLAP: introduce backpressure to recordreader

2016-06-02 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313389#comment-15313389
 ] 

Gopal V commented on HIVE-13913:


Throws 

{code}
Caused by: java.io.IOException: java.lang.IllegalStateException: Queue full
at 
org.apache.hadoop.hive.llap.io.api.impl.LlapInputFormat$LlapRecordReader.rethrowErrorIfAny(LlapInputFormat.java:327)
at 
org.apache.hadoop.hive.llap.io.api.impl.LlapInputFormat$LlapRecordReader.nextCvb(LlapInputFormat.java:287)
at 
org.apache.hadoop.hive.llap.io.api.impl.LlapInputFormat$LlapRecordReader.next(LlapInputFormat.java:213)
at 
org.apache.hadoop.hive.llap.io.api.impl.LlapInputFormat$LlapRecordReader.next(LlapInputFormat.java:129)
at 
org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:350)
... 17 more
Caused by: java.lang.IllegalStateException: Queue full
at java.util.AbstractQueue.add(AbstractQueue.java:98)
at 
java.util.concurrent.ArrayBlockingQueue.add(ArrayBlockingQueue.java:312)
at 
org.apache.hadoop.hive.llap.io.api.impl.LlapInputFormat$LlapRecordReader.consumeData(LlapInputFormat.java:352)
at 
org.apache.hadoop.hive.llap.io.api.impl.LlapInputFormat$LlapRecordReader.consumeData(LlapInputFormat.java:129)
at 
org.apache.hadoop.hive.llap.io.decode.OrcEncodedDataConsumer.decodeBatch(OrcEncodedDataConsumer.java:163)
at 
org.apache.hadoop.hive.llap.io.decode.OrcEncodedDataConsumer.decodeBatch(OrcEncodedDataConsumer.java:49)
at 
org.apache.hadoop.hive.llap.io.decode.EncodedDataConsumer.consumeData(EncodedDataConsumer.java:76)
at 
org.apache.hadoop.hive.llap.io.decode.EncodedDataConsumer.consumeData(EncodedDataConsumer.java:30)
at 
org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.readEncodedColumns(EncodedReaderImpl.java:408)
at 
org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.performDataRead(OrcEncodedDataReader.java:403)
at 
org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:195)
at 
org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:192)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at 
org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:192)
at 
org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:90)
... 5 more
]], Vertex did not succeed du
{code}
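
The IllegalStateException above comes from ArrayBlockingQueue.add(), which throws when
the queue is at capacity; the backpressure idea is to make the producer block instead.
A minimal, self-contained sketch of that difference (not the actual patch):

{code}
import java.util.concurrent.ArrayBlockingQueue;

public class BackpressureSketch {
  public static void main(String[] args) throws InterruptedException {
    ArrayBlockingQueue<String> queue = new ArrayBlockingQueue<>(2);

    // Consumer thread: drains slowly, simulating a slow record reader.
    Thread consumer = new Thread(() -> {
      try {
        while (true) {
          Thread.sleep(100);
          System.out.println("consumed " + queue.take());
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    });
    consumer.setDaemon(true);
    consumer.start();

    for (int i = 0; i < 5; i++) {
      // add() would throw IllegalStateException: Queue full once capacity is hit;
      // put() blocks the producer instead, which is the desired backpressure.
      queue.put("row-batch-" + i);
      System.out.println("produced row-batch-" + i);
    }
  }
}
{code}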

> LLAP: introduce backpressure to recordreader
> 
>
> Key: HIVE-13913
> URL: https://issues.apache.org/jira/browse/HIVE-13913
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13913.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13731) LLAP: return LLAP token with the splits

2016-06-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13731:

Attachment: HIVE-13731.01.patch
HIVE-13731.01.wo.13675-13443.patch

Rebasing for now

> LLAP: return LLAP token with the splits
> ---
>
> Key: HIVE-13731
> URL: https://issues.apache.org/jira/browse/HIVE-13731
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13731.01.patch, HIVE-13731.01.wo.13675-13443.patch, 
> HIVE-13731.patch, HIVE-13731.wo.13444-13675-13443.patch
>
>
> Need to return the token with the splits, then take it in LLAPIF and make 
> sure it's used when talking to LLAP



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13917) Investigate and fix tests which are timing out in the precommit build

2016-06-02 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313388#comment-15313388
 ] 

Ashutosh Chauhan commented on HIVE-13917:
-

+1

> Investigate and fix tests which are timing out in the precommit build
> -
>
> Key: HIVE-13917
> URL: https://issues.apache.org/jira/browse/HIVE-13917
> Project: Hive
>  Issue Type: Task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
>Priority: Critical
> Attachments: HIVE-13917.01.patch
>
>
> Three tests seem to time out consistently.
> TestJdbcWithMiniHA
> TestJdbcWithMiniMr
> TestOperationLoggingAPIWithTez



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13917) Investigate and fix tests which are timing out in the precommit build

2016-06-02 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-13917:
--
Attachment: HIVE-13917.01.patch

This patch seems to fix all three failing tests by pulling in the yarn-api 
dependency explicitly.

cc [~ashutoshc]  - please review.

> Investigate and fix tests which are timing out in the precommit build
> -
>
> Key: HIVE-13917
> URL: https://issues.apache.org/jira/browse/HIVE-13917
> Project: Hive
>  Issue Type: Task
>Reporter: Siddharth Seth
>Priority: Critical
> Attachments: HIVE-13917.01.patch
>
>
> Three tests seem to time out consistently.
> TestJdbcWithMiniHA
> TestJdbcWithMiniMr
> TestOperationLoggingAPIWithTez



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13917) Investigate and fix tests which are timing out in the precommit build

2016-06-02 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-13917:
--
Assignee: Siddharth Seth
  Status: Patch Available  (was: Open)

> Investigate and fix tests which are timing out in the precommit build
> -
>
> Key: HIVE-13917
> URL: https://issues.apache.org/jira/browse/HIVE-13917
> Project: Hive
>  Issue Type: Task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
>Priority: Critical
> Attachments: HIVE-13917.01.patch
>
>
> Three tests seem to time out consistently.
> TestJdbcWithMiniHA
> TestJdbcWithMiniMr
> TestOperationLoggingAPIWithTez



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13917) Investigate and fix tests which are timing out in the precommit build

2016-06-02 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313372#comment-15313372
 ] 

Siddharth Seth commented on HIVE-13917:
---

Looks like the MiniCluster was not starting properly, and the failure did not 
cause the test to fail - likely due to the guice injector in YARN running into 
the issue.
From target/surefire-reports/org.apache.hive.jdbc.TestJdbcWithMiniHA-output.txt
{code}
Exception in thread "RM-0" java.lang.NoClassDefFoundError: 
org/apache/hadoop/yarn/api/ApplicationBaseProtocol
  at 
org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebApp.setup(RMWebApp.java:57)
  at org.apache.hadoop.yarn.webapp.WebApp.configureServlets(WebApp.java:159)
  at com.google.inject.servlet.ServletModule.configure(ServletModule.java:53)
  at com.google.inject.AbstractModule.configure(AbstractModule.java:59)
  at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:223)
  at com.google.inject.spi.Elements.getElements(Elements.java:101)
  at 
com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:133)
  at 
com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:103)
  at com.google.inject.Guice.createInjector(Guice.java:95)
  at com.google.inject.Guice.createInjector(Guice.java:72)
  at com.google.inject.Guice.createInjector(Guice.java:62)
  at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:280)
  at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:987)
  at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1087)
  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
  at 
org.apache.hadoop.yarn.server.MiniYARNCluster$2.run(MiniYARNCluster.java:310)
Caused by: java.lang.ClassNotFoundException: 
org.apache.hadoop.yarn.api.ApplicationBaseProtocol
  at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
  ... 16 more
{code}


The test output file 
(target/surefire-reports/org.apache.hive.jdbc.TestJdbcWithMiniHA-output.txt in 
this case) is not copied out from the test system. I'll open another jira to 
try copying such files out as well.
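
For anyone verifying the classpath fix, a quick diagnostic sketch (purely illustrative)
that fails the same way when hadoop-yarn-api is missing from the test classpath:

{code}
// Diagnostic sketch: throws ClassNotFoundException when hadoop-yarn-api is absent,
// mirroring the NoClassDefFoundError seen in the RM-0 thread above.
public class YarnApiPresenceCheck {
  public static void main(String[] args) throws ClassNotFoundException {
    Class.forName("org.apache.hadoop.yarn.api.ApplicationBaseProtocol");
    System.out.println("hadoop-yarn-api is on the classpath");
  }
}
{code}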


> Investigate and fix tests which are timing out in the precommit build
> -
>
> Key: HIVE-13917
> URL: https://issues.apache.org/jira/browse/HIVE-13917
> Project: Hive
>  Issue Type: Task
>Reporter: Siddharth Seth
>Priority: Critical
>
> Three tests seem to time out consistently.
> TestJdbcWithMiniHA
> TestJdbcWithMiniMr
> TestOperationLoggingAPIWithTez



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13933) Add an option to turn off parallel file moves

2016-06-02 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313370#comment-15313370
 ] 

Hari Sankar Sivarama Subramaniyan commented on HIVE-13933:
--

+1 

> Add an option to turn off parallel file moves
> -
>
> Key: HIVE-13933
> URL: https://issues.apache.org/jira/browse/HIVE-13933
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 2.1.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-13933.patch
>
>
> Since this is a new feature, it makes sense to have the ability to turn it off.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13909) upgrade ACLs in LLAP registry when the cluster is upgraded to secure

2016-06-02 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313362#comment-15313362
 ] 

Prasanth Jayachandran commented on HIVE-13909:
--

lgtm, +1

> upgrade ACLs in LLAP registry when the cluster is upgraded to secure
> 
>
> Key: HIVE-13909
> URL: https://issues.apache.org/jira/browse/HIVE-13909
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13909.01.patch, HIVE-13909.patch
>
>
> ZK model has authentication and authorization mixed together, so it's 
> impossible to set up acls that would carry over between unsecure and secure 
> clusters in the normal case (i.e. work for specific users no matter the 
> authentication method).
> To support cluster updates from unsecure to secure, we'd need to change the 
> ACLs ourselves.
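
A rough sketch of what "changing the ACLs ourselves" could look like with the plain
ZooKeeper client; the connect string, registry path, principal, and recursion strategy
here are assumptions, not the actual LLAP registry code:

{code}
import java.util.Arrays;
import java.util.List;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.ACL;
import org.apache.zookeeper.data.Id;

public class LlapRegistryAclUpgradeSketch {
  public static void main(String[] args) throws Exception {
    ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, null);  // connect string assumed
    String base = "/llap";                                        // registry path assumed
    // Tighten world-writable nodes to the SASL identity of the hive user,
    // keeping world read access so unsecured clients can still discover the cluster.
    List<ACL> secureAcls = Arrays.asList(
        new ACL(ZooDefs.Perms.ALL, new Id("sasl", "hive")),
        new ACL(ZooDefs.Perms.READ, ZooDefs.Ids.ANYONE_ID_UNSAFE));
    setAclRecursively(zk, base, secureAcls);
    zk.close();
  }

  static void setAclRecursively(ZooKeeper zk, String path, List<ACL> acls) throws Exception {
    zk.setACL(path, acls, -1);                  // -1 skips the ACL version check
    for (String child : zk.getChildren(path, false)) {
      setAclRecursively(zk, path + "/" + child, acls);
    }
  }
}
{code}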



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13159) TxnHandler should support datanucleus.connectionPoolingType = None

2016-06-02 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313346#comment-15313346
 ] 

Sergey Shelukhin commented on HIVE-13159:
-

Hmm

> TxnHandler should support datanucleus.connectionPoolingType = None
> --
>
> Key: HIVE-13159
> URL: https://issues.apache.org/jira/browse/HIVE-13159
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Sergey Shelukhin
>Assignee: Alan Gates
> Attachments: HIVE-13159.2.patch, HIVE-13159.patch
>
>
> Right now, one has to choose bonecp or dbcp.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13772) LLAPIF: encryption for the output streaming channel

2016-06-02 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313347#comment-15313347
 ] 

Sergey Shelukhin commented on HIVE-13772:
-

Security is covered by HIVE-13827

> LLAPIF: encryption for the output streaming channel
> ---
>
> Key: HIVE-13772
> URL: https://issues.apache.org/jira/browse/HIVE-13772
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Jason Dere
>
> As far as I understood from some discussion, the channel that is used for the 
> streaming of results doesn't have proper security and/or encryption.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13772) LLAPIF: encryption for the output streaming channel

2016-06-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13772:

Summary: LLAPIF: encryption for the output streaming channel  (was: LLAPIF: 
security and encryption for the output streaming channel)

> LLAPIF: encryption for the output streaming channel
> ---
>
> Key: HIVE-13772
> URL: https://issues.apache.org/jira/browse/HIVE-13772
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Jason Dere
>
> As far as I understood from some discussion, the channel that is used for the 
> streaming of results doesn't have proper security and/or encryption.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13934) Configure Tez to make noconditional task size memory available for the Processor

2016-06-02 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-13934:
--
Description: 
Currently, noconditionaltasksize is not validated against the container size or 
the reservations made in the container by Tez for Inputs/Outputs, etc.

Check this at compile time to see if enough memory is available, or set up the 
vertex to reserve additional memory for the Processor.

  was:Otherwise it's very easy to run OOM


> Configure Tez to make noconditional task size memory available for the 
> Processor
> ---
>
> Key: HIVE-13934
> URL: https://issues.apache.org/jira/browse/HIVE-13934
> Project: Hive
>  Issue Type: Bug
>Reporter: Wei Zheng
>Assignee: Siddharth Seth
>
> Currently, noconditionaltasksize is not validated against the container size or 
> the reservations made in the container by Tez for Inputs/Outputs, etc.
> Check this at compile time to see if enough memory is available, or set up 
> the vertex to reserve additional memory for the Processor.
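
A back-of-the-envelope sketch of the compile-time check being described; the property
names are the usual Hive settings, but the reserved figure, the defaults, and the
surrounding conf/LOG variables are assumptions rather than the eventual fix:

{code}
// Illustrative compile-time sanity check; the reserved figure and policy are assumptions.
long containerMb = conf.getLong("hive.tez.container.size", 1024);
long noConditionalTaskBytes =
    conf.getLong("hive.auto.convert.join.noconditionaltask.size", 10000000L);
long tezReservedBytes = 256L * 1024 * 1024;     // assumed reservation for Tez inputs/outputs

long availableForProcessor = containerMb * 1024L * 1024L - tezReservedBytes;
if (noConditionalTaskBytes > availableForProcessor) {
  // Either flag this at compile time or configure the vertex to reserve extra memory.
  LOG.warn("noconditionaltask size " + noConditionalTaskBytes + " exceeds the estimated "
      + availableForProcessor + " bytes available to the Processor");
}
{code}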



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13934) Configure Tez to make noconditional task size memory available for the Processor

2016-06-02 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-13934:
--
Summary: Configure Tez to make noconditional task size memory available for 
the Processor  (was: Tez needs to allocate extra buffer space for joins)

> Configure Tez to make noconditional task size memory available for the 
> Processor
> ---
>
> Key: HIVE-13934
> URL: https://issues.apache.org/jira/browse/HIVE-13934
> Project: Hive
>  Issue Type: Bug
>Reporter: Wei Zheng
>Assignee: Siddharth Seth
>
> Otherwise it's very easy to run OOM



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13932) Hive SMB Map Join with small set of LIMIT failed with NPE

2016-06-02 Thread Yongzhi Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongzhi Chen updated HIVE-13932:

Status: Patch Available  (was: Open)

Need code review.

> Hive SMB Map Join with small set of LIMIT failed with NPE
> -
>
> Key: HIVE-13932
> URL: https://issues.apache.org/jira/browse/HIVE-13932
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.0.0
>Reporter: Yongzhi Chen
>Assignee: Yongzhi Chen
> Attachments: HIVE-13932.1.patch
>
>
> 1) prepare sample data:
> a=1
> while [[ $a -lt 100 ]]; do echo $a ; let a=$a+1; done > data
> 2) prepare source hive table:
> CREATE TABLE `s`(`c` string);
> load data local inpath 'data' into table s;
> 3) prepare the bucketed table:
> set hive.enforce.bucketing=true;
> set hive.enforce.sorting=true;
> CREATE TABLE `t`(`c` string) CLUSTERED BY (c) SORTED BY (c) INTO 5 BUCKETS;
> insert into t select * from s;
> 4) reproduce this issue:
> SET hive.auto.convert.sortmerge.join = true;
> SET hive.auto.convert.sortmerge.join.bigtable.selection.policy = 
> org.apache.hadoop.hive.ql.optimizer.LeftmostBigTableSelectorForAutoSMJ;
> SET hive.auto.convert.sortmerge.join.noconditionaltask = true;
> SET hive.optimize.bucketmapjoin = true;
> SET hive.optimize.bucketmapjoin.sortedmerge = true;
> select * from t join t t1 on t.c=t1.c limit 1;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13932) Hive SMB Map Join with small set of LIMIT failed with NPE

2016-06-02 Thread Yongzhi Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongzhi Chen updated HIVE-13932:

Attachment: HIVE-13932.1.patch

The code should have a null value check. Attaching the fix.

> Hive SMB Map Join with small set of LIMIT failed with NPE
> -
>
> Key: HIVE-13932
> URL: https://issues.apache.org/jira/browse/HIVE-13932
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Yongzhi Chen
>Assignee: Yongzhi Chen
> Attachments: HIVE-13932.1.patch
>
>
> 1) prepare sample data:
> a=1
> while [[ $a -lt 100 ]]; do echo $a ; let a=$a+1; done > data
> 2) prepare source hive table:
> CREATE TABLE `s`(`c` string);
> load data local inpath 'data' into table s;
> 3) prepare the bucketed table:
> set hive.enforce.bucketing=true;
> set hive.enforce.sorting=true;
> CREATE TABLE `t`(`c` string) CLUSTERED BY (c) SORTED BY (c) INTO 5 BUCKETS;
> insert into t select * from s;
> 4) reproduce this issue:
> SET hive.auto.convert.sortmerge.join = true;
> SET hive.auto.convert.sortmerge.join.bigtable.selection.policy = 
> org.apache.hadoop.hive.ql.optimizer.LeftmostBigTableSelectorForAutoSMJ;
> SET hive.auto.convert.sortmerge.join.noconditionaltask = true;
> SET hive.optimize.bucketmapjoin = true;
> SET hive.optimize.bucketmapjoin.sortedmerge = true;
> select * from t join t t1 on t.c=t1.c limit 1;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13836) DbNotifications giving an error = Invalid state. Transaction has already started

2016-06-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313329#comment-15313329
 ] 

Hive QA commented on HIVE-13836:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12807451/HIVE-13836.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 10197 tests 
executed
*Failed tests:*
{noformat}
TestJdbcWithMiniHA - did not produce a TEST-*.xml file
TestJdbcWithMiniMr - did not produce a TEST-*.xml file
TestOperationLoggingAPIWithTez - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
{noformat}

Test results: 
http://ec2-54-177-240-2.us-west-1.compute.amazonaws.com/job/PreCommit-HIVE-MASTER-Build/503/testReport
Console output: 
http://ec2-54-177-240-2.us-west-1.compute.amazonaws.com/job/PreCommit-HIVE-MASTER-Build/503/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-503/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12807451 - PreCommit-HIVE-MASTER-Build

> DbNotifications giving an error = Invalid state. Transaction has already 
> started
> 
>
> Key: HIVE-13836
> URL: https://issues.apache.org/jira/browse/HIVE-13836
> Project: Hive
>  Issue Type: Bug
>Reporter: Nachiket Vaidya
>Assignee: Nachiket Vaidya
>Priority: Critical
>  Labels: patch-available
> Attachments: HIVE-13836.2.patch, HIVE-13836.patch
>
>
> I used the pyhs2 Python client to create tables/partitions in Hive. It was working 
> fine until I moved to multithreaded scripts which created 8 connections and 
> ran DDL queries concurrently.
> I got the error as
> {noformat}
> 2016-05-04 17:49:26,226 ERROR 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-4-thread-194]: 
> HMSHandler Fatal error: Invalid state. Transaction has already started
> org.datanucleus.transaction.NucleusTransactionException: Invalid state. 
> Transaction has already started
> at 
> org.datanucleus.transaction.TransactionManager.begin(TransactionManager.java:47)
> at org.datanucleus.TransactionImpl.begin(TransactionImpl.java:131)
> at 
> org.datanucleus.api.jdo.JDOTransaction.internalBegin(JDOTransaction.java:88)
> at 
> org.datanucleus.api.jdo.JDOTransaction.begin(JDOTransaction.java:80)
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.openTransaction(ObjectStore.java:463)
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.addNotificationEvent(ObjectStore.java:7522)
> at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114)
> at com.sun.proxy.$Proxy10.addNotificationEvent(Unknown Source)
> at 
> org.apache.hive.hcatalog.listener.DbNotificationListener.enqueue(DbNotificationListener.java:261)
> at 
> org.apache.hive.hcatalog.listener.DbNotificationListener.onCreateTable(DbNotificationListener.java:123)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1483)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1502)
> at sun.reflect.GeneratedMethodAccessor57.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:138)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)
> at 
> com.sun.proxy.$Proxy14.create_table_with_environment_context(Unknown Source)
> at 
> 

[jira] [Commented] (HIVE-13911) load inpath fails throwing org.apache.hadoop.security.AccessControlException

2016-06-02 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313294#comment-15313294
 ] 

Ashutosh Chauhan commented on HIVE-13911:
-

+1 pending tests

> load inpath fails throwing org.apache.hadoop.security.AccessControlException
> 
>
> Key: HIVE-13911
> URL: https://issues.apache.org/jira/browse/HIVE-13911
> Project: Hive
>  Issue Type: Bug
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-13911.1.patch, HIVE-13911.2.patch, 
> HIVE-13911.3.patch
>
>
> Similar to HIVE-13857



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13809) Hybrid Grace Hash Join memory usage estimation didn't take into account the bloom filter size

2016-06-02 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-13809:
-
Description: 
Memory estimation is important during hash table loading, because we need to 
make the decision of whether to load the next hash partition in memory or spill 
it. If we assume there's enough memory but it turns out not to be the case, 
we will run into an OOM problem.

Currently, hybrid grace hash join memory usage estimation doesn't take into 
account the bloom filter size. In large test cases (TB scale) the bloom filter 
grows as big as hundreds of MB, big enough to cause estimation errors.

The solution is to include the bloom filter size in the memory estimation.

Another issue this patch will fix is possible NPE due to object cache reuse 
during hybrid grace hash join.

  was:
Memory estimation is important during hash table loading, because we need to 
make the decision of whether to load the next hash partition in memory or spill 
it. If the assumption is there's enough memory but it turns out not the case, 
we will run into OOM problem.

Currently hybrid grace hash join memory usage estimation didn't take into 
account the bloom filter size. In large test cases (TB scale) the bloom filter 
grows as big as hundreds of MB, big enough to cause estimation error.

The solution is to count in the bloom filter size into memory estimation.


> Hybrid Grace Hash Join memory usage estimation didn't take into account the 
> bloom filter size
> -
>
> Key: HIVE-13809
> URL: https://issues.apache.org/jira/browse/HIVE-13809
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.0.0, 2.1.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>
> Memory estimation is important during hash table loading, because we need to 
> make the decision of whether to load the next hash partition in memory or 
> spill it. If we assume there's enough memory but it turns out not to be the 
> case, we will run into an OOM problem.
> Currently, hybrid grace hash join memory usage estimation doesn't take into 
> account the bloom filter size. In large test cases (TB scale) the bloom 
> filter grows as big as hundreds of MB, big enough to cause estimation errors.
> The solution is to include the bloom filter size in the memory estimation.
> Another issue this patch will fix is possible NPE due to object cache reuse 
> during hybrid grace hash join.
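
For intuition on why the bloom filter matters at this scale, the standard sizing
formula can be folded into the estimate. The sketch below is illustrative only:
the surrounding estimator and the hashTableBytes variable are assumptions, and
only the arithmetic is standard:

{code}
// Optimal bloom filter size in bits for n entries at false-positive rate p:
//   m = -n * ln(p) / (ln 2)^2
static long bloomFilterBytes(long expectedEntries, double falsePositiveRate) {
  double bits = -expectedEntries * Math.log(falsePositiveRate) / (Math.log(2) * Math.log(2));
  return (long) Math.ceil(bits / 8.0);
}

// e.g. 1 billion keys at a 5% false-positive rate is roughly 750 MB -- large enough
// to break an estimate that only accounts for the hash table itself.
long estimatedPartitionMemory =
    hashTableBytes + bloomFilterBytes(1_000_000_000L, 0.05);
{code}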



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13913) LLAP: introduce backpressure to recordreader

2016-06-02 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313291#comment-15313291
 ] 

Sergey Shelukhin commented on HIVE-13913:
-

[~gopalv] [~prasanth_j] can you please review?

> LLAP: introduce backpressure to recordreader
> 
>
> Key: HIVE-13913
> URL: https://issues.apache.org/jira/browse/HIVE-13913
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13913.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-13913) LLAP: introduce backpressure to recordreader

2016-06-02 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313289#comment-15313289
 ] 

Sergey Shelukhin edited comment on HIVE-13913 at 6/2/16 11:29 PM:
--

Simple patch. I am removing the pause mechanism since it's hard to propagate to 
reader. Better backpressure is possible, but for that a thread boundary is 
needed between reading and decompression (too much memory usage potential after 
decompression if we use the existing one before decoding to avoid small 
allocations in decoding) and probably an additional check so we don't read too 
much.


was (Author: sershe):
Simple patch. I am removing the pause mechanism since it's hard to propagate to 
recordreader. 
Better backpressure is possible, but for that a thread boundary is needed 
between reading and decompression (too much memory usage potential after 
decompression if we use the existing one before decoding to avoid small 
allocations in decoding) and probably an additional check so we don't read to 
much.

> LLAP: introduce backpressure to recordreader
> 
>
> Key: HIVE-13913
> URL: https://issues.apache.org/jira/browse/HIVE-13913
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13913.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13913) LLAP: introduce backpressure to recordreader

2016-06-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13913:

Status: Patch Available  (was: Open)

> LLAP: introduce backpressure to recordreader
> 
>
> Key: HIVE-13913
> URL: https://issues.apache.org/jira/browse/HIVE-13913
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13913.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13913) LLAP: introduce backpressure to recordreader

2016-06-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13913:

Attachment: HIVE-13913.patch

Simple patch. I am removing the pause mechanism since it's hard to propagate to 
recordreader. 
Better backpressure is possible, but for that a thread boundary is needed 
between reading and decompression (too much memory usage potential after 
decompression if we use the existing one before decoding to avoid small 
allocations in decoding) and probably an additional check so we don't read too 
much.

> LLAP: introduce backpressure to recordreader
> 
>
> Key: HIVE-13913
> URL: https://issues.apache.org/jira/browse/HIVE-13913
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13913.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13931) Add support for HikariCP and replace BoneCP usage with HikariCP

2016-06-02 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313287#comment-15313287
 ] 

Sushanth Sowmyan commented on HIVE-13931:
-

Actually, never mind - looks like HIVE-7496 removed that from further git 
tracking. So the original patch remains as is.

> Add support for HikariCP and replace BoneCP usage with HikariCP
> ---
>
> Key: HIVE-13931
> URL: https://issues.apache.org/jira/browse/HIVE-13931
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-13931.patch
>
>
> Currently, we use BoneCP as our primary connection pooling mechanism 
> (overridable by users). However, BoneCP is no longer being actively 
> developed, and is considered deprecated, replaced by HikariCP.
> Thus, we should add support for HikariCP, and try to replace our primary 
> usage of BoneCP with it.
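
For reference, a minimal sketch of what wiring up HikariCP looks like; the JDBC URL,
credentials, and pool settings below are placeholders, not the attached patch:

{code}
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;

public class HikariSketch {
  public static void main(String[] args) throws Exception {
    HikariConfig config = new HikariConfig();
    config.setJdbcUrl("jdbc:derby:;databaseName=metastore_db;create=true");  // placeholder URL
    config.setUsername("APP");                  // placeholder credentials
    config.setPassword("mine");
    config.setMaximumPoolSize(10);              // comparable to the old dbcp/bonecp defaults
    config.setPoolName("hive-metastore");

    try (HikariDataSource ds = new HikariDataSource(config);
         Connection conn = ds.getConnection()) {
      System.out.println("connected: " + !conn.isClosed());
    }
  }
}
{code}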



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13931) Add support for HikariCP and replace BoneCP usage with HikariCP

2016-06-02 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313286#comment-15313286
 ] 

Sushanth Sowmyan commented on HIVE-13931:
-

But that's a good point - even if that's generated (and I'm unsure what 
generated it), it seems to be checked in. Let me update it.

> Add support for HikariCP and replace BoneCP usage with HikariCP
> ---
>
> Key: HIVE-13931
> URL: https://issues.apache.org/jira/browse/HIVE-13931
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-13931.patch
>
>
> Currently, we use BoneCP as our primary connection pooling mechanism 
> (overridable by users). However, BoneCP is no longer being actively 
> developed, and is considered deprecated, replaced by HikariCP.
> Thus, we should add support for HikariCP, and try to replace our primary 
> usage of BoneCP with it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13540) Casts to numeric types don't seem to work in hplsql

2016-06-02 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313266#comment-15313266
 ] 

Alan Gates commented on HIVE-13540:
---

+1

> Casts to numeric types don't seem to work in hplsql
> ---
>
> Key: HIVE-13540
> URL: https://issues.apache.org/jira/browse/HIVE-13540
> Project: Hive
>  Issue Type: Bug
>  Components: hpl/sql
>Affects Versions: 2.2.0
>Reporter: Carter Shanklin
>Assignee: Dmitry Tolpeko
> Attachments: HIVE-13540.1.patch
>
>
> Maybe I'm doing this wrong? But it seems to be broken.
> Casts to string types seem to work fine, but not numbers.
> This code:
> {code}
> temp_int = CAST('1' AS int);
> print temp_int
> temp_float   = CAST('1.2' AS float);
> print temp_float
> temp_double  = CAST('1.2' AS double);
> print temp_double
> temp_decimal = CAST('1.2' AS decimal(10, 4));
> print temp_decimal
> temp_string = CAST('1.2' AS string);
> print temp_string
> {code}
> Produces this output:
> {code}
> [vagrant@hdp250 hplsql]$ hplsql -f temp2.hplsql
> which: no hbase in 
> (/usr/lib64/qt-3.3/bin:/usr/lib/jvm/java/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/puppetlabs/bin:/usr/local/share/jmeter/bin:/home/vagrant/bin)
> WARNING: Use "yarn jar" to launch YARN applications.
> null
> null
> null
> null
> 1.2
> {code}
> The software I'm using is not anything released but is pretty close to the 
> trunk, 2 weeks old at most.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13931) Add support for HikariCP and replace BoneCP usage with HikariCP

2016-06-02 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313265#comment-15313265
 ] 

Sergey Shelukhin commented on HIVE-13931:
-

Ah, never mind - looks like it's a generated file.

> Add support for HikariCP and replace BoneCP usage with HikariCP
> ---
>
> Key: HIVE-13931
> URL: https://issues.apache.org/jira/browse/HIVE-13931
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-13931.patch
>
>
> Currently, we use BoneCP as our primary connection pooling mechanism 
> (overridable by users). However, BoneCP is no longer being actively 
> developed, and is considered deprecated, replaced by HikariCP.
> Thus, we should add support for HikariCP, and try to replace our primary 
> usage of BoneCP with it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13335) get rid of TxnHandler.TIMED_OUT_TXN_ABORT_BATCH_SIZE

2016-06-02 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-13335:
--
Target Version/s: 1.3.0, 2.2.0

> get rid of TxnHandler.TIMED_OUT_TXN_ABORT_BATCH_SIZE
> 
>
> Key: HIVE-13335
> URL: https://issues.apache.org/jira/browse/HIVE-13335
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.3.0, 2.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>
> Look for usages - it's no longer useful; in fact it may be a perf hit.
> Made obsolete by HIVE-12439.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13931) Add support for HikariCP and replace BoneCP usage with HikariCP

2016-06-02 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313263#comment-15313263
 ] 

Sergey Shelukhin commented on HIVE-13931:
-

Sorry, I meant ./conf/hive-default.xml.template

> Add support for HikariCP and replace BoneCP usage with HikariCP
> ---
>
> Key: HIVE-13931
> URL: https://issues.apache.org/jira/browse/HIVE-13931
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-13931.patch
>
>
> Currently, we use BoneCP as our primary connection pooling mechanism 
> (overridable by users). However, BoneCP is no longer being actively 
> developed, and is considered deprecated, replaced by HikariCP.
> Thus, we should add support for HikariCP, and try to replace our primary 
> usage of BoneCP with it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13933) Add an option to turn off parallel file moves

2016-06-02 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-13933:

Status: Patch Available  (was: Open)

> Add an option to turn off parallel file moves
> -
>
> Key: HIVE-13933
> URL: https://issues.apache.org/jira/browse/HIVE-13933
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 2.1.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-13933.patch
>
>
> Since this is a new feature, it makes sense to have the ability to turn it off.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13933) Add an option to turn off parallel file moves

2016-06-02 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-13933:

Attachment: HIVE-13933.patch

> Add an option to turn off parallel file moves
> -
>
> Key: HIVE-13933
> URL: https://issues.apache.org/jira/browse/HIVE-13933
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 2.1.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-13933.patch
>
>
> Since this is a new feature, it makes sense to have the ability to turn it off.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13931) Add support for HikariCP and replace BoneCP usage with HikariCP

2016-06-02 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313256#comment-15313256
 ] 

Sushanth Sowmyan commented on HIVE-13931:
-

:D

That was done too, in the HiveConf change.

> Add support for HikariCP and replace BoneCP usage with HikariCP
> ---
>
> Key: HIVE-13931
> URL: https://issues.apache.org/jira/browse/HIVE-13931
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-13931.patch
>
>
> Currently, we use BoneCP as our primary connection pooling mechanism 
> (overridable by users). However, BoneCP is no longer being actively 
> developed, and is considered deprecated, replaced by HikariCP.
> Thus, we should add support for HikariCP, and try to replace our primary 
> usage of BoneCP with it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13931) Add support for HikariCP and replace BoneCP usage with HikariCP

2016-06-02 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313250#comment-15313250
 ] 

Sergey Shelukhin commented on HIVE-13931:
-

Oh, right. +1 pending tests. Can we set it as default too?

> Add support for HikariCP and replace BoneCP usage with HikariCP
> ---
>
> Key: HIVE-13931
> URL: https://issues.apache.org/jira/browse/HIVE-13931
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-13931.patch
>
>
> Currently, we use BoneCP as our primary connection pooling mechanism 
> (overridable by users). However, BoneCP is no longer being actively 
> developed, and is considered deprecated, replaced by HikariCP.
> Thus, we should add support for HikariCP, and try to replace our primary 
> usage of BoneCP with it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13842) Expose ability to set number of connections in the pool in TxnHandler

2016-06-02 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-13842:
--
Issue Type: Improvement  (was: Bug)

> Expose ability to set number of connections in the pool in TxnHandler
> -
>
> Key: HIVE-13842
> URL: https://issues.apache.org/jira/browse/HIVE-13842
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>
> Current defaults are hardcoded 8/10 for dbcp/bonecp



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13931) Add support for HikariCP and replace BoneCP usage with HikariCP

2016-06-02 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313248#comment-15313248
 ] 

Sushanth Sowmyan commented on HIVE-13931:
-

The patch adds that, with changes to TxnHandler.java.

> Add support for HikariCP and replace BoneCP usage with HikariCP
> ---
>
> Key: HIVE-13931
> URL: https://issues.apache.org/jira/browse/HIVE-13931
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-13931.patch
>
>
> Currently, we use BoneCP as our primary connection pooling mechanism 
> (overridable by users). However, BoneCP is no longer being actively 
> developed, and is considered deprecated, replaced by HikariCP.
> Thus, we should add support for HikariCP, and try to replace our primary 
> usage of BoneCP with it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-13884) Disallow queries fetching more than a configured number of partitions in PartitionPruner

2016-06-02 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-13884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña reassigned HIVE-13884:
--

Assignee: Sergio Peña  (was: Mohit Sabharwal)

> Disallow queries fetching more than a configured number of partitions in 
> PartitionPruner
> 
>
> Key: HIVE-13884
> URL: https://issues.apache.org/jira/browse/HIVE-13884
> Project: Hive
>  Issue Type: Improvement
>Reporter: Mohit Sabharwal
>Assignee: Sergio Peña
>
> Currently the PartitionPruner requests either all partitions or partitions 
> based on a filter expression. In either scenario, if the number of partitions 
> accessed is large there can be significant memory pressure at the HMS server 
> end.
> We already have a config {{hive.limit.query.max.table.partition}} that 
> enforces limits on number of partitions that may be scanned per operator. But 
> this check happens after the PartitionPruner has already fetched all 
> partitions.
> We should add an option at PartitionPruner level to disallow queries that 
> attempt to access number of partitions beyond a configurable limit.
> Note that {{hive.mapred.mode=strict}} disallows queries without a partition 
> filter in PartitionPruner, but this check accepts any query with a pruning 
> condition, even if partitions fetched are large. In multi-tenant 
> environments, admins could use more control w.r.t. number of partitions 
> allowed based on HMS memory capacity.
> One option is to have PartitionPruner first fetch the partition names 
> (instead of partition specs) and throw an exception if number of partitions 
> exceeds the configured value. Otherwise, fetch the partition specs.
> Looks like the existing {{listPartitionNames}} call could be used if extended 
> to take partition filter expressions like {{getPartitionsByExpr}} call does.
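
A rough sketch of the two-phase idea with today's client API; the property key is
hypothetical, the limit and exception type are placeholders, and the extension to
filter expressions mentioned above does not exist yet:

{code}
// Sketch only: count cheap partition names first, then fetch full partition specs.
static List<Partition> prunedPartitions(IMetaStoreClient msClient, HiveConf conf,
    String dbName, String tblName) throws Exception {
  int maxAllowed = conf.getInt("hive.metastore.limit.partition.request", 10000); // hypothetical key
  List<String> names = msClient.listPartitionNames(dbName, tblName, (short) -1);
  if (names.size() > maxAllowed) {
    throw new SemanticException("Query would fetch " + names.size()
        + " partitions, more than the configured limit of " + maxAllowed);
  }
  return msClient.listPartitions(dbName, tblName, (short) -1);
}
{code}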



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HIVE-13884) Disallow queries fetching more than a configured number of partitions in PartitionPruner

2016-06-02 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-13884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-13884 started by Sergio Peña.
--
> Disallow queries fetching more than a configured number of partitions in 
> PartitionPruner
> 
>
> Key: HIVE-13884
> URL: https://issues.apache.org/jira/browse/HIVE-13884
> Project: Hive
>  Issue Type: Improvement
>Reporter: Mohit Sabharwal
>Assignee: Sergio Peña
>
> Currently the PartitionPruner requests either all partitions or partitions 
> based on a filter expression. In either scenario, if the number of partitions 
> accessed is large there can be significant memory pressure at the HMS server 
> end.
> We already have a config {{hive.limit.query.max.table.partition}} that 
> enforces limits on number of partitions that may be scanned per operator. But 
> this check happens after the PartitionPruner has already fetched all 
> partitions.
> We should add an option at PartitionPruner level to disallow queries that 
> attempt to access number of partitions beyond a configurable limit.
> Note that {{hive.mapred.mode=strict}} disallows queries without a partition 
> filter in PartitionPruner, but this check accepts any query with a pruning 
> condition, even if partitions fetched are large. In multi-tenant 
> environments, admins could use more control w.r.t. number of partitions 
> allowed based on HMS memory capacity.
> One option is to have PartitionPruner first fetch the partition names 
> (instead of partition specs) and throw an exception if number of partitions 
> exceeds the configured value. Otherwise, fetch the partition specs.
> Looks like the existing {{listPartitionNames}} call could be used if extended 
> to take partition filter expressions like {{getPartitionsByExpr}} call does.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13539) HiveHFileOutputFormat searching the wrong directory for HFiles

2016-06-02 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313240#comment-15313240
 ] 

Matt McCline commented on HIVE-13539:
-

The command I used from itests directory was:

{code}
mvn test -Dtest=TestHBaseCliDriver -Dqfile=hive_hfile_output_format.q 
-Dtest.output.overwrite=true -Phadoop-2
{code}


> HiveHFileOutputFormat searching the wrong directory for HFiles
> --
>
> Key: HIVE-13539
> URL: https://issues.apache.org/jira/browse/HIVE-13539
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Affects Versions: 1.1.0
> Environment: Built into CDH 5.4.7
>Reporter: Tim Robertson
>Assignee: Matt McCline
>Priority: Blocker
> Attachments: hive_hfile_output_format.q, 
> hive_hfile_output_format.q.out
>
>
> When creating HFiles for a bulkload in HBase I believe it is looking in the 
> wrong directory to find the HFiles, resulting in the following exception:
> {code}
> Error: java.lang.RuntimeException: Hive Runtime Error while closing 
> operators: java.io.IOException: Multiple family directories found in 
> hdfs://c1n1.gbif.org:8020/user/hive/warehouse/tim.db/coords_hbase/_temporary/2/_temporary
>   at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:295)
>   at 
> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:453)
>   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.io.IOException: Multiple family directories found in 
> hdfs://c1n1.gbif.org:8020/user/hive/warehouse/tim.db/coords_hbase/_temporary/2/_temporary
>   at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.closeWriters(FileSinkOperator.java:188)
>   at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:958)
>   at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:598)
>   at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:610)
>   at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:287)
>   ... 7 more
> Caused by: java.io.IOException: Multiple family directories found in 
> hdfs://c1n1.gbif.org:8020/user/hive/warehouse/tim.db/coords_hbase/_temporary/2/_temporary
>   at 
> org.apache.hadoop.hive.hbase.HiveHFileOutputFormat$1.close(HiveHFileOutputFormat.java:158)
>   at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.closeWriters(FileSinkOperator.java:185)
>   ... 11 more
> {code}
> The issue is that is looks for the HFiles in 
> {{hdfs://c1n1.gbif.org:8020/user/hive/warehouse/tim.db/coords_hbase/_temporary/2/_temporary}}
>  when I believe it should be looking in the task attempt subfolder, such as 
> {{hdfs://c1n1.gbif.org:8020/user/hive/warehouse/tim.db/coords_hbase/_temporary/2/_temporary/attempt_1461004169450_0002_r_00_1000}}.
> This can be reproduced in any HFile creation such as:
> {code:sql}
> CREATE TABLE coords_hbase(id INT, x DOUBLE, y DOUBLE)
> STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
> WITH SERDEPROPERTIES (
>   'hbase.columns.mapping' = ':key,o:x,o:y',
>   'hbase.table.default.storage.type' = 'binary');
> SET hfile.family.path=/tmp/coords_hfiles/o; 
> SET hive.hbase.generatehfiles=true;
> INSERT OVERWRITE TABLE coords_hbase 
> SELECT id, decimalLongitude, decimalLatitude
> FROM source
> CLUSTER BY id; 
> {code}
> Any advice greatly appreciated
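
For what it's worth, a sketch of the directory walk the output format presumably
needs -- descending into the task-attempt subdirectory before looking for a single
column-family directory. The temporaryBase, taskAttemptId, and fs variables are
assumptions based on the error above, not the actual HiveHFileOutputFormat code:

{code}
// Sketch: look for the family directory under the current task attempt's output,
// e.g. .../_temporary/2/_temporary/<task attempt id>/<family>, instead of the
// parent _temporary directory that holds every attempt's output.
Path attemptDir = new Path(temporaryBase, taskAttemptId);   // both values assumed available
FileStatus[] families = fs.listStatus(attemptDir);
if (families.length != 1 || !families[0].isDirectory()) {
  throw new IOException("Expected exactly one family directory in " + attemptDir);
}
Path familyDir = families[0].getPath();
{code}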



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13759) LlapTaskUmbilicalExternalClient should be closed by the record reader

2016-06-02 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313228#comment-15313228
 ] 

Jason Dere commented on HIVE-13759:
---

Regarding LlapBaseInputFormat/LlapRowInputFormat.close(), that was supposed to 
be left to the application, but as you say this is not standard InputFormat 
usage. If you think it makes more sense to simply close the connection in 
getSplits() I can make that change and eliminate the close() call, though this 
means we lose the support for more complex queries.

I will leave the Closeable interface for the umbilical client, since dropping it 
would cause dependency problems: LlapBaseRecordReader does not have 
visibility into the umbilical client class.
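
A minimal sketch of the wiring being discussed -- the record reader owning and
closing the umbilical client through Closeable, so the reader never needs to see
the concrete client class. Names here are illustrative, not the patch itself:

{code}
import java.io.Closeable;
import java.io.IOException;

// Illustrative only: the reader holds the umbilical client as a plain Closeable,
// avoiding a compile-time dependency on LlapTaskUmbilicalExternalClient.
class RecordReaderSketch implements Closeable {
  private final Closeable umbilicalClient;

  RecordReaderSketch(Closeable umbilicalClient) {
    this.umbilicalClient = umbilicalClient;
  }

  @Override
  public void close() throws IOException {
    umbilicalClient.close();   // also releases the server socket the client created
  }
}
{code}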

> LlapTaskUmbilicalExternalClient should be closed by the record reader
> -
>
> Key: HIVE-13759
> URL: https://issues.apache.org/jira/browse/HIVE-13759
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-13759.1.patch
>
>
> The umbilical external client (and the server socket it creates) doesn't look 
> like it's getting closed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13911) load inpath fails throwing org.apache.hadoop.security.AccessControlException

2016-06-02 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-13911:
-
Attachment: HIVE-13911.3.patch

Patch #3 might be better since, based on the claims in patch #2, 
setFullFileStatus() can be avoided entirely when we copy the file.

> load inpath fails throwing org.apache.hadoop.security.AccessControlException
> 
>
> Key: HIVE-13911
> URL: https://issues.apache.org/jira/browse/HIVE-13911
> Project: Hive
>  Issue Type: Bug
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-13911.1.patch, HIVE-13911.2.patch, 
> HIVE-13911.3.patch
>
>
> Similar to HIVE-13857



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13911) load inpath fails throwing org.apache.hadoop.security.AccessControlException

2016-06-02 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-13911:
-
Attachment: HIVE-13911.2.patch

Verified patch #2 on a cluster.

> load inpath fails throwing org.apache.hadoop.security.AccessControlException
> 
>
> Key: HIVE-13911
> URL: https://issues.apache.org/jira/browse/HIVE-13911
> Project: Hive
>  Issue Type: Bug
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-13911.1.patch, HIVE-13911.2.patch
>
>
> Similar to HIVE-13857



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13911) load inpath fails throwing org.apache.hadoop.security.AccessControlException

2016-06-02 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-13911:
-
Attachment: (was: HIVE-13911.2.patch)

> load inpath fails throwing org.apache.hadoop.security.AccessControlException
> 
>
> Key: HIVE-13911
> URL: https://issues.apache.org/jira/browse/HIVE-13911
> Project: Hive
>  Issue Type: Bug
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-13911.1.patch, HIVE-13911.2.patch
>
>
> Similar to HIVE-13857



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13264) JDBC driver makes 2 Open Session Calls for every open session

2016-06-02 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-13264:

Assignee: NITHIN MAHESH

> JDBC driver makes 2 Open Session Calls for every open session
> -
>
> Key: HIVE-13264
> URL: https://issues.apache.org/jira/browse/HIVE-13264
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Reporter: NITHIN MAHESH
>Assignee: NITHIN MAHESH
>  Labels: jdbc
> Attachments: HIVE-13264.1.patch, HIVE-13264.2.patch, 
> HIVE-13264.3.patch, HIVE-13264.4.patch, HIVE-13264.5.patch, 
> HIVE-13264.6.patch, HIVE-13264.6.patch, HIVE-13264.patch
>
>
> When HTTP is used as the transport mode by the Hive JDBC driver, we noticed 
> that there is an additional open/close session just to validate the 
> connection. 
>  
> TCLIService.Iface client = new TCLIService.Client(new TBinaryProtocol(transport));
> TOpenSessionResp openResp = client.OpenSession(new TOpenSessionReq());
> if (openResp != null) {
>   client.CloseSession(new TCloseSessionReq(openResp.getSessionHandle()));
> }
>  
> The open session call is a costly one and should not be used to test 
> transport. 
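
A minimal sketch of the alternative implied here, probing the Thrift transport 
itself rather than paying for an OpenSession/CloseSession round trip; this is 
an assumption about how such a check could look, not the fix adopted in the 
attached patches:

{code}
import org.apache.thrift.transport.TTransport;
import org.apache.thrift.transport.TTransportException;

final class TransportCheckSketch {
  // Returns true if the underlying transport can be opened; no Thrift session
  // (and therefore no HiveServer2 session) is created.
  static boolean canConnect(TTransport transport) {
    try {
      if (!transport.isOpen()) {
        transport.open();
      }
      return true;
    } catch (TTransportException e) {
      return false;
    }
  }
}
{code}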



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13882) When hive.server2.async.exec.async.compile is turned on, from JDBC we will get "The query did not generate a result set"

2016-06-02 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-13882:

Affects Version/s: (was: 2.2.0)
   2.1.0

> When hive.server2.async.exec.async.compile is turned on, from JDBC we will 
> get "The query did not generate a result set" 
> -
>
> Key: HIVE-13882
> URL: https://issues.apache.org/jira/browse/HIVE-13882
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 2.1.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Fix For: 2.1.0
>
> Attachments: HIVE-13882.1.patch, HIVE-13882.2.patch
>
>
>  The following would fail with  "The query did not generate a result set"
> stmt.execute("SET hive.driver.parallel.compilation=true");
> stmt.execute("SET hive.server2.async.exec.async.compile=true");
> ResultSet res =  stmt.executeQuery("SELECT * FROM " + tableName);
> res.next();



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13882) When hive.server2.async.exec.async.compile is turned on, from JDBC we will get "The query did not generate a result set"

2016-06-02 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-13882:

Fix Version/s: (was: 2.2.0)
   2.1.0

> When hive.server2.async.exec.async.compile is turned on, from JDBC we will 
> get "The query did not generate a result set" 
> -
>
> Key: HIVE-13882
> URL: https://issues.apache.org/jira/browse/HIVE-13882
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 2.1.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Fix For: 2.1.0
>
> Attachments: HIVE-13882.1.patch, HIVE-13882.2.patch
>
>
>  The following would fail with  "The query did not generate a result set"
> stmt.execute("SET hive.driver.parallel.compilation=true");
> stmt.execute("SET hive.server2.async.exec.async.compile=true");
> ResultSet res =  stmt.executeQuery("SELECT * FROM " + tableName);
> res.next();



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13882) When hive.server2.async.exec.async.compile is turned on, from JDBC we will get "The query did not generate a result set"

2016-06-02 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313150#comment-15313150
 ] 

Vaibhav Gumashta commented on HIVE-13882:
-

Committed to 2.1.0 branch.

> When hive.server2.async.exec.async.compile is turned on, from JDBC we will 
> get "The query did not generate a result set" 
> -
>
> Key: HIVE-13882
> URL: https://issues.apache.org/jira/browse/HIVE-13882
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 2.1.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Fix For: 2.1.0
>
> Attachments: HIVE-13882.1.patch, HIVE-13882.2.patch
>
>
>  The following would fail with  "The query did not generate a result set"
> stmt.execute("SET hive.driver.parallel.compilation=true");
> stmt.execute("SET hive.server2.async.exec.async.compile=true");
> ResultSet res =  stmt.executeQuery("SELECT * FROM " + tableName);
> res.next();



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13931) Add support for HikariCP and replace BoneCP usage with HikariCP

2016-06-02 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313144#comment-15313144
 ] 

Sergey Shelukhin commented on HIVE-13931:
-

I think the transaction support would need changes too; it sets up its 
connection pooling separately.

> Add support for HikariCP and replace BoneCP usage with HikariCP
> ---
>
> Key: HIVE-13931
> URL: https://issues.apache.org/jira/browse/HIVE-13931
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-13931.patch
>
>
> Currently, we use BoneCP as our primary connection pooling mechanism 
> (overridable by users). However, BoneCP is no longer being actively 
> developed, and is considered deprecated, replaced by HikariCP.
> Thus, we should add support for HikariCP, and try to replace our primary 
> usage of BoneCP with it.
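
For context, a minimal HikariCP setup showing the shape of the replacement 
API; the JDBC URL, credentials, and pool size below are illustrative 
placeholders, not values from the attached patch:

{code}
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class HikariPoolSketch {
  public static HikariDataSource createPool() {
    HikariConfig config = new HikariConfig();
    // Placeholder settings; a real metastore deployment would take these from
    // its configuration (e.g. javax.jdo.option.ConnectionURL).
    config.setJdbcUrl("jdbc:mysql://localhost:3306/metastore");
    config.setUsername("hive");
    config.setPassword("secret");
    config.setMaximumPoolSize(10);
    return new HikariDataSource(config);
  }
}
{code}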



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13490) Change itests to be part of the main Hive build

2016-06-02 Thread Zoltan Haindrich (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313135#comment-15313135
 ] 

Zoltan Haindrich commented on HIVE-13490:
-

I've added some documentation about it to 
https://cwiki.apache.org/confluence/display/Hive/HiveDeveloperFAQ
I'm open to any suggestions ;)

> Change itests to be part of the main Hive build
> ---
>
> Key: HIVE-13490
> URL: https://issues.apache.org/jira/browse/HIVE-13490
> Project: Hive
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Assignee: Zoltan Haindrich
> Fix For: 2.2.0
>
> Attachments: HIVE-13490.01.patch, HIVE-13490.02.patch, 
> HIVE-13490.03.patch
>
>
> Instead of having to build Hive, and then itests separately.
> With IntelliJ, this ends up being loaded as two separate dependencies, and 
> there's a lot of hops involved to make changes.
> Does anyone know why these have been kept separate ?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13887) LazySimpleSerDe should parse "NULL" dates faster

2016-06-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313103#comment-15313103
 ] 

Hive QA commented on HIVE-13887:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12807407/HIVE-13887.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 10197 tests 
executed
*Failed tests:*
{noformat}
TestHWISessionManager - did not produce a TEST-*.xml file
TestJdbcWithMiniHA - did not produce a TEST-*.xml file
TestJdbcWithMiniMr - did not produce a TEST-*.xml file
TestOperationLoggingAPIWithTez - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
{noformat}

Test results: 
http://ec2-54-177-240-2.us-west-1.compute.amazonaws.com/job/PreCommit-HIVE-MASTER-Build/501/testReport
Console output: 
http://ec2-54-177-240-2.us-west-1.compute.amazonaws.com/job/PreCommit-HIVE-MASTER-Build/501/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-501/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12807407 - PreCommit-HIVE-MASTER-Build

> LazySimpleSerDe should parse "NULL" dates faster
> 
>
> Key: HIVE-13887
> URL: https://issues.apache.org/jira/browse/HIVE-13887
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers, Vectorization
>Affects Versions: 2.1.0
>Reporter: Gopal V
>Assignee: Gopal V
>  Labels: Performance
> Attachments: HIVE-13887.1.patch, HIVE-13887.1.patch
>
>
> Date strings which contain "NULL" or "(null)" are being parsed through a very 
> slow codepath that uses exception handling as the normal codepath.
> These are currently ~4x slower than parsing an actual date field.
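
A hedged sketch of the kind of fast path the summary implies: recognize the 
null sentinels up front instead of letting the date parser throw. The sentinel 
strings come from the description above; the method and class names are 
illustrative:

{code}
import java.sql.Date;

final class LazyDateParseSketch {
  // Fast path: treat known null sentinels as null without attempting a parse,
  // avoiding the exception-driven slow path described in the summary.
  static Date parseOrNull(String field) {
    if (field == null || field.isEmpty()
        || "NULL".equalsIgnoreCase(field) || "(null)".equalsIgnoreCase(field)) {
      return null;
    }
    try {
      return Date.valueOf(field);  // expects yyyy-[m]m-[d]d
    } catch (IllegalArgumentException e) {
      return null;  // malformed values still tolerated, but only as a fallback
    }
  }
}
{code}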



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13901) Hivemetastore add partitions can be slow depending on filesystems

2016-06-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313105#comment-15313105
 ] 

Hive QA commented on HIVE-13901:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12807314/HIVE-13901.1.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
http://ec2-54-177-240-2.us-west-1.compute.amazonaws.com/job/PreCommit-HIVE-MASTER-Build/502/testReport
Console output: 
http://ec2-54-177-240-2.us-west-1.compute.amazonaws.com/job/PreCommit-HIVE-MASTER-Build/502/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-502/

Messages:
{noformat}
 This message was trimmed, see log for full details 
[INFO] --- maven-surefire-plugin:2.16:test (default-test) @ hive-service-rpc ---
[INFO] Tests are skipped.
[INFO] 
[INFO] --- maven-jar-plugin:2.2:jar (default-jar) @ hive-service-rpc ---
[INFO] Building jar: 
/data/hive-ptest/working/apache-github-source-source/service-rpc/target/hive-service-rpc-2.2.0-SNAPSHOT.jar
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ 
hive-service-rpc ---
[INFO] 
[INFO] --- maven-jar-plugin:2.2:test-jar (default) @ hive-service-rpc ---
[INFO] Building jar: 
/data/hive-ptest/working/apache-github-source-source/service-rpc/target/hive-service-rpc-2.2.0-SNAPSHOT-tests.jar
[INFO] 
[INFO] --- maven-install-plugin:2.4:install (default-install) @ 
hive-service-rpc ---
[INFO] Installing 
/data/hive-ptest/working/apache-github-source-source/service-rpc/target/hive-service-rpc-2.2.0-SNAPSHOT.jar
 to 
/data/hive-ptest/working/maven/org/apache/hive/hive-service-rpc/2.2.0-SNAPSHOT/hive-service-rpc-2.2.0-SNAPSHOT.jar
[INFO] Installing 
/data/hive-ptest/working/apache-github-source-source/service-rpc/pom.xml to 
/data/hive-ptest/working/maven/org/apache/hive/hive-service-rpc/2.2.0-SNAPSHOT/hive-service-rpc-2.2.0-SNAPSHOT.pom
[INFO] Installing 
/data/hive-ptest/working/apache-github-source-source/service-rpc/target/hive-service-rpc-2.2.0-SNAPSHOT-tests.jar
 to 
/data/hive-ptest/working/maven/org/apache/hive/hive-service-rpc/2.2.0-SNAPSHOT/hive-service-rpc-2.2.0-SNAPSHOT-tests.jar
[INFO] 
[INFO] 
[INFO] Building Hive Serde 2.2.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hive-serde ---
[INFO] Deleting 
/data/hive-ptest/working/apache-github-source-source/serde/target
[INFO] Deleting /data/hive-ptest/working/apache-github-source-source/serde 
(includes = [datanucleus.log, derby.log], excludes = [])
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-no-snapshots) @ 
hive-serde ---
[INFO] 
[INFO] --- build-helper-maven-plugin:1.8:add-source (add-source) @ hive-serde 
---
[INFO] Source directory: 
/data/hive-ptest/working/apache-github-source-source/serde/src/gen/protobuf/gen-java
 added.
[INFO] Source directory: 
/data/hive-ptest/working/apache-github-source-source/serde/src/gen/thrift/gen-javabean
 added.
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hive-serde ---
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hive-serde ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
/data/hive-ptest/working/apache-github-source-source/serde/src/main/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ hive-serde ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hive-serde ---
[INFO] Compiling 414 source files to 
/data/hive-ptest/working/apache-github-source-source/serde/target/classes
[WARNING] 
/data/hive-ptest/working/apache-github-source-source/serde/src/java/org/apache/hadoop/hive/serde2/AbstractSerDe.java:
 Some input files use or override a deprecated API.
[WARNING] 
/data/hive-ptest/working/apache-github-source-source/serde/src/java/org/apache/hadoop/hive/serde2/AbstractSerDe.java:
 Recompile with -Xlint:deprecation for details.
[WARNING] 
/data/hive-ptest/working/apache-github-source-source/serde/src/java/org/apache/hadoop/hive/serde2/lazy/objectinspector/primitive/AbstractPrimitiveLazyObjectInspector.java:
 Some input files use unchecked or unsafe operations.
[WARNING] 
/data/hive-ptest/working/apache-github-source-source/serde/src/java/org/apache/hadoop/hive/serde2/lazy/objectinspector/primitive/AbstractPrimitiveLazyObjectInspector.java:
 Recompile with -Xlint:unchecked for details.
[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hive-serde ---
[INFO] Using 'UTF-8' encoding to copy filtered 

[jira] [Updated] (HIVE-13931) Add support for HikariCP and replace BoneCP usage with HikariCP

2016-06-02 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-13931:

Status: Patch Available  (was: Open)

> Add support for HikariCP and replace BoneCP usage with HikariCP
> ---
>
> Key: HIVE-13931
> URL: https://issues.apache.org/jira/browse/HIVE-13931
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-13931.patch
>
>
> Currently, we use BoneCP as our primary connection pooling mechanism 
> (overridable by users). However, BoneCP is no longer being actively 
> developed, and is considered deprecated, replaced by HikariCP.
> Thus, we should add support for HikariCP, and try to replace our primary 
> usage of BoneCP with it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13931) Add support for HikariCP and replace BoneCP usage with HikariCP

2016-06-02 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-13931:

Attachment: HIVE-13931.patch

Patch attached.

> Add support for HikariCP and replace BoneCP usage with HikariCP
> ---
>
> Key: HIVE-13931
> URL: https://issues.apache.org/jira/browse/HIVE-13931
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-13931.patch
>
>
> Currently, we use BoneCP as our primary connection pooling mechanism 
> (overridable by users). However, BoneCP is no longer being actively 
> developed, and is considered deprecated, replaced by HikariCP.
> Thus, we should add support for HikariCP, and try to replace our primary 
> usage of BoneCP with it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13930) upgrade Hive to latest Hadoop version

2016-06-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13930:

Status: Patch Available  (was: Open)

[~ashutoshc] can you take a look?

> upgrade Hive to latest Hadoop version
> -
>
> Key: HIVE-13930
> URL: https://issues.apache.org/jira/browse/HIVE-13930
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13930.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13930) upgrade Hive to latest Hadoop version

2016-06-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13930:

Attachment: HIVE-13930.patch

Also upgrading the mismatched dependencies (non-plugin ones).

> upgrade Hive to latest Hadoop version
> -
>
> Key: HIVE-13930
> URL: https://issues.apache.org/jira/browse/HIVE-13930
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13930.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-13865) Changing char column of orc table to string/var char drops white space.

2016-06-02 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline resolved HIVE-13865.
-
Resolution: Not A Bug

> Changing char column of orc table to string/var char drops white space. 
> 
>
> Key: HIVE-13865
> URL: https://issues.apache.org/jira/browse/HIVE-13865
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
>Reporter: Takahiko Saito
>Assignee: Matt McCline
>
> Creating a orc table with char(16) column and insert some value with white 
> space followed by characters:
> {noformat}
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> create table test (c char(16)) 
> stored as orc;
> No rows affected (0.1 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> insert into table test values 
> ('horton works ');
> INFO  : Tez session hasn't been created yet. Opening session
> INFO  : Dag name: insert into table test values ('horton ...')(Stage-1)
> INFO  :
> INFO  : Status: Running (Executing on YARN cluster with App id 
> application_1464222003837_0399)
> INFO  : Map 1: -/-
> INFO  : Map 1: 0/1
> INFO  : Map 1: 0/1
> INFO  : Map 1: 0/1
> INFO  : Map 1: 0(+1)/1
> INFO  : Map 1: 0(+1)/1
> INFO  : Map 1: 1/1
> INFO  : Loading data to table default.test from 
> hdfs://os-r6-ifsmes-hiveserver2-11-5.openstacklocal:8020/apps/hive/warehouse/test/.hive-staging_hive_2016-05-26_17-43-07_098_2458959255563595485-1/-ext-1
> INFO  : Table default.test stats: [numFiles=1, numRows=1, totalSize=267, 
> rawDataSize=100]
> No rows affected (25.125 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> select * from test;
> +---+--+
> |  test.c   |
> +---+--+
> | horton works  |
> +---+--+
> 1 row selected (0.077 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> describe test;
> +---++--+--+
> | col_name  | data_type  | comment  |
> +---++--+--+
> | c | char(16)   |  |
> +---++--+--+
> 1 row selected (0.153 seconds)
> {noformat}
> Then after changing the column to string, the white space is lost:
> {noformat}
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> alter table test change column 
> c c string;
> No rows affected (0.155 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> describe test;
> +---++--+--+
> | col_name  | data_type  | comment  |
> +---++--+--+
> | c | string |  |
> +---++--+--+
> 1 row selected (0.115 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> select * from test;
> +---+--+
> |test.c |
> +---+--+
> | horton works  |
> +---+--+
> 1 row selected (0.068 seconds)
> {noformat}
> The issue is not seen with textfile formatted table:
> {noformat}
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> create table test_text (c 
> char(16)) stored as textfile;
> No rows affected (0.091 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> insert into table test_text 
> values ('horton works ');
> INFO  : Session is already open
> INFO  : Dag name: insert into table test_text values ('ho...')(Stage-1)
> INFO  :
> INFO  : Status: Running (Executing on YARN cluster with App id 
> application_1464222003837_0399)
> INFO  : Map 1: 0/1
> INFO  : Map 1: 0(+1)/1
> INFO  : Map 1: 1/1
> INFO  : Loading data to table default.test_text from 
> hdfs://os-r6-ifsmes-hiveserver2-11-5.openstacklocal:8020/apps/hive/warehouse/test_text/.hive-staging_hive_2016-05-26_17-45-29_669_2888061873550824337-1/-ext-1
> INFO  : Table default.test_text stats: [numFiles=1, numRows=1, totalSize=17, 
> rawDataSize=16]
> No rows affected (6.849 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> select * from test_text;
> +---+--+
> |test_text.c|
> +---+--+
> | horton works  |
> +---+--+
> 1 row selected (0.098 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> describe test_text;
> +---++--+--+
> | col_name  | data_type  | comment  |
> +---++--+--+
> | c | char(16)   |  |
> +---++--+--+
> 1 row selected (0.127 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> alter table test_text change 
> column c c string;
> No rows affected (0.145 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> describe test_text;
> +---++--+--+
> | col_name  | data_type  | comment  |
> +---++--+--+
> | c | string |  |
> +---++--+--+
> 1 row selected (0.127 seconds)
> 0: 

[jira] [Commented] (HIVE-13853) Add X-XSRF-Header filter to HS2 HTTP mode and WebHCat

2016-06-02 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313056#comment-15313056
 ] 

Sushanth Sowmyan commented on HIVE-13853:
-

Thanks, Vaibhav!

Yes, I think that would be useful, and it will make usage uniform between 
WebHCat and HS2.



> Add X-XSRF-Header filter to HS2 HTTP mode and WebHCat
> -
>
> Key: HIVE-13853
> URL: https://issues.apache.org/jira/browse/HIVE-13853
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, WebHCat
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-13853.2.patch, HIVE-13853.patch
>
>
> There is a possibility of a CSRF-based attack on various Hadoop components, 
> and thus there is an effort to block all incoming HTTP requests that do not 
> contain an X-XSRF-Header header. (See HADOOP-12691 for motivation.)
> This has the potential to affect HS2 when running in thrift-over-http mode 
> (if cookie-based auth is used), and WebHCat.
> We introduce new flags to determine whether or not the filter is in use; if 
> it is, we automatically reject any HTTP request that does not contain this 
> header.
> To allow this to work, we also need to change our JDBC driver to 
> automatically inject this header into any requests it makes. Also, any 
> client-side program/API not using the JDBC driver directly will need to add 
> an X-XSRF-Header header to its requests in order to call HS2/WebHCat when 
> this filter is enabled.
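
A minimal servlet-filter sketch of the behaviour described above (reject HTTP 
requests that lack the X-XSRF-Header header when the feature is enabled); the 
class name and the bare presence check are assumptions for illustration, not 
the actual Hadoop/Hive filter:

{code}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class XsrfHeaderFilterSketch implements Filter {
  @Override
  public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
      throws IOException, ServletException {
    HttpServletRequest httpReq = (HttpServletRequest) req;
    // Reject any request that does not carry the custom header.
    if (httpReq.getHeader("X-XSRF-Header") == null) {
      ((HttpServletResponse) resp).sendError(HttpServletResponse.SC_BAD_REQUEST,
          "Missing required header: X-XSRF-Header");
      return;
    }
    chain.doFilter(req, resp);
  }

  @Override public void init(FilterConfig filterConfig) { }
  @Override public void destroy() { }
}
{code}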



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-13853) Add X-XSRF-Header filter to HS2 HTTP mode and WebHCat

2016-06-02 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313056#comment-15313056
 ] 

Sushanth Sowmyan edited comment on HIVE-13853 at 6/2/16 9:05 PM:
-

Thanks, Vaibhav!

Yes, I think that would be useful, and it will make usage uniform between 
WebHCat and HS2. I will create that when committing this.



was (Author: sushanth):
Thanks, Vaibhav!

Yes, I think that would be useful, and it will make usage uniform between 
WebHCat and HS2.



> Add X-XSRF-Header filter to HS2 HTTP mode and WebHCat
> -
>
> Key: HIVE-13853
> URL: https://issues.apache.org/jira/browse/HIVE-13853
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, WebHCat
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-13853.2.patch, HIVE-13853.patch
>
>
> There is a possibility of a CSRF-based attack on various Hadoop components, 
> and thus there is an effort to block all incoming HTTP requests that do not 
> contain an X-XSRF-Header header. (See HADOOP-12691 for motivation.)
> This has the potential to affect HS2 when running in thrift-over-http mode 
> (if cookie-based auth is used), and WebHCat.
> We introduce new flags to determine whether or not the filter is in use; if 
> it is, we automatically reject any HTTP request that does not contain this 
> header.
> To allow this to work, we also need to change our JDBC driver to 
> automatically inject this header into any requests it makes. Also, any 
> client-side program/API not using the JDBC driver directly will need to add 
> an X-XSRF-Header header to its requests in order to call HS2/WebHCat when 
> this filter is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13865) Changing char column of orc table to string/var char drops white space.

2016-06-02 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313041#comment-15313041
 ] 

Matt McCline commented on HIVE-13865:
-

You can think of these situations as being like a CAST.

Let's say the old column char1 (type CHAR(50)) had its data type changed to 
STRING or VARCHAR(50). The type change then behaves as if an implicit CAST had 
been added, like:

CAST(char1 as STRING) or CAST(char1 as VARCHAR(50))

I think those casts drop trailing white space.
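
To make the analogy concrete, a small JDBC snippet that issues the equivalent 
explicit casts; the connection URL and table name are hypothetical, the Hive 
JDBC driver is assumed to be on the classpath, and whether trailing white 
space survives is exactly the behaviour described above:

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CharCastSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical HiveServer2 URL and table; char1 is assumed to be CHAR(50).
    try (Connection conn = DriverManager.getConnection(
             "jdbc:hive2://localhost:10000/default", "hive", "");
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery(
             "SELECT CAST(char1 AS STRING), CAST(char1 AS VARCHAR(50)) FROM some_table")) {
      while (rs.next()) {
        // Per the comment above, these casts are expected to drop trailing
        // white space, mirroring what the column type change does.
        System.out.println("[" + rs.getString(1) + "] [" + rs.getString(2) + "]");
      }
    }
  }
}
{code}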

> Changing char column of orc table to string/var char drops white space. 
> 
>
> Key: HIVE-13865
> URL: https://issues.apache.org/jira/browse/HIVE-13865
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
>Reporter: Takahiko Saito
>Assignee: Matt McCline
>
> Creating a orc table with char(16) column and insert some value with white 
> space followed by characters:
> {noformat}
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> create table test (c char(16)) 
> stored as orc;
> No rows affected (0.1 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> insert into table test values 
> ('horton works ');
> INFO  : Tez session hasn't been created yet. Opening session
> INFO  : Dag name: insert into table test values ('horton ...')(Stage-1)
> INFO  :
> INFO  : Status: Running (Executing on YARN cluster with App id 
> application_1464222003837_0399)
> INFO  : Map 1: -/-
> INFO  : Map 1: 0/1
> INFO  : Map 1: 0/1
> INFO  : Map 1: 0/1
> INFO  : Map 1: 0(+1)/1
> INFO  : Map 1: 0(+1)/1
> INFO  : Map 1: 1/1
> INFO  : Loading data to table default.test from 
> hdfs://os-r6-ifsmes-hiveserver2-11-5.openstacklocal:8020/apps/hive/warehouse/test/.hive-staging_hive_2016-05-26_17-43-07_098_2458959255563595485-1/-ext-1
> INFO  : Table default.test stats: [numFiles=1, numRows=1, totalSize=267, 
> rawDataSize=100]
> No rows affected (25.125 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> select * from test;
> +---+--+
> |  test.c   |
> +---+--+
> | horton works  |
> +---+--+
> 1 row selected (0.077 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> describe test;
> +---++--+--+
> | col_name  | data_type  | comment  |
> +---++--+--+
> | c | char(16)   |  |
> +---++--+--+
> 1 row selected (0.153 seconds)
> {noformat}
> Then after changing the column to string, the white space is lost:
> {noformat}
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> alter table test change column 
> c c string;
> No rows affected (0.155 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> describe test;
> +---++--+--+
> | col_name  | data_type  | comment  |
> +---++--+--+
> | c | string |  |
> +---++--+--+
> 1 row selected (0.115 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> select * from test;
> +---+--+
> |test.c |
> +---+--+
> | horton works  |
> +---+--+
> 1 row selected (0.068 seconds)
> {noformat}
> The issue is not seen with textfile formatted table:
> {noformat}
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> create table test_text (c 
> char(16)) stored as textfile;
> No rows affected (0.091 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> insert into table test_text 
> values ('horton works ');
> INFO  : Session is already open
> INFO  : Dag name: insert into table test_text values ('ho...')(Stage-1)
> INFO  :
> INFO  : Status: Running (Executing on YARN cluster with App id 
> application_1464222003837_0399)
> INFO  : Map 1: 0/1
> INFO  : Map 1: 0(+1)/1
> INFO  : Map 1: 1/1
> INFO  : Loading data to table default.test_text from 
> hdfs://os-r6-ifsmes-hiveserver2-11-5.openstacklocal:8020/apps/hive/warehouse/test_text/.hive-staging_hive_2016-05-26_17-45-29_669_2888061873550824337-1/-ext-1
> INFO  : Table default.test_text stats: [numFiles=1, numRows=1, totalSize=17, 
> rawDataSize=16]
> No rows affected (6.849 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> select * from test_text;
> +---+--+
> |test_text.c|
> +---+--+
> | horton works  |
> +---+--+
> 1 row selected (0.098 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> describe test_text;
> +---++--+--+
> | col_name  | data_type  | comment  |
> +---++--+--+
> | c | char(16)   |  |
> +---++--+--+
> 1 row selected (0.127 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> alter table test_text change 
> column c c string;
> 

[jira] [Assigned] (HIVE-13865) Changing char column of orc table to string/var char drops white space.

2016-06-02 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline reassigned HIVE-13865:
---

Assignee: Matt McCline

> Changing char column of orc table to string/var char drops white space. 
> 
>
> Key: HIVE-13865
> URL: https://issues.apache.org/jira/browse/HIVE-13865
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
>Reporter: Takahiko Saito
>Assignee: Matt McCline
>
> Creating a orc table with char(16) column and insert some value with white 
> space followed by characters:
> {noformat}
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> create table test (c char(16)) 
> stored as orc;
> No rows affected (0.1 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> insert into table test values 
> ('horton works ');
> INFO  : Tez session hasn't been created yet. Opening session
> INFO  : Dag name: insert into table test values ('horton ...')(Stage-1)
> INFO  :
> INFO  : Status: Running (Executing on YARN cluster with App id 
> application_1464222003837_0399)
> INFO  : Map 1: -/-
> INFO  : Map 1: 0/1
> INFO  : Map 1: 0/1
> INFO  : Map 1: 0/1
> INFO  : Map 1: 0(+1)/1
> INFO  : Map 1: 0(+1)/1
> INFO  : Map 1: 1/1
> INFO  : Loading data to table default.test from 
> hdfs://os-r6-ifsmes-hiveserver2-11-5.openstacklocal:8020/apps/hive/warehouse/test/.hive-staging_hive_2016-05-26_17-43-07_098_2458959255563595485-1/-ext-1
> INFO  : Table default.test stats: [numFiles=1, numRows=1, totalSize=267, 
> rawDataSize=100]
> No rows affected (25.125 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> select * from test;
> +---+--+
> |  test.c   |
> +---+--+
> | horton works  |
> +---+--+
> 1 row selected (0.077 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> describe test;
> +---++--+--+
> | col_name  | data_type  | comment  |
> +---++--+--+
> | c | char(16)   |  |
> +---++--+--+
> 1 row selected (0.153 seconds)
> {noformat}
> Then after changing the column to string, the white space is lost:
> {noformat}
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> alter table test change column 
> c c string;
> No rows affected (0.155 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> describe test;
> +---++--+--+
> | col_name  | data_type  | comment  |
> +---++--+--+
> | c | string |  |
> +---++--+--+
> 1 row selected (0.115 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> select * from test;
> +---+--+
> |test.c |
> +---+--+
> | horton works  |
> +---+--+
> 1 row selected (0.068 seconds)
> {noformat}
> The issue is not seen with textfile formatted table:
> {noformat}
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> create table test_text (c 
> char(16)) stored as textfile;
> No rows affected (0.091 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> insert into table test_text 
> values ('horton works ');
> INFO  : Session is already open
> INFO  : Dag name: insert into table test_text values ('ho...')(Stage-1)
> INFO  :
> INFO  : Status: Running (Executing on YARN cluster with App id 
> application_1464222003837_0399)
> INFO  : Map 1: 0/1
> INFO  : Map 1: 0(+1)/1
> INFO  : Map 1: 1/1
> INFO  : Loading data to table default.test_text from 
> hdfs://os-r6-ifsmes-hiveserver2-11-5.openstacklocal:8020/apps/hive/warehouse/test_text/.hive-staging_hive_2016-05-26_17-45-29_669_2888061873550824337-1/-ext-1
> INFO  : Table default.test_text stats: [numFiles=1, numRows=1, totalSize=17, 
> rawDataSize=16]
> No rows affected (6.849 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> select * from test_text;
> +---+--+
> |test_text.c|
> +---+--+
> | horton works  |
> +---+--+
> 1 row selected (0.098 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> describe test_text;
> +---++--+--+
> | col_name  | data_type  | comment  |
> +---++--+--+
> | c | char(16)   |  |
> +---++--+--+
> 1 row selected (0.127 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> alter table test_text change 
> column c c string;
> No rows affected (0.145 seconds)
> 0: jdbc:hive2://os-r6-ifsmes-hiveserver2-11-4> describe test_text;
> +---++--+--+
> | col_name  | data_type  | comment  |
> +---++--+--+
> | c | string |  |
> +---++--+--+
> 1 row selected (0.127 seconds)
> 

[jira] [Updated] (HIVE-13443) LLAP: signing for the second state of submit (the event)

2016-06-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13443:

Attachment: (was: HIVE-13443.WIP.nogen.patch)

> LLAP: signing for the second state of submit (the event)
> 
>
> Key: HIVE-13443
> URL: https://issues.apache.org/jira/browse/HIVE-13443
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13443.01.patch, HIVE-13443.02.patch, 
> HIVE-13443.02.wo.13675.nogen.patch, HIVE-13443.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13443) LLAP: signing for the second state of submit (the event)

2016-06-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13443:

Attachment: (was: HIVE-13443.02.wo.13675.patch)

> LLAP: signing for the second state of submit (the event)
> 
>
> Key: HIVE-13443
> URL: https://issues.apache.org/jira/browse/HIVE-13443
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13443.01.patch, HIVE-13443.02.patch, 
> HIVE-13443.02.wo.13675.nogen.patch, HIVE-13443.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13443) LLAP: signing for the second state of submit (the event)

2016-06-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13443:

Attachment: (was: HIVE-13443.wo.13444.13675.nogen.patch)

> LLAP: signing for the second state of submit (the event)
> 
>
> Key: HIVE-13443
> URL: https://issues.apache.org/jira/browse/HIVE-13443
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13443.01.patch, HIVE-13443.02.patch, 
> HIVE-13443.02.wo.13675.nogen.patch, HIVE-13443.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13443) LLAP: signing for the second state of submit (the event)

2016-06-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13443:

Attachment: HIVE-13443.02.wo.13675.nogen.patch

> LLAP: signing for the second state of submit (the event)
> 
>
> Key: HIVE-13443
> URL: https://issues.apache.org/jira/browse/HIVE-13443
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13443.01.patch, HIVE-13443.02.patch, 
> HIVE-13443.02.wo.13675.nogen.patch, HIVE-13443.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13443) LLAP: signing for the second state of submit (the event)

2016-06-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13443:

Attachment: HIVE-13443.02.patch

Rebased on top of HIVE-13675 patch that is the final or almost the final form 
that is going to be committed.

> LLAP: signing for the second state of submit (the event)
> 
>
> Key: HIVE-13443
> URL: https://issues.apache.org/jira/browse/HIVE-13443
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13443.01.patch, HIVE-13443.02.patch, 
> HIVE-13443.02.wo.13675.patch, HIVE-13443.WIP.nogen.patch, HIVE-13443.patch, 
> HIVE-13443.wo.13444.13675.nogen.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13443) LLAP: signing for the second state of submit (the event)

2016-06-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13443:

Attachment: HIVE-13443.02.wo.13675.patch

> LLAP: signing for the second state of submit (the event)
> 
>
> Key: HIVE-13443
> URL: https://issues.apache.org/jira/browse/HIVE-13443
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13443.01.patch, HIVE-13443.02.wo.13675.patch, 
> HIVE-13443.WIP.nogen.patch, HIVE-13443.patch, 
> HIVE-13443.wo.13444.13675.nogen.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13853) Add X-XSRF-Header filter to HS2 HTTP mode and WebHCat

2016-06-02 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313031#comment-15313031
 ] 

Vaibhav Gumashta commented on HIVE-13853:
-

+1.
Should we create a JIRA to investigate why the filtering does not work via 
ServletContextHandler?

> Add X-XSRF-Header filter to HS2 HTTP mode and WebHCat
> -
>
> Key: HIVE-13853
> URL: https://issues.apache.org/jira/browse/HIVE-13853
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, WebHCat
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-13853.2.patch, HIVE-13853.patch
>
>
> There is a possibility of a CSRF-based attack on various Hadoop components, 
> and thus there is an effort to block all incoming HTTP requests that do not 
> contain an X-XSRF-Header header. (See HADOOP-12691 for motivation.)
> This has the potential to affect HS2 when running in thrift-over-http mode 
> (if cookie-based auth is used), and WebHCat.
> We introduce new flags to determine whether or not the filter is in use; if 
> it is, we automatically reject any HTTP request that does not contain this 
> header.
> To allow this to work, we also need to change our JDBC driver to 
> automatically inject this header into any requests it makes. Also, any 
> client-side program/API not using the JDBC driver directly will need to add 
> an X-XSRF-Header header to its requests in order to call HS2/WebHCat when 
> this filter is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

