[jira] [Commented] (HIVE-13958) hive.strict.checks.type.safety should apply to decimals, as well as IN... and BETWEEN... ops

2016-06-08 Thread Takuma Wakamori (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321948#comment-15321948
 ] 

Takuma Wakamori commented on HIVE-13958:


Hi, [~sershe].
Could you please assign this issue to me?
I will try to fix it. Thanks!

> hive.strict.checks.type.safety should apply to decimals, as well as IN... and 
> BETWEEN... ops
> 
>
> Key: HIVE-13958
> URL: https://issues.apache.org/jira/browse/HIVE-13958
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>
> String to decimal auto-casts should be prohibited for compares
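
For illustration, a minimal HiveQL sketch of the comparisons such a check would cover; the table and column names are hypothetical, and the exact scope of the check is an assumption here:

{code}
-- Hypothetical table: dec_col is DECIMAL(10,2).
-- With the strict type-safety check extended to decimals, comparisons that
-- force a string-to-decimal auto-cast (equality, IN, BETWEEN) would be rejected.
SELECT * FROM sample_tbl WHERE dec_col = '0.07';
SELECT * FROM sample_tbl WHERE dec_col IN ('0.07', '0.08');
SELECT * FROM sample_tbl WHERE dec_col BETWEEN '0.05' AND '0.09';

-- An explicit cast would remain allowed:
SELECT * FROM sample_tbl WHERE dec_col = CAST('0.07' AS DECIMAL(10,2));
{code}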





[jira] [Commented] (HIVE-13443) LLAP: signing for the second state of submit (the event)

2016-06-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321872#comment-15321872
 ] 

Hive QA commented on HIVE-13443:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12809051/HIVE-13443.04.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 10223 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/56/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/56/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-56/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12809051 - PreCommit-HIVE-MASTER-Build

> LLAP: signing for the second state of submit (the event)
> 
>
> Key: HIVE-13443
> URL: https://issues.apache.org/jira/browse/HIVE-13443
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13443.01.patch, HIVE-13443.02.patch, 
> HIVE-13443.02.wo.13675.nogen.patch, HIVE-13443.03.patch, HIVE-13443.03.patch, 
> HIVE-13443.03.wo.13675.nogen.patch, HIVE-13443.04.patch, HIVE-13443.patch
>
>






[jira] [Updated] (HIVE-13976) UNION ALL which takes an actual source table on only one side fails

2016-06-08 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HIVE-13976:
--
Description: 
UNION ALL must take an actual source table on both sides or on neither side.

* UNION ALL with an actual table on both sides -> Succeeds as expected
{code}
SELECT 
  1 AS id,
  'Alice' AS name
FROM
  table1
UNION ALL 
SELECT 
  2 AS id,
  'Bob' AS name
FROM
  table2
{code}

* UNION ALL with no actual table on either side -> Succeeds as expected
{code}
SELECT 
  1 AS id,
  'Alice' AS name
UNION ALL 
SELECT 
  2 AS id,
  'Bob' AS name
{code}

* UNION ALL with an actual table on only one side -> Fails
{code}
SELECT 
  1 AS id,
  'Alice' AS name
UNION ALL 
SELECT 
  2 AS id,
  'Bob' AS name
FROM
   some_table
{code}

The error message from the map task in the third case is:
{code}
Diagnostic Messages for this Task:
Error: java.lang.IllegalArgumentException: Can not create a Path from an empty 
string
at org.apache.hadoop.fs.Path.checkPathArg(Path.java:127)
at org.apache.hadoop.fs.Path.<init>(Path.java:135)
at 
org.apache.hadoop.hive.ql.io.HiveInputFormat$HiveInputSplit.getPath(HiveInputFormat.java:116)
at org.apache.hadoop.mapred.MapTask.updateJobWithSplit(MapTask.java:458)
{code}

  was:
UNION ALL must take an actual source table on both sides or on neither side.

UNION ALL with an actual table on both sides -> Succeeds as expected
{code}
SELECT 
  1 AS id,
  'Alice' AS name
FROM
  table1
UNION ALL 
SELECT 
  2 AS id,
  'Bob' AS name
FROM
  table2
{code}

UNION ALL with no actual table on either side -> Succeeds as expected
{code}
SELECT 
  1 AS id,
  'Alice' AS name
UNION ALL 
SELECT 
  2 AS id,
  'Bob' AS name
{code}

UNION ALL with an actual table on only one side -> Fails
{code}
SELECT 
  1 AS id,
  'Alice' AS name
UNION ALL 
SELECT 
  2 AS id,
  'Bob' AS name
FROM
   some_table
{code}

The error message from the map task is:
{code}
Diagnostic Messages for this Task:
Error: java.lang.IllegalArgumentException: Can not create a Path from an empty 
string
at org.apache.hadoop.fs.Path.checkPathArg(Path.java:127)
at org.apache.hadoop.fs.Path.<init>(Path.java:135)
at 
org.apache.hadoop.hive.ql.io.HiveInputFormat$HiveInputSplit.getPath(HiveInputFormat.java:116)
at org.apache.hadoop.mapred.MapTask.updateJobWithSplit(MapTask.java:458)
{code}


> UNION ALL which takes an actual source table on only one side fails
> 
>
> Key: HIVE-13976
> URL: https://issues.apache.org/jira/browse/HIVE-13976
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.0
> Environment: Ubuntu 12.04, JDK 7
>Reporter: Kai Sasaki
>
> UNION ALL must take an actual source table on both sides or on neither side.
> * UNION ALL with an actual table on both sides -> Succeeds as expected
> {code}
> SELECT 
>   1 AS id,
>   'Alice' AS name
> FROM
>   table1
> UNION ALL 
> SELECT 
>   2 AS id,
>   'Bob' AS name
> FROM
>   table2
> {code}
> * UNION ALL with no actual table on either side -> Succeeds as expected
> {code}
> SELECT 
>   1 AS id,
>   'Alice' AS name
> UNION ALL 
> SELECT 
>   2 AS id,
>   'Bob' AS name
> {code}
> * UNION ALL with an actual table on only one side -> Fails
> {code}
> SELECT 
>   1 AS id,
>   'Alice' AS name
> UNION ALL 
> SELECT 
>   2 AS id,
>   'Bob' AS name
> FROM
>some_table
> {code}
> The error message from the map task in the third case is:
> {code}
> Diagnostic Messages for this Task:
> Error: java.lang.IllegalArgumentException: Can not create a Path from an 
> empty string
>   at org.apache.hadoop.fs.Path.checkPathArg(Path.java:127)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:135)
>   at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat$HiveInputSplit.getPath(HiveInputFormat.java:116)
>   at org.apache.hadoop.mapred.MapTask.updateJobWithSplit(MapTask.java:458)
> {code}





[jira] [Updated] (HIVE-13617) LLAP: support non-vectorized execution in IO

2016-06-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13617:

Attachment: HIVE-13617.05.patch

Updated the .out files

> LLAP: support non-vectorized execution in IO
> 
>
> Key: HIVE-13617
> URL: https://issues.apache.org/jira/browse/HIVE-13617
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13617-wo-11417.patch, HIVE-13617-wo-11417.patch, 
> HIVE-13617.01.patch, HIVE-13617.03.patch, HIVE-13617.04.patch, 
> HIVE-13617.05.patch, HIVE-13617.patch, HIVE-13617.patch, 
> HIVE-15396-with-oi.patch
>
>
> Two approaches: a separate decoding path, into rows instead of VRBs; or 
> decoding VRBs into rows at a higher level (the original LlapInputFormat). I 
> think the latter might be better - it's not a hugely important path, and perf 
> in the non-vectorized case is not the best anyway, so it's better to make do with 
> much less new code and architectural disruption. 
> Some ORC patches in progress introduce an easy-to-reuse (or so I hope, 
> anyway) VRB-to-row conversion, so we should just use that.





[jira] [Commented] (HIVE-13540) Casts to numeric types don't seem to work in hplsql

2016-06-08 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321812#comment-15321812
 ] 

Lefty Leverenz commented on HIVE-13540:
---

> I lost my Jira permissions

There's a lot of that going around.  Thanks!

> Casts to numeric types don't seem to work in hplsql
> ---
>
> Key: HIVE-13540
> URL: https://issues.apache.org/jira/browse/HIVE-13540
> Project: Hive
>  Issue Type: Bug
>  Components: hpl/sql
>Affects Versions: 2.2.0
>Reporter: Carter Shanklin
>Assignee: Dmitry Tolpeko
> Fix For: 2.2.0
>
> Attachments: HIVE-13540.1.patch
>
>
> Maybe I'm doing this wrong, but it seems to be broken.
> Casts to string types seem to work fine, but casts to numeric types do not.
> This code:
> {code}
> temp_int = CAST('1' AS int);
> print temp_int
> temp_float   = CAST('1.2' AS float);
> print temp_float
> temp_double  = CAST('1.2' AS double);
> print temp_double
> temp_decimal = CAST('1.2' AS decimal(10, 4));
> print temp_decimal
> temp_string = CAST('1.2' AS string);
> print temp_string
> {code}
> Produces this output:
> {code}
> [vagrant@hdp250 hplsql]$ hplsql -f temp2.hplsql
> which: no hbase in 
> (/usr/lib64/qt-3.3/bin:/usr/lib/jvm/java/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/puppetlabs/bin:/usr/local/share/jmeter/bin:/home/vagrant/bin)
> WARNING: Use "yarn jar" to launch YARN applications.
> null
> null
> null
> null
> 1.2
> {code}
> The software I'm using is not anything released but is pretty close to the 
> trunk, 2 weeks old at most.





[jira] [Comment Edited] (HIVE-13380) Decimal should have lower precedence than double in type hierarchy

2016-06-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321807#comment-15321807
 ] 

Sergey Shelukhin edited comment on HIVE-13380 at 6/9/16 2:40 AM:
-

This leads to unexpected (and arguably wrong) results in some queries (see the 
attached file). 
In the query without decimal casts, the decimal value 0.07 is not included in the 
range between 0.06-0.01 and 0.06+0.01 (it is included in the range between 0.05 
and 0.07).

I think floating point types are an abomination that should never be used in 
data systems unless explicitly called for... I wonder if we should revert this 
before it's released in 2.1. cc [~jcamachorodriguez] just in case.
At the very least we need to make sure that decimal is the default when the 
column is decimal and the non-column operand is double.
Thoughts?


was (Author: sershe):
This leads to unexpected (and arguably wrong) results in some queries (see the 
attached file). 
In the query without decimal casts, 0.07 is not included in the range between 
0.06-0.01 and 0.06+0.01 (it is included in the range between 0.05 and 0.07).

I think floating point types are an abomination that should never be used in 
data systems unless explicitly called for... I wonder if we should revert this 
before it's released in 2.1. cc [~jcamachorodriguez] just in case.
At the very least we need to make sure that decimal is the default when the 
column is decimal and the non-column operand is double.
Thoughts?

> Decimal should have lower precedence than double in type hierarchy
> -
>
> Key: HIVE-13380
> URL: https://issues.apache.org/jira/browse/HIVE-13380
> Project: Hive
>  Issue Type: Bug
>  Components: Types
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-13380.2.patch, HIVE-13380.4.patch, 
> HIVE-13380.5.patch, HIVE-13380.patch, decimal_filter.q
>
>
> Currently it's the other way round. Also, decimal should be lower than float.
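
To make the range example in the comment above concrete, here is a hedged sketch of a filter with the shape described; the table and column names are hypothetical (the actual repro is in the attached decimal_filter.q), and forcing DOUBLE bounds with explicit casts is only meant to show the floating-point effect:

{code}
-- dec_col is DECIMAL(4,2) and holds the value 0.07.
-- If the comparison is resolved in DOUBLE, 0.06 + 0.01 is not exactly 0.07 in
-- binary floating point, so the 0.07 row can be missed.
SELECT * FROM sample_tbl
WHERE dec_col BETWEEN CAST(0.06 AS DOUBLE) - CAST(0.01 AS DOUBLE)
                  AND CAST(0.06 AS DOUBLE) + CAST(0.01 AS DOUBLE);
{code}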





[jira] [Commented] (HIVE-13391) add an option to LLAP to use keytab to authenticate to read data

2016-06-08 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321808#comment-15321808
 ] 

Lefty Leverenz commented on HIVE-13391:
---

Doc note:  This adds *hive.llap.task.principal* and 
*hive.llap.task.keytab.file* to HiveConf.java, so they will need to be 
documented in the LLAP section of Configuration Properties.

* [Configuration Properties -- LLAP | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-LLAP]

Added a TODOC2.2 label.

> add an option to LLAP to use keytab to authenticate to read data
> 
>
> Key: HIVE-13391
> URL: https://issues.apache.org/jira/browse/HIVE-13391
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>  Labels: TODOC2.2
> Fix For: 2.2.0
>
> Attachments: HIVE-13391.01.patch, HIVE-13391.02.patch, 
> HIVE-13391.03.patch, HIVE-13391.04.patch, HIVE-13391.05.patch, 
> HIVE-13391.06.patch, HIVE-13391.07.patch, HIVE-13391.08.patch, 
> HIVE-13391.09.patch, HIVE-13391.10.patch, HIVE-13391.10.patch, 
> HIVE-13391.11.patch, HIVE-13391.patch
>
>
> This can be used in the non-doAs case to allow access for clients that don't 
> propagate HDFS tokens.





[jira] [Commented] (HIVE-13380) Decimal should have lower precedence than double in type hierarchy

2016-06-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321807#comment-15321807
 ] 

Sergey Shelukhin commented on HIVE-13380:
-

This leads to unexpected (and arguably wrong) results in some queries (see the 
attached file). 
In the query without decimal casts, 0.07 is not included in the range between 
0.06-0.01 and 0.06+0.01 (it is included in the range between 0.05 and 0.07).

I think floating point types are an abomination that should never be used in 
data systems unless explicitly called for... I wonder if we should revert this 
before it's released in 2.1. cc [~jcamachorodriguez] just in case.
At the very least we need to make sure that decimal is the default when the 
column is decimal and the non-column operand is double.
Thoughts?

> Decimal should have lower precedence than double in type hierarchy
> -
>
> Key: HIVE-13380
> URL: https://issues.apache.org/jira/browse/HIVE-13380
> Project: Hive
>  Issue Type: Bug
>  Components: Types
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-13380.2.patch, HIVE-13380.4.patch, 
> HIVE-13380.5.patch, HIVE-13380.patch, decimal_filter.q
>
>
> Currently it's the other way round. Also, decimal should be lower than float.





[jira] [Updated] (HIVE-13391) add an option to LLAP to use keytab to authenticate to read data

2016-06-08 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-13391:
--
Labels: TODOC2.2  (was: )

> add an option to LLAP to use keytab to authenticate to read data
> 
>
> Key: HIVE-13391
> URL: https://issues.apache.org/jira/browse/HIVE-13391
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>  Labels: TODOC2.2
> Fix For: 2.2.0
>
> Attachments: HIVE-13391.01.patch, HIVE-13391.02.patch, 
> HIVE-13391.03.patch, HIVE-13391.04.patch, HIVE-13391.05.patch, 
> HIVE-13391.06.patch, HIVE-13391.07.patch, HIVE-13391.08.patch, 
> HIVE-13391.09.patch, HIVE-13391.10.patch, HIVE-13391.10.patch, 
> HIVE-13391.11.patch, HIVE-13391.patch
>
>
> This can be used in the non-doAs case to allow access for clients that don't 
> propagate HDFS tokens.





[jira] [Updated] (HIVE-13380) Decimal should have lower precedence than double in type hierarchy

2016-06-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13380:

Assignee: Ashutosh Chauhan  (was: Sergey Shelukhin)

> Decimal should have lower precedence than double in type hierarchy
> -
>
> Key: HIVE-13380
> URL: https://issues.apache.org/jira/browse/HIVE-13380
> Project: Hive
>  Issue Type: Bug
>  Components: Types
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-13380.2.patch, HIVE-13380.4.patch, 
> HIVE-13380.5.patch, HIVE-13380.patch, decimal_filter.q
>
>
> Currently it's the other way round. Also, decimal should be lower than float.





[jira] [Updated] (HIVE-13380) Decimal should have lower precedence than double in type hierarchy

2016-06-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13380:

Attachment: decimal_filter.q

> Decimal should have lower precedence than double in type hierarchy
> -
>
> Key: HIVE-13380
> URL: https://issues.apache.org/jira/browse/HIVE-13380
> Project: Hive
>  Issue Type: Bug
>  Components: Types
>Reporter: Ashutosh Chauhan
>Assignee: Sergey Shelukhin
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-13380.2.patch, HIVE-13380.4.patch, 
> HIVE-13380.5.patch, HIVE-13380.patch, decimal_filter.q
>
>
> Currently it's the other way round. Also, decimal should be lower than float.





[jira] [Assigned] (HIVE-13380) Decimal should have lower precedence than double in type hierarchy

2016-06-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HIVE-13380:
---

Assignee: Sergey Shelukhin  (was: Ashutosh Chauhan)

> Decimal should have lower precedence than double in type hierarchy
> -
>
> Key: HIVE-13380
> URL: https://issues.apache.org/jira/browse/HIVE-13380
> Project: Hive
>  Issue Type: Bug
>  Components: Types
>Reporter: Ashutosh Chauhan
>Assignee: Sergey Shelukhin
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-13380.2.patch, HIVE-13380.4.patch, 
> HIVE-13380.5.patch, HIVE-13380.patch
>
>
> Currently it's the other way round. Also, decimal should be lower than float.





[jira] [Commented] (HIVE-13248) Change date_add/date_sub/to_date functions to return Date type rather than String

2016-06-08 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321790#comment-15321790
 ] 

Lefty Leverenz commented on HIVE-13248:
---

Changed the TODOC2.2 label to TODOC2.1.

> Change date_add/date_sub/to_date functions to return Date type rather than 
> String
> -
>
> Key: HIVE-13248
> URL: https://issues.apache.org/jira/browse/HIVE-13248
> Project: Hive
>  Issue Type: Improvement
>  Components: UDF
>Affects Versions: 2.0.0, 2.1.0
>Reporter: Jason Dere
>Assignee: Jason Dere
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-13248.1.patch, HIVE-13248.2.patch, 
> HIVE-13248.3.patch, HIVE-13248.4.patch
>
>
> Some of the original "date" related functions return string values rather 
> than Date values, because they were created before the Date type existed in 
> Hive. We can try to change these to return Date in the 2.x line.
> Date values should be implicitly convertible to String.
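
For reference, a hedged sketch of how the return-type change shows up in simple queries; the literal dates are arbitrary:

{code}
-- Before this change, to_date/date_add/date_sub returned STRING; from 2.1.0
-- they return DATE. String contexts keep working because DATE is implicitly
-- convertible to STRING.
SELECT to_date('2016-06-08 01:02:03');             -- DATE 2016-06-08
SELECT date_add('2016-06-08', 1);                  -- DATE 2016-06-09
SELECT date_sub('2016-06-08', 1);                  -- DATE 2016-06-07
SELECT concat('day=', date_add('2016-06-08', 1));  -- implicit DATE-to-STRING cast
{code}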





[jira] [Updated] (HIVE-13248) Change date_add/date_sub/to_date functions to return Date type rather than String

2016-06-08 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-13248:
--
Labels: TODOC2.1  (was: TODOC2.2)

> Change date_add/date_sub/to_date functions to return Date type rather than 
> String
> -
>
> Key: HIVE-13248
> URL: https://issues.apache.org/jira/browse/HIVE-13248
> Project: Hive
>  Issue Type: Improvement
>  Components: UDF
>Affects Versions: 2.0.0, 2.1.0
>Reporter: Jason Dere
>Assignee: Jason Dere
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-13248.1.patch, HIVE-13248.2.patch, 
> HIVE-13248.3.patch, HIVE-13248.4.patch
>
>
> Some of the original "date" related functions return string values rather 
> than Date values, because they were created before the Date type existed in 
> Hive. We can try to change these to return Date in the 2.x line.
> Date values should be implicitly convertible to String.





[jira] [Updated] (HIVE-13264) JDBC driver makes 2 Open Session Calls for every open session

2016-06-08 Thread NITHIN MAHESH (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

NITHIN MAHESH updated HIVE-13264:
-
Attachment: HIVE-13264.9.patch

> JDBC driver makes 2 Open Session Calls for every open session
> -
>
> Key: HIVE-13264
> URL: https://issues.apache.org/jira/browse/HIVE-13264
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Reporter: NITHIN MAHESH
>Assignee: NITHIN MAHESH
>  Labels: jdbc
> Attachments: HIVE-13264.1.patch, HIVE-13264.2.patch, 
> HIVE-13264.3.patch, HIVE-13264.4.patch, HIVE-13264.5.patch, 
> HIVE-13264.6.patch, HIVE-13264.6.patch, HIVE-13264.7.patch, 
> HIVE-13264.8.patch, HIVE-13264.9.patch, HIVE-13264.patch
>
>
> When HTTP is used as the transport mode by the Hive JDBC driver, we noticed 
> that there is an additional open/close session just to validate the 
> connection. 
>  
> TCLIService.Iface client = new TCLIService.Client(new 
> TBinaryProtocol(transport));
>   TOpenSessionResp openResp = client.OpenSession(new TOpenSessionReq());
>   if (openResp != null) {
> client.CloseSession(new 
> TCloseSessionReq(openResp.getSessionHandle()));
>   }
>  
> The open session call is a costly one and should not be used to test 
> transport. 





[jira] [Commented] (HIVE-13968) CombineHiveInputFormat does not honor InputFormat that implements AvoidSplitCombination

2016-06-08 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321768#comment-15321768
 ] 

Rui Li commented on HIVE-13968:
---

Thanks [~prasanna@gmail.com] for adding the test. Just one minor question: 
will the test leave any temp files on local disk? If so, please clean up after 
the test.

> CombineHiveInputFormat does not honor InputFormat that implements 
> AvoidSplitCombination
> ---
>
> Key: HIVE-13968
> URL: https://issues.apache.org/jira/browse/HIVE-13968
> Project: Hive
>  Issue Type: Bug
>Reporter: Prasanna Rajaperumal
>Assignee: Prasanna Rajaperumal
> Attachments: HIVE-13968.1.patch, HIVE-13968.2.patch
>
>
> If I have 100 entries in path[], nonCombinablePaths will contain only 
> paths[0-9] and the rest of the paths will end up in combinablePaths, even if the 
> InputFormat returns false from AvoidSplitCombination.shouldSkipCombine() for 
> all the paths. 





[jira] [Commented] (HIVE-13264) JDBC driver makes 2 Open Session Calls for every open session

2016-06-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321724#comment-15321724
 ] 

Hive QA commented on HIVE-13264:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12808992/HIVE-13264.8.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/54/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/54/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-54/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.8.0_25 ]]
+ export JAVA_HOME=/usr/java/jdk1.8.0_25
+ JAVA_HOME=/usr/java/jdk1.8.0_25
+ export 
PATH=/usr/java/jdk1.8.0_25/bin/:/usr/lib64/qt-3.3/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.8.0_25/bin/:/usr/lib64/qt-3.3/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-MASTER-Build-54/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 7a4fd33 HIVE-13391 : add an option to LLAP to use keytab to 
authenticate to read data (Sergey Shelukhin, reviewed by Siddharth Seth)
+ git clean -f -d
+ git checkout master
Already on 'master'
+ git reset --hard origin/master
HEAD is now at 7a4fd33 HIVE-13391 : add an option to LLAP to use keytab to 
authenticate to read data (Sergey Shelukhin, reviewed by Siddharth Seth)
+ git merge --ff-only origin/master
Already up-to-date.
+ git gc
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
patch:  Only garbage was found in the patch input.
patch:  Only garbage was found in the patch input.
patch:  Only garbage was found in the patch input.
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12808992 - PreCommit-HIVE-MASTER-Build

> JDBC driver makes 2 Open Session Calls for every open session
> -
>
> Key: HIVE-13264
> URL: https://issues.apache.org/jira/browse/HIVE-13264
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Reporter: NITHIN MAHESH
>Assignee: NITHIN MAHESH
>  Labels: jdbc
> Attachments: HIVE-13264.1.patch, HIVE-13264.2.patch, 
> HIVE-13264.3.patch, HIVE-13264.4.patch, HIVE-13264.5.patch, 
> HIVE-13264.6.patch, HIVE-13264.6.patch, HIVE-13264.7.patch, 
> HIVE-13264.8.patch, HIVE-13264.patch
>
>
> When HTTP is used as the transport mode by the Hive JDBC driver, we noticed 
> that there is an additional open/close session just to validate the 
> connection. 
>  
> TCLIService.Iface client = new TCLIService.Client(new 
> TBinaryProtocol(transport));
>   TOpenSessionResp openResp = client.OpenSession(new TOpenSessionReq());
>   if (openResp != null) {
> client.CloseSession(new 
> TCloseSessionReq(openResp.getSessionHandle()));
>   }
>  
> The open session call is a costly one and should not be used to test 
> transport. 





[jira] [Commented] (HIVE-13563) Hive Streaming does not honor orc.compress.size and orc.stripe.size table properties

2016-06-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321720#comment-15321720
 ] 

Hive QA commented on HIVE-13563:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12808978/HIVE-13563.4.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 10210 tests 
executed
*Failed tests:*
{noformat}
TestMiniTezCliDriver-transform_ppr2.q-vector_outer_join0.q-vector_bround.q-and-10-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/53/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/53/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-53/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12808978 - PreCommit-HIVE-MASTER-Build

> Hive Streaming does not honor orc.compress.size and orc.stripe.size table 
> properties
> 
>
> Key: HIVE-13563
> URL: https://issues.apache.org/jira/browse/HIVE-13563
> Project: Hive
>  Issue Type: Bug
>  Components: ORC
>Affects Versions: 2.1.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>  Labels: TODOC2.1
> Attachments: HIVE-13563.1.patch, HIVE-13563.2.patch, 
> HIVE-13563.3.patch, HIVE-13563.4.patch
>
>
> According to the doc:
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ORC#LanguageManualORC-HiveQLSyntax
> One should be able to specify tblproperties for many ORC options.
> But the settings for orc.compress.size and orc.stripe.size don't take effect.
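
As a reference point, a hedged sketch of a streaming-style table that sets the two properties in question; the table layout and the sizes are illustrative only:

{code}
CREATE TABLE streaming_tbl (id INT, msg STRING)
CLUSTERED BY (id) INTO 4 BUCKETS
STORED AS ORC
TBLPROPERTIES (
  'transactional'     = 'true',
  'orc.compress.size' = '8192',      -- compression chunk size in bytes
  'orc.stripe.size'   = '2097152'    -- stripe size in bytes
);
-- The reported bug: rows written through Hive Streaming into such a table do
-- not pick up the orc.compress.size and orc.stripe.size values above.
{code}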





[jira] [Updated] (HIVE-13961) ACID: Major compaction fails to include the original bucket files if there's no delta directory

2016-06-08 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-13961:
-
Attachment: HIVE-13961.5.patch

> ACID: Major compaction fails to include the original bucket files if there's 
> no delta directory
> ---
>
> Key: HIVE-13961
> URL: https://issues.apache.org/jira/browse/HIVE-13961
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.3.0, 2.1.0, 2.2.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>Priority: Blocker
> Attachments: HIVE-13961.1.patch, HIVE-13961.2.patch, 
> HIVE-13961.3.patch, HIVE-13961.4.patch, HIVE-13961.5.patch
>
>
> The issue can be reproduced by the steps below:
> 1. Insert a row into a non-ACID table
> 2. Convert the non-ACID table to an ACID table (i.e. set the transactional=true table property)
> 3. Perform a major compaction
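
A hedged HiveQL sketch of those steps; the table name and layout are hypothetical:

{code}
-- 1. Non-ACID table with an original bucket file
CREATE TABLE t (a INT, b STRING) CLUSTERED BY (a) INTO 2 BUCKETS STORED AS ORC;
INSERT INTO t VALUES (1, 'x');

-- 2. Convert to ACID without writing any delta directories
ALTER TABLE t SET TBLPROPERTIES ('transactional' = 'true');

-- 3. Major compaction; per this report, with no delta directory present the
--    original bucket files are not included in the compacted output
ALTER TABLE t COMPACT 'major';
{code}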





[jira] [Updated] (HIVE-13159) TxnHandler should support datanucleus.connectionPoolingType = None

2016-06-08 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-13159:
--
Status: Open  (was: Patch Available)

> TxnHandler should support datanucleus.connectionPoolingType = None
> --
>
> Key: HIVE-13159
> URL: https://issues.apache.org/jira/browse/HIVE-13159
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Sergey Shelukhin
>Assignee: Alan Gates
> Attachments: HIVE-13159.2.patch, HIVE-13159.patch
>
>
> Right now, one has to choose bonecp or dbcp.





[jira] [Commented] (HIVE-13964) Add a parameter to beeline to allow a properties file to be passed in

2016-06-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321605#comment-15321605
 ] 

Hive QA commented on HIVE-13964:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12808769/HIVE-13964.01.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 10225 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testDelayedLocalityNodeCommErrorImmediateAllocation
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/52/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/52/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-52/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12808769 - PreCommit-HIVE-MASTER-Build

> Add a parameter to beeline to allow a properties file to be passed in
> -
>
> Key: HIVE-13964
> URL: https://issues.apache.org/jira/browse/HIVE-13964
> Project: Hive
>  Issue Type: New Feature
>  Components: Beeline
>Affects Versions: 2.0.1
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
>Priority: Minor
> Fix For: 2.2.0
>
> Attachments: HIVE-13964.01.patch
>
>
> HIVE-6652 removed the ability to pass in a properties file as a beeline 
> parameter. It may be a useful feature to be able to pass the file in as a 
> parameter, such as --property-file.





[jira] [Updated] (HIVE-13567) Auto-gather column stats - phase 2

2016-06-08 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-13567:
---
Status: Patch Available  (was: Open)

> Auto-gather column stats - phase 2
> --
>
> Key: HIVE-13567
> URL: https://issues.apache.org/jira/browse/HIVE-13567
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-13567.01.patch
>
>
> In phase 2, we are going to turn auto-gather column stats on by default. This 
> requires updating the golden files.
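
As an illustration only, and assuming the flag introduced by the phase-1 work is hive.stats.column.autogather (an assumption here, not confirmed in this thread), phase 2 amounts to flipping its default:

{code}
-- Hypothetical session-level equivalent of the proposed new default. Column
-- statistics are then gathered automatically as part of INSERT, which is why
-- the .q.out golden files that print statistics need to be regenerated.
SET hive.stats.column.autogather=true;
INSERT INTO TABLE target_tbl SELECT * FROM source_tbl;
{code}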





[jira] [Updated] (HIVE-13567) Auto-gather column stats - phase 2

2016-06-08 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-13567:
---
Status: Open  (was: Patch Available)

> Auto-gather column stats - phase 2
> --
>
> Key: HIVE-13567
> URL: https://issues.apache.org/jira/browse/HIVE-13567
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-13567.01.patch
>
>
> In phase 2, we are going to turn auto-gather column stats on by default. This 
> requires updating the golden files.





[jira] [Updated] (HIVE-13567) Auto-gather column stats - phase 2

2016-06-08 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-13567:
---
Attachment: HIVE-13567.01.patch

> Auto-gather column stats - phase 2
> --
>
> Key: HIVE-13567
> URL: https://issues.apache.org/jira/browse/HIVE-13567
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-13567.01.patch
>
>
> In phase 2, we are going to turn auto-gather column stats on by default. This 
> requires updating the golden files.





[jira] [Updated] (HIVE-13567) Auto-gather column stats - phase 2

2016-06-08 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-13567:
---
Attachment: (was: HIVE-13567.01.patch)

> Auto-gather column stats - phase 2
> --
>
> Key: HIVE-13567
> URL: https://issues.apache.org/jira/browse/HIVE-13567
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>
> In phase 2, we are going to turn auto-gather column stats on by default. This 
> requires updating the golden files.





[jira] [Updated] (HIVE-12656) Turn hive.compute.query.using.stats on by default

2016-06-08 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-12656:
---
Status: Patch Available  (was: Open)

> Turn hive.compute.query.using.stats on by default
> -
>
> Key: HIVE-12656
> URL: https://issues.apache.org/jira/browse/HIVE-12656
> Project: Hive
>  Issue Type: Bug
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12656.01.patch
>
>
> We now have hive.compute.query.using.stats=false by default. We plan to turn 
> it on by default so that we get better performance. We can also set it 
> to false in some test cases to preserve the original purpose of those tests.
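
A hedged sketch of what the setting changes; the table name is hypothetical:

{code}
SET hive.compute.query.using.stats=true;
-- With the flag on and up-to-date statistics, simple aggregates such as
-- count(*), min() and max() can be answered from metastore statistics instead
-- of launching a job; turning it on by default makes that the normal path.
SELECT count(*) FROM sample_tbl;
{code}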





[jira] [Updated] (HIVE-12656) Turn hive.compute.query.using.stats on by default

2016-06-08 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-12656:
---
Status: Open  (was: Patch Available)

> Turn hive.compute.query.using.stats on by default
> -
>
> Key: HIVE-12656
> URL: https://issues.apache.org/jira/browse/HIVE-12656
> Project: Hive
>  Issue Type: Bug
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12656.01.patch
>
>
> We now have hive.compute.query.using.stats=false by default. We plan to turn 
> it on by default so that we get better performance. We can also set it 
> to false in some test cases to preserve the original purpose of those tests.





[jira] [Updated] (HIVE-12656) Turn hive.compute.query.using.stats on by default

2016-06-08 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-12656:
---
Attachment: (was: HIVE-12656.01.patch)

> Turn hive.compute.query.using.stats on by default
> -
>
> Key: HIVE-12656
> URL: https://issues.apache.org/jira/browse/HIVE-12656
> Project: Hive
>  Issue Type: Bug
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12656.01.patch
>
>
> We now have hive.compute.query.using.stats=false by default. We plan to turn 
> it on by default so that we get better performance. We can also set it 
> to false in some test cases to preserve the original purpose of those tests.





[jira] [Updated] (HIVE-12656) Turn hive.compute.query.using.stats on by default

2016-06-08 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-12656:
---
Attachment: HIVE-12656.01.patch

> Turn hive.compute.query.using.stats on by default
> -
>
> Key: HIVE-12656
> URL: https://issues.apache.org/jira/browse/HIVE-12656
> Project: Hive
>  Issue Type: Bug
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12656.01.patch
>
>
> We now have hive.compute.query.using.stats=false by default. We plan to turn 
> it on by default so that we get better performance. We can also set it 
> to false in some test cases to preserve the original purpose of those tests.





[jira] [Resolved] (HIVE-12791) Truncated table stats should return 0 as datasize

2016-06-08 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong resolved HIVE-12791.

Resolution: Fixed

Resolved after HIVE-12661.

> Truncated table stats should return 0 as datasize
> -
>
> Key: HIVE-12791
> URL: https://issues.apache.org/jira/browse/HIVE-12791
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>
> {code}
> create table s as select * from src;
> truncate table s;
> hive> explain select * from s;
> OK
> STAGE DEPENDENCIES:
>   Stage-0 is a root stage
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: s
>   Statistics: Num rows: 29 Data size: 5812 Basic stats: COMPLETE 
> Column stats: NONE
>   Select Operator
> expressions: key (type: string), value (type: string)
> outputColumnNames: _col0, _col1
> Statistics: Num rows: 29 Data size: 5812 Basic stats: COMPLETE 
> Column stats: NONE
> ListSink
> Time taken: 0.048 seconds, Fetched: 17 row(s)
> {code}
> should be 
> {code}
> Num rows: 1 Data size: 0
> {code}





[jira] [Commented] (HIVE-13723) Executing join query on type Float using Thrift Serde will result in Float cast to Double error

2016-06-08 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321597#comment-15321597
 ] 

Vaibhav Gumashta commented on HIVE-13723:
-

+1

> Executing join query on type Float using Thrift Serde will result in Float 
> cast to Double error
> ---
>
> Key: HIVE-13723
> URL: https://issues.apache.org/jira/browse/HIVE-13723
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, JDBC, Serializers/Deserializers
>Affects Versions: 2.1.0
>Reporter: Ziyang Zhao
>Assignee: Ziyang Zhao
>Priority: Critical
> Attachments: HIVE-13723.1.patch, HIVE-13723.2.patch
>
>
> After enabling the thrift Serde, execute the following queries in beeline:
> >create table test1 (a int);
> >create table test2 (b float);
> >insert into test1 values (1);
> >insert into test2 values (1);
> >select * from test1 join test2 on test1.a=test2.b;
> this will give the error:
> java.lang.Exception: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row {"b":1.0}
> at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462) 
> ~[hadoop-mapreduce-client-common-2.7.1.2.4.0.0-169.jar:?]
> at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522) 
> [hadoop-mapreduce-client-common-2.7.1.2.4.0.0-169.jar:?]
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row {"b":1.0}
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:168) 
> ~[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54) 
> ~[hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar:?]
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453) 
> ~[hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar:?]
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343) 
> ~[hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar:?]
> at 
> org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
>  ~[hadoop-mapreduce-client-common-2.7.1.2.4.0.0-169.jar:?]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[?:1.7.0_95]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[?:1.7.0_95]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[?:1.7.0_95]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  ~[?:1.7.0_95]
> at java.lang.Thread.run(Thread.java:745) ~[?:1.7.0_95]
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing row {"b":1.0}
> at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:568) 
> ~[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:159) 
> ~[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54) 
> ~[hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar:?]
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453) 
> ~[hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar:?]
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343) 
> ~[hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar:?]
> at 
> org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
>  ~[hadoop-mapreduce-client-common-2.7.1.2.4.0.0-169.jar:?]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[?:1.7.0_95]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[?:1.7.0_95]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[?:1.7.0_95]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  ~[?:1.7.0_95]
> at java.lang.Thread.run(Thread.java:745) ~[?:1.7.0_95]
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unexpected 
> exception from MapJoinOperator : 
> org.apache.hadoop.hive.serde2.SerDeException: java.lang.ClassCastException: 
> java.lang.Float cannot be cast to java.lang.Double
> at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.process(MapJoinOperator.java:454)
>  ~[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837) 
> ~[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.exec.FilterOperator.process(FilterOperator.java:126)
>  

[jira] [Updated] (HIVE-13662) Set file permission and ACL in file sink operator

2016-06-08 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-13662:
---
Attachment: HIVE-13662.02.patch

[~ashutoshc], could you please take a look? Thanks.

> Set file permission and ACL in file sink operator
> -
>
> Key: HIVE-13662
> URL: https://issues.apache.org/jira/browse/HIVE-13662
> Project: Hive
>  Issue Type: Bug
>Reporter: Rui Li
>Assignee: Pengcheng Xiong
> Attachments: HIVE-13662.01.patch, HIVE-13662.02.patch
>
>
> As suggested 
> [here|https://issues.apache.org/jira/browse/HIVE-13572?focusedCommentId=15254438=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15254438].





[jira] [Commented] (HIVE-13973) Extend support for other primitive types in windowing expressions

2016-06-08 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321578#comment-15321578
 ] 

Ashutosh Chauhan commented on HIVE-13973:
-

+1

> Extend support for other primitive types in windowing expressions
> -
>
> Key: HIVE-13973
> URL: https://issues.apache.org/jira/browse/HIVE-13973
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-13973.01.patch, HIVE-13973.patch
>
>
> The following windowing query, which uses a boolean column in the partitioning clause,
> {code:sql}
> create table all100k(t tinyint, si smallint, i int,
> b bigint, f float, d double, s string,
> dc decimal(38,18), bo boolean, v varchar(25),
> c char(25), ts timestamp, dt date);
> select  rank() over (partition by i order by bo  nulls first, b nulls last 
> range between unbounded preceding and current row),
> row_number()  over (partition by bo order by si desc, b nulls last range 
> between unbounded preceding and unbounded following) as fv
> from all100k order by fv;
> {code}
> fails with the following error:
> {noformat}
> FAILED: SemanticException Failed to breakup Windowing invocations into 
> Groups. At least 1 group must only depend on input columns. Also check for 
> circular dependencies.
> Underlying error: Primitve type BOOLEAN not supported in Value Boundary 
> expression
> {noformat}





[jira] [Updated] (HIVE-13443) LLAP: signing for the second state of submit (the event)

2016-06-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13443:

Attachment: (was: HIVE-13443.04.patch)

> LLAP: signing for the second state of submit (the event)
> 
>
> Key: HIVE-13443
> URL: https://issues.apache.org/jira/browse/HIVE-13443
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13443.01.patch, HIVE-13443.02.patch, 
> HIVE-13443.02.wo.13675.nogen.patch, HIVE-13443.03.patch, HIVE-13443.03.patch, 
> HIVE-13443.03.wo.13675.nogen.patch, HIVE-13443.04.patch, HIVE-13443.patch
>
>






[jira] [Updated] (HIVE-13443) LLAP: signing for the second state of submit (the event)

2016-06-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13443:

Attachment: HIVE-13443.04.patch

Another rebase; just the HiveQA patch for now.

> LLAP: signing for the second state of submit (the event)
> 
>
> Key: HIVE-13443
> URL: https://issues.apache.org/jira/browse/HIVE-13443
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13443.01.patch, HIVE-13443.02.patch, 
> HIVE-13443.02.wo.13675.nogen.patch, HIVE-13443.03.patch, HIVE-13443.03.patch, 
> HIVE-13443.03.wo.13675.nogen.patch, HIVE-13443.04.patch, HIVE-13443.patch
>
>






[jira] [Commented] (HIVE-13759) LlapTaskUmbilicalExternalClient should be closed by the record reader

2016-06-08 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321543#comment-15321543
 ] 

Siddharth Seth commented on HIVE-13759:
---

+1. Looks good. You may want to add a note somewhere in the code, or file a 
follow-up jira for the complex case that does not work.

> LlapTaskUmbilicalExternalClient should be closed by the record reader
> -
>
> Key: HIVE-13759
> URL: https://issues.apache.org/jira/browse/HIVE-13759
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-13759.1.patch, HIVE-13759.2.patch
>
>
> The umbilical external client (and the server socket it creates) doesn't look 
> like it's getting closed.





[jira] [Updated] (HIVE-13675) LLAP: add HMAC signatures to LLAPIF splits

2016-06-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13675:

Attachment: HIVE-13675.10.patch

Rebased again...

> LLAP: add HMAC signatures to LLAPIF splits
> --
>
> Key: HIVE-13675
> URL: https://issues.apache.org/jira/browse/HIVE-13675
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13675.01.patch, HIVE-13675.02.patch, 
> HIVE-13675.03.patch, HIVE-13675.04.patch, HIVE-13675.05.patch, 
> HIVE-13675.06.patch, HIVE-13675.07.patch, HIVE-13675.08.patch, 
> HIVE-13675.09.patch, HIVE-13675.10.patch, HIVE-13675.WIP.patch, 
> HIVE-13675.wo.13444.patch
>
>






[jira] [Updated] (HIVE-13957) vectorized IN is inconsistent with non-vectorized (at least for decimal in (string))

2016-06-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13957:

Attachment: HIVE-13957.03.patch

The same patch, for HiveQA


> vectorized IN is inconsistent with non-vectorized (at least for decimal in 
> (string))
> 
>
> Key: HIVE-13957
> URL: https://issues.apache.org/jira/browse/HIVE-13957
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13957.01.patch, HIVE-13957.02.patch, 
> HIVE-13957.03.patch, HIVE-13957.patch, HIVE-13957.patch
>
>
> The cast is applied to the column in regular IN, but vectorized IN applies it 
> to the IN() list.
> This can cause queries to produce incorrect results.
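
To illustrate the shape of query affected; the table and column names are hypothetical, with dec_col assumed to be DECIMAL:

{code}
-- Per the description above, the regular (non-vectorized) IN applies the cast
-- to the column, while the vectorized IN applies it to the IN() list, so the
-- two code paths can return different rows for the same data.
SELECT * FROM sample_tbl WHERE dec_col IN ('0.07', '0.08');
{code}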





[jira] [Updated] (HIVE-13391) add an option to LLAP to use keytab to authenticate to read data

2016-06-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13391:

   Resolution: Fixed
Fix Version/s: 2.2.0
   Status: Resolved  (was: Patch Available)

Committed to master

> add an option to LLAP to use keytab to authenticate to read data
> 
>
> Key: HIVE-13391
> URL: https://issues.apache.org/jira/browse/HIVE-13391
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 2.2.0
>
> Attachments: HIVE-13391.01.patch, HIVE-13391.02.patch, 
> HIVE-13391.03.patch, HIVE-13391.04.patch, HIVE-13391.05.patch, 
> HIVE-13391.06.patch, HIVE-13391.07.patch, HIVE-13391.08.patch, 
> HIVE-13391.09.patch, HIVE-13391.10.patch, HIVE-13391.10.patch, 
> HIVE-13391.11.patch, HIVE-13391.patch
>
>
> This can be used in the non-doAs case to allow access for clients that don't 
> propagate HDFS tokens.





[jira] [Resolved] (HIVE-13732) Security for LlapOutputFormatService

2016-06-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin resolved HIVE-13732.
-
Resolution: Duplicate

> Security for LlapOutputFormatService
> 
>
> Key: HIVE-13732
> URL: https://issues.apache.org/jira/browse/HIVE-13732
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Jason Dere
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13913) LLAP: introduce backpressure to recordreader

2016-06-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321496#comment-15321496
 ] 

Sergey Shelukhin commented on HIVE-13913:
-

I'll try it on the cluster eventually.

> LLAP: introduce backpressure to recordreader
> 
>
> Key: HIVE-13913
> URL: https://issues.apache.org/jira/browse/HIVE-13913
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13913.01.patch, HIVE-13913.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13913) LLAP: introduce backpressure to recordreader

2016-06-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13913:

Attachment: HIVE-13913.01.patch

Fixed the issue; also added the correct usage of close(), which was not used at 
all (it probably works because the decoder processes it correctly above).

> LLAP: introduce backpressure to recordreader
> 
>
> Key: HIVE-13913
> URL: https://issues.apache.org/jira/browse/HIVE-13913
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13913.01.patch, HIVE-13913.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13759) LlapTaskUmbilicalExternalClient should be closed by the record reader

2016-06-08 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321445#comment-15321445
 ] 

Jason Dere commented on HIVE-13759:
---

The failures do not look related, and other JIRAs have been created for them.
[~sseth], does this one look OK?

> LlapTaskUmbilicalExternalClient should be closed by the record reader
> -
>
> Key: HIVE-13759
> URL: https://issues.apache.org/jira/browse/HIVE-13759
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-13759.1.patch, HIVE-13759.2.patch
>
>
> The umbilical external client (and the server socket it creates) doesn't look 
> like it's getting closed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13961) ACID: Major compaction fails to include the original bucket files if there's no delta directory

2016-06-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321429#comment-15321429
 ] 

Hive QA commented on HIVE-13961:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12808973/HIVE-13961.4.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 10225 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.ql.TestTxnCommands2.testNonAcidToAcidConversion3
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/51/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/51/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-51/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12808973 - PreCommit-HIVE-MASTER-Build

> ACID: Major compaction fails to include the original bucket files if there's 
> no delta directory
> ---
>
> Key: HIVE-13961
> URL: https://issues.apache.org/jira/browse/HIVE-13961
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.3.0, 2.1.0, 2.2.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>Priority: Blocker
> Attachments: HIVE-13961.1.patch, HIVE-13961.2.patch, 
> HIVE-13961.3.patch, HIVE-13961.4.patch
>
>
> The issue can be reproduced by steps below:
> 1. Insert a row to Non-ACID table
> 2. Convert Non-ACID to ACID table (i.e. set transactional=true table property)
> 3. Perform Major compaction
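
A minimal HiveQL sketch of those steps (illustrative only; the table name is 
invented, and the usual ACID prerequisites, such as the DbTxnManager and a 
bucketed ORC table, are assumed to be configured):

{code:sql}
-- 1. non-ACID ORC table with one original bucket file and no delta directory
create table compact_test (a int) clustered by (a) into 1 buckets stored as orc;
insert into table compact_test values (1);

-- 2. convert it to an ACID table
alter table compact_test set tblproperties ('transactional' = 'true');

-- 3. major compaction should still pick up the original bucket file
alter table compact_test compact 'major';
{code}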



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13972) Resolve class dependency issue introduced by HIVE-13354

2016-06-08 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-13972:
-
   Resolution: Fixed
Fix Version/s: 2.2.0
   2.1.0
   1.3.0
   Status: Resolved  (was: Patch Available)

Committed to master, branch-2.1 and branch-1. Thanks Eugene for the review.

> Resolve class dependency issue introduced by HIVE-13354
> ---
>
> Key: HIVE-13972
> URL: https://issues.apache.org/jira/browse/HIVE-13972
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.3.0, 2.1.0, 2.2.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>Priority: Blocker
> Fix For: 1.3.0, 2.1.0, 2.2.0
>
> Attachments: HIVE-13972.1.patch, HIVE-13972.branch-1.patch
>
>
> HIVE-13354 moved a helper class StringableMap from 
> ql/txn/compactor/CompactorMR.java to metastore/txn/TxnUtils.java
> This introduced a dependency from ql package to metastore package which is 
> not allowed and fails in a real cluster.
> Instead of moving it to metastore, it should be moved to common package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13972) Resolve class dependency issue introduced by HIVE-13354

2016-06-08 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-13972:
-
Attachment: HIVE-13972.branch-1.patch

> Resolve class dependency issue introduced by HIVE-13354
> ---
>
> Key: HIVE-13972
> URL: https://issues.apache.org/jira/browse/HIVE-13972
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.3.0, 2.1.0, 2.2.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>Priority: Blocker
> Attachments: HIVE-13972.1.patch, HIVE-13972.branch-1.patch
>
>
> HIVE-13354 moved a helper class StringableMap from 
> ql/txn/compactor/CompactorMR.java to metastore/txn/TxnUtils.java
> This introduced a dependency from ql package to metastore package which is 
> not allowed and fails in a real cluster.
> Instead of moving it to metastore, it should be moved to common package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13974) ORC Schema Evolution doesn't support add columns to non-last STRUCT columns

2016-06-08 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-13974:

Attachment: HIVE-13974.01.patch

> ORC Schema Evolution doesn't support add columns to non-last STRUCT columns
> ---
>
> Key: HIVE-13974
> URL: https://issues.apache.org/jira/browse/HIVE-13974
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.3.0, 2.1.0, 2.2.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-13974.01.patch
>
>
> Currently, the included columns are based on the fileSchema and not the 
> readerSchema which doesn't work for adding columns to non-last STRUCT data 
> type columns.
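
For illustration (names invented, not from the original report), the kind of 
evolution this refers to is adding a field to a STRUCT column that is not the 
last column of the table:

{code:sql}
-- old schema: the STRUCT column s is followed by another column k
create table evol_test (s struct<a:int, b:int>, k int) stored as orc;
-- ... ORC files are written with this old schema here ...

-- new reader schema: field c is added inside the non-last STRUCT column
alter table evol_test change column s s struct<a:int, b:int, c:int>;

-- reading the old files with the new schema must map the included columns correctly
select s.c, k from evol_test;
{code}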



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13900) HiveStatement.executeAsync() may not work properly when hive.server2.async.exec.async.compile is turned on

2016-06-08 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-13900:

Status: Patch Available  (was: Open)

Patch 1: for the executeAsync() function, we still need the ResultSet status. 
Added the logic to wait for the compilation to finish, but not for the 
execution.

> HiveStatement.executeAsync() may not work properly when 
> hive.server2.async.exec.async.compile is turned on
> --
>
> Key: HIVE-13900
> URL: https://issues.apache.org/jira/browse/HIVE-13900
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 2.2.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-13900.1.patch
>
>
> HIVE-13882 handles HiveStatement.executeQuery() when 
> hive.server2.async.exec.async.compile is turned on. Notice we may also have 
> similar issue when executeAsync() is called. Investigate what would be the 
> good approach for it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13749) Memory leak in Hive Metastore

2016-06-08 Thread Naveen Gangam (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321265#comment-15321265
 ] 

Naveen Gangam commented on HIVE-13749:
--

[~thejas] After disabling the compactor.Initiator and Compactor threads (because 
this customer is not using the fix from HIVE-13151), there appear to be no more 
leaks.
However, there are still about 400 instances of Configuration objects in memory 
(about 80MB of retained objects, 12% in this case): about 11 of them come from 
static initializers in *Writable classes, and the remaining ones are stashed in 
thread locals, one per thread. So HMS has roughly 390 threads, each with one 
instance of Configuration set in its thread locals. These references should be 
re-set when the thread gets re-assigned, but they would be retained until that 
occurs. Would it make sense to do this cleanup sooner? Something like this:
{code}
 try {
   ms.shutdown();
 } finally {
   threadLocalConf.remove();
   threadLocalMS.remove();
 }
{code}
As always, thank you for your input in advance.

> Memory leak in Hive Metastore
> -
>
> Key: HIVE-13749
> URL: https://issues.apache.org/jira/browse/HIVE-13749
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 1.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-13749.patch, Top_Consumers7.html
>
>
> Looking a heap dump of 10GB, a large number of Configuration objects(> 66k 
> instances) are being retained. These objects along with its retained set is 
> occupying about 95% of the heap space. This leads to HMS crashes every few 
> days.
> I will attach an exported snapshot from the eclipse MAT.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13900) HiveStatement.executeAsync() may not work properly when hive.server2.async.exec.async.compile is turned on

2016-06-08 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-13900:

Attachment: HIVE-13900.1.patch

> HiveStatement.executeAsync() may not work properly when 
> hive.server2.async.exec.async.compile is turned on
> --
>
> Key: HIVE-13900
> URL: https://issues.apache.org/jira/browse/HIVE-13900
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 2.2.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-13900.1.patch
>
>
> HIVE-13882 handles HiveStatement.executeQuery() when 
> hive.server2.async.exec.async.compile is turned on. Notice we may also have 
> similar issue when executeAsync() is called. Investigate what would be the 
> good approach for it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-13968) CombineHiveInputFormat does not honor InputFormat that implements AvoidSplitCombination

2016-06-08 Thread Prasanna Rajaperumal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321263#comment-15321263
 ] 

Prasanna Rajaperumal edited comment on HIVE-13968 at 6/8/16 7:13 PM:
-

Sure, [~lirui]. Updated the patch with a test case. The test case fails before 
the fix and passes after it.



was (Author: prasanna@gmail.com):
Patch with test case

> CombineHiveInputFormat does not honor InputFormat that implements 
> AvoidSplitCombination
> ---
>
> Key: HIVE-13968
> URL: https://issues.apache.org/jira/browse/HIVE-13968
> Project: Hive
>  Issue Type: Bug
>Reporter: Prasanna Rajaperumal
>Assignee: Prasanna Rajaperumal
> Attachments: HIVE-13968.1.patch, HIVE-13968.2.patch
>
>
> If I have 100 path[] , the nonCombinablePaths will have only the paths 
> paths[0-9] and the rest of the paths will be in combinablePaths, even if the 
> inputformat returns false for AvoidSplitCombination.shouldSkipCombine() for 
> all the paths. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13968) CombineHiveInputFormat does not honor InputFormat that implements AvoidSplitCombination

2016-06-08 Thread Prasanna Rajaperumal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanna Rajaperumal updated HIVE-13968:

Attachment: HIVE-13968.2.patch

Patch with test case

> CombineHiveInputFormat does not honor InputFormat that implements 
> AvoidSplitCombination
> ---
>
> Key: HIVE-13968
> URL: https://issues.apache.org/jira/browse/HIVE-13968
> Project: Hive
>  Issue Type: Bug
>Reporter: Prasanna Rajaperumal
>Assignee: Prasanna Rajaperumal
> Attachments: HIVE-13968.1.patch, HIVE-13968.2.patch
>
>
> If I have 100 path[] , the nonCombinablePaths will have only the paths 
> paths[0-9] and the rest of the paths will be in combinablePaths, even if the 
> inputformat returns false for AvoidSplitCombination.shouldSkipCombine() for 
> all the paths. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13972) Resolve class dependency issue introduced by HIVE-13354

2016-06-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321221#comment-15321221
 ] 

Hive QA commented on HIVE-13972:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12808858/HIVE-13972.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 11 failed/errored test(s), 10208 tests 
executed
*Failed tests:*
{noformat}
TestMiniTezCliDriver-vectorization_13.q-schema_evol_text_nonvec_mapwork_part_all_primitive.q-bucket3.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_func1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hive.jdbc.TestJdbcWithLocalClusterSpark.testPermFunc
org.apache.hive.jdbc.TestJdbcWithMiniMr.testPermFunc
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/50/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/50/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-50/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 11 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12808858 - PreCommit-HIVE-MASTER-Build

> Resolve class dependency issue introduced by HIVE-13354
> ---
>
> Key: HIVE-13972
> URL: https://issues.apache.org/jira/browse/HIVE-13972
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.3.0, 2.1.0, 2.2.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>Priority: Blocker
> Attachments: HIVE-13972.1.patch
>
>
> HIVE-13354 moved a helper class StringableMap from 
> ql/txn/compactor/CompactorMR.java to metastore/txn/TxnUtils.java
> This introduced a dependency from ql package to metastore package which is 
> not allowed and fails in a real cluster.
> Instead of moving it to metastore, it should be moved to common package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13903) getFunctionInfo is downloading jar on every call

2016-06-08 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321210#comment-15321210
 ] 

Jesus Camacho Rodriguez commented on HIVE-13903:


Reverted the patch and reopened the issue; see HIVE-13962 for further details.

> getFunctionInfo is downloading jar on every call
> 
>
> Key: HIVE-13903
> URL: https://issues.apache.org/jira/browse/HIVE-13903
> Project: Hive
>  Issue Type: Bug
>Reporter: Rajat Khandelwal
>Assignee: Rajat Khandelwal
> Attachments: HIVE-13903.01.patch
>
>
> on queries using permanent udfs, the jar file of the udf is downloaded 
> multiple times. Each call originating from Registry.getFunctionInfo. This 
> increases time for the query, especially if that query is just an explain 
> query. The jar should be downloaded once, and not downloaded again if the udf 
> class is accessible in the current thread. 
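
For context, an illustrative HiveQL sketch of the scenario (the database, 
function, class, jar path, and the src table are all assumptions, not taken from 
the issue): the jar backing a permanent UDF should be localized once per 
session, not on every function lookup.

{code:sql}
create function mydb.my_lower as 'com.example.udf.MyLower'
  using jar 'hdfs:///tmp/udfs/my-udfs.jar';

-- even an explain-only query currently ends up re-downloading the jar
explain select mydb.my_lower(key) from src;
{code}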



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13903) getFunctionInfo is downloading jar on every call

2016-06-08 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-13903:
---
Fix Version/s: (was: 2.1.0)

> getFunctionInfo is downloading jar on every call
> 
>
> Key: HIVE-13903
> URL: https://issues.apache.org/jira/browse/HIVE-13903
> Project: Hive
>  Issue Type: Bug
>Reporter: Rajat Khandelwal
>Assignee: Rajat Khandelwal
> Attachments: HIVE-13903.01.patch
>
>
> on queries using permanent udfs, the jar file of the udf is downloaded 
> multiple times. Each call originating from Registry.getFunctionInfo. This 
> increases time for the query, especially if that query is just an explain 
> query. The jar should be downloaded once, and not downloaded again if the udf 
> class is accessible in the current thread. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-13962) create_func1 test fails with NPE

2016-06-08 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez resolved HIVE-13962.

Resolution: Invalid
  Assignee: (was: Rajat Khandelwal)

I am going to close this one as Invalid, and revert and reopen HIVE-13903, as 
the regression needs to be further studied before that code can go in. Thanks

> create_func1 test fails with NPE
> 
>
> Key: HIVE-13962
> URL: https://issues.apache.org/jira/browse/HIVE-13962
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>
> {noformat}
> 2016-06-07T11:19:50,843 ERROR [82a55b04-c058-475d-8ba4-d1a3007eb213 main[]]: 
> ql.Driver (SessionState.java:printError(1055)) - FAILED: NullPointerException 
> null
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc.newInstance(ExprNodeGenericFuncDesc.java:236)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.getXpathOrFuncExprNodeDesc(TypeCheckProcFactory.java:1072)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:1317)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:89)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:158)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:120)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:219)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:163)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genAllExprNodeDesc(SemanticAnalyzer.java:11182)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:11137)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genSelectLogicalPlan(CalcitePlanner.java:2996)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genLogicalPlan(CalcitePlanner.java:3158)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:939)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:893)
>   at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:113)
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:969)
>   at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:149)
>   at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:106)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:712)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:280)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10755)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:239)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:250)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:437)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:329)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1158)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1253)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1084)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1072)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335)
>   at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1143)
>   at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:1117)
>   at 
> org.apache.hadoop.hive.cli.TestCliDriver.runTest(TestCliDriver.java:120)
>   at 
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_func1(TestCliDriver.java:103)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at 

[jira] [Reopened] (HIVE-13903) getFunctionInfo is downloading jar on every call

2016-06-08 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez reopened HIVE-13903:


> getFunctionInfo is downloading jar on every call
> 
>
> Key: HIVE-13903
> URL: https://issues.apache.org/jira/browse/HIVE-13903
> Project: Hive
>  Issue Type: Bug
>Reporter: Rajat Khandelwal
>Assignee: Rajat Khandelwal
> Attachments: HIVE-13903.01.patch
>
>
> on queries using permanent udfs, the jar file of the udf is downloaded 
> multiple times. Each call originating from Registry.getFunctionInfo. This 
> increases time for the query, especially if that query is just an explain 
> query. The jar should be downloaded once, and not downloaded again if the udf 
> class is accessible in the current thread. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13443) LLAP: signing for the second state of submit (the event)

2016-06-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13443:

Attachment: HIVE-13443.04.patch

rinse, repeat


> LLAP: signing for the second state of submit (the event)
> 
>
> Key: HIVE-13443
> URL: https://issues.apache.org/jira/browse/HIVE-13443
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13443.01.patch, HIVE-13443.02.patch, 
> HIVE-13443.02.wo.13675.nogen.patch, HIVE-13443.03.patch, HIVE-13443.03.patch, 
> HIVE-13443.03.wo.13675.nogen.patch, HIVE-13443.04.patch, HIVE-13443.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13675) LLAP: add HMAC signatures to LLAPIF splits

2016-06-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13675:

Attachment: HIVE-13675.09.patch

rinse, repeat


> LLAP: add HMAC signatures to LLAPIF splits
> --
>
> Key: HIVE-13675
> URL: https://issues.apache.org/jira/browse/HIVE-13675
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13675.01.patch, HIVE-13675.02.patch, 
> HIVE-13675.03.patch, HIVE-13675.04.patch, HIVE-13675.05.patch, 
> HIVE-13675.06.patch, HIVE-13675.07.patch, HIVE-13675.08.patch, 
> HIVE-13675.09.patch, HIVE-13675.WIP.patch, HIVE-13675.wo.13444.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13973) Extend support for other primitive types in windowing expressions

2016-06-08 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-13973:
---
Attachment: HIVE-13973.01.patch

New patch supporting additional types and adding test cases.

> Extend support for other primitive types in windowing expressions
> -
>
> Key: HIVE-13973
> URL: https://issues.apache.org/jira/browse/HIVE-13973
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-13973.01.patch, HIVE-13973.patch
>
>
> Following windowing query using boolean column in partitioning clause
> {code:sql}
> create table all100k(t tinyint, si smallint, i int,
> b bigint, f float, d double, s string,
> dc decimal(38,18), bo boolean, v varchar(25),
> c char(25), ts timestamp, dt date);
> select  rank() over (partition by i order by bo  nulls first, b nulls last 
> range between unbounded preceding and current row),
> row_number()  over (partition by bo order by si desc, b nulls last range 
> between unbounded preceding and unbounded following) as fv
> from all100k order by fv;
> {code}
> fails with the following error:
> {noformat}
> FAILED: SemanticException Failed to breakup Windowing invocations into 
> Groups. At least 1 group must only depend on input columns. Also check for 
> circular dependencies.
> Underlying error: Primitve type BOOLEAN not supported in Value Boundary 
> expression
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13973) Extend support for other primitive types in windowing expressions

2016-06-08 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-13973:
---
Status: Patch Available  (was: In Progress)

> Extend support for other primitive types in windowing expressions
> -
>
> Key: HIVE-13973
> URL: https://issues.apache.org/jira/browse/HIVE-13973
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-13973.patch
>
>
> Following windowing query using boolean column in partitioning clause
> {code:sql}
> create table all100k(t tinyint, si smallint, i int,
> b bigint, f float, d double, s string,
> dc decimal(38,18), bo boolean, v varchar(25),
> c char(25), ts timestamp, dt date);
> select  rank() over (partition by i order by bo  nulls first, b nulls last 
> range between unbounded preceding and current row),
> row_number()  over (partition by bo order by si desc, b nulls last range 
> between unbounded preceding and unbounded following) as fv
> from all100k order by fv;
> {code}
> fails with the following error:
> {noformat}
> FAILED: SemanticException Failed to breakup Windowing invocations into 
> Groups. At least 1 group must only depend on input columns. Also check for 
> circular dependencies.
> Underlying error: Primitve type BOOLEAN not supported in Value Boundary 
> expression
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13973) Extend support for other primitive types in windowing expressions

2016-06-08 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-13973:
---
Status: Open  (was: Patch Available)

> Extend support for other primitive types in windowing expressions
> -
>
> Key: HIVE-13973
> URL: https://issues.apache.org/jira/browse/HIVE-13973
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-13973.patch
>
>
> Following windowing query using boolean column in partitioning clause
> {code:sql}
> create table all100k(t tinyint, si smallint, i int,
> b bigint, f float, d double, s string,
> dc decimal(38,18), bo boolean, v varchar(25),
> c char(25), ts timestamp, dt date);
> select  rank() over (partition by i order by bo  nulls first, b nulls last 
> range between unbounded preceding and current row),
> row_number()  over (partition by bo order by si desc, b nulls last range 
> between unbounded preceding and unbounded following) as fv
> from all100k order by fv;
> {code}
> fails with the following error:
> {noformat}
> FAILED: SemanticException Failed to breakup Windowing invocations into 
> Groups. At least 1 group must only depend on input columns. Also check for 
> circular dependencies.
> Underlying error: Primitve type BOOLEAN not supported in Value Boundary 
> expression
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HIVE-13973) Extend support for other primitive types in windowing expressions

2016-06-08 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-13973 started by Jesus Camacho Rodriguez.
--
> Extend support for other primitive types in windowing expressions
> -
>
> Key: HIVE-13973
> URL: https://issues.apache.org/jira/browse/HIVE-13973
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-13973.patch
>
>
> Following windowing query using boolean column in partitioning clause
> {code:sql}
> create table all100k(t tinyint, si smallint, i int,
> b bigint, f float, d double, s string,
> dc decimal(38,18), bo boolean, v varchar(25),
> c char(25), ts timestamp, dt date);
> select  rank() over (partition by i order by bo  nulls first, b nulls last 
> range between unbounded preceding and current row),
> row_number()  over (partition by bo order by si desc, b nulls last range 
> between unbounded preceding and unbounded following) as fv
> from all100k order by fv;
> {code}
> fails with the following error:
> {noformat}
> FAILED: SemanticException Failed to breakup Windowing invocations into 
> Groups. At least 1 group must only depend on input columns. Also check for 
> circular dependencies.
> Underlying error: Primitve type BOOLEAN not supported in Value Boundary 
> expression
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-13967) CREATE table fails when 'values' column name is found on the table spec.

2016-06-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-13967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña resolved HIVE-13967.

Resolution: Not A Problem

Closing the ticket, as this is not a problem: 'values' is one of the SQL11 
reserved keywords.
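
For reference, a sketch of the usual workarounds (not part of the ticket; pkv2 
is an invented name): quote the reserved word with backticks or, depending on 
the Hive version, relax the SQL11 reserved-keyword check.

{code:sql}
-- quote the reserved identifier
create table pkv (key int, `values` string);

-- or, where the property is still supported, allow reserved words as identifiers
set hive.support.sql11.reserved.keywords=false;
create table pkv2 (key int, values string);
{code}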

> CREATE table fails when 'values' column name is found on the table spec.
> 
>
> Key: HIVE-13967
> URL: https://issues.apache.org/jira/browse/HIVE-13967
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Sergio Peña
>Assignee: Abdullah Yousufi
>
> {noformat}
> hive> create table pkv (key int, values string);  
>   
> [0/4271]
> FailedPredicateException(identifier,{useSQL11ReservedKeywordsForIdentifier()}?)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.identifier(HiveParser_IdentifiersParser.java:11914)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.identifier(HiveParser.java:51795)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.columnNameType(HiveParser.java:42051)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.columnNameTypeOrPKOrFK(HiveParser.java:42308)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.columnNameTypeOrPKOrFKList(HiveParser.java:37966)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.createTableStatement(HiveParser.java:5259)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.ddlStatement(HiveParser.java:2763)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1756)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1178)
> at 
> org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:204)
> at 
> org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:404)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:329)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1158)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1253)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1084)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1072)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:776)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:714)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:641)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> FAILED: ParseException line 1:27 Failed to recognize predicate 'values'. 
> Failed rule: 'identifier' in column specification
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13966) DbNotificationListener: can loose DDL operation notifications

2016-06-08 Thread Nachiket Vaidya (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321083#comment-15321083
 ] 

Nachiket Vaidya commented on HIVE-13966:


+ When the operation fails, should we continue adding an entry to the 
notification log or not?
>> This is fine, as it is a false positive and one can skip further operations 
>> if the given entity is not present.

+ When the operation succeeds but writing to the notification log fails, should 
we just display a warning message, or roll back the operation?
>> This can lose the information that something changed in the metadata. 
>> Ideally we should roll back the operation, but that can be expensive. A 
>> simple fix is to add to the notification log before the operation, but that 
>> would then apply to all metastore listeners. I do not have a good 
>> solution/suggestion for this.
Actually, we will get an error when the operation is successful but adding to 
the notification log fails. But if the application chooses to ignore errors, 
this inconsistency can create problems.
What is the contract for listeners? Do they not run in the same transaction as 
the operation, and do they just notify that the operation was executed, 
irrespective of its result?

> DbNotificationListener: can loose DDL operation notifications
> -
>
> Key: HIVE-13966
> URL: https://issues.apache.org/jira/browse/HIVE-13966
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Reporter: Nachiket Vaidya
>Priority: Critical
>
> The code for each API in HiveMetaStore.java is like this:
> 1. openTransaction()
> 2. -- operation--
> 3. commit() or rollback() based on result of the operation.
> 4. add entry to notification log (unconditionally)
> If the operation is failed (in step 2), we still add entry to notification 
> log. Found this issue in testing.
> It is still ok as this is the case of false positive.
> If the operation is successful and adding to notification log failed, the 
> user will get an MetaException. It will not rollback the operation, as it is 
> already committed. We need to handle this case so that we will not have false 
> negatives.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13662) Set file permission and ACL in file sink operator

2016-06-08 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321043#comment-15321043
 ] 

Pengcheng Xiong commented on HIVE-13662:


More info here
{code}
FileStatus{path=hdfs://localhost:63064/base/warehouse/singledynamicpart/.hive-staging_hive_2016-06-07_23-09-19_277_7503414655019469213-1/_task_tmp.-ext-10002/part1=1/_tmp.00_0;
 isDirectory=false; length=0; replication=3; blocksize=134217728; 
modification_time=1465386667125; access_time=1465366204678; owner=pxiong; 
group=supergroup; permission=rwxr-xr-x; isSymlink=false}
{code}



> Set file permission and ACL in file sink operator
> -
>
> Key: HIVE-13662
> URL: https://issues.apache.org/jira/browse/HIVE-13662
> Project: Hive
>  Issue Type: Bug
>Reporter: Rui Li
>Assignee: Pengcheng Xiong
> Attachments: HIVE-13662.01.patch
>
>
> As suggested 
> [here|https://issues.apache.org/jira/browse/HIVE-13572?focusedCommentId=15254438=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15254438].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-13962) create_func1 test fails with NPE

2016-06-08 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321021#comment-15321021
 ] 

Jesus Camacho Rodriguez edited comment on HIVE-13962 at 6/8/16 5:38 PM:


Apparently this was broken by HIVE-13903. To reproduce, taken from 
{{create_func1.q}}:

{noformat}
describe function extended qtest_get_java_boolean;
{noformat}

After HIVE-13903 went in:

{noformat}
PREHOOK: query: describe function extended qtest_get_java_boolean
PREHOOK: type: DESCFUNCTION
POSTHOOK: query: describe function extended qtest_get_java_boolean
POSTHOOK: type: DESCFUNCTION
Function 'qtest_get_java_boolean' does not exist.
{noformat}

Apparently the UDF does not appear as registered, but it should. Before 
HIVE-13903 went in:

{noformat}
PREHOOK: query: describe function extended qtest_get_java_boolean
PREHOOK: type: DESCFUNCTION
POSTHOOK: query: describe function extended qtest_get_java_boolean
POSTHOOK: type: DESCFUNCTION
qtest_get_java_boolean(str) - GenericUDF to return native Java's boolean type
Synonyms: default.qtest_get_java_boolean
{noformat}

[~prongs], could you take a look?


was (Author: jcamachorodriguez):
Apparently this was broken by HIVE-13903. Taken from {{create_func1.q}}:

{noformat}
PREHOOK: query: describe function extended qtest_get_java_boolean
PREHOOK: type: DESCFUNCTION
POSTHOOK: query: describe function extended qtest_get_java_boolean
POSTHOOK: type: DESCFUNCTION
Function 'qtest_get_java_boolean' does not exist.
{noformat}

Apparently the UDF does not appear as registered, but it should. Before 
HIVE-13903 went in:

{noformat}
PREHOOK: query: describe function extended qtest_get_java_boolean
PREHOOK: type: DESCFUNCTION
POSTHOOK: query: describe function extended qtest_get_java_boolean
POSTHOOK: type: DESCFUNCTION
qtest_get_java_boolean(str) - GenericUDF to return native Java's boolean type
Synonyms: default.qtest_get_java_boolean
{noformat}

[~prongs], could you take a look?

> create_func1 test fails with NPE
> 
>
> Key: HIVE-13962
> URL: https://issues.apache.org/jira/browse/HIVE-13962
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>
> {noformat}
> 2016-06-07T11:19:50,843 ERROR [82a55b04-c058-475d-8ba4-d1a3007eb213 main[]]: 
> ql.Driver (SessionState.java:printError(1055)) - FAILED: NullPointerException 
> null
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc.newInstance(ExprNodeGenericFuncDesc.java:236)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.getXpathOrFuncExprNodeDesc(TypeCheckProcFactory.java:1072)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:1317)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:89)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:158)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:120)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:219)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:163)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genAllExprNodeDesc(SemanticAnalyzer.java:11182)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:11137)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genSelectLogicalPlan(CalcitePlanner.java:2996)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genLogicalPlan(CalcitePlanner.java:3158)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:939)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:893)
>   at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:113)
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:969)
>   at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:149)
>   at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:106)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:712)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:280)
>   at 
> 

[jira] [Commented] (HIVE-13962) create_func1 test fails with NPE

2016-06-08 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321021#comment-15321021
 ] 

Jesus Camacho Rodriguez commented on HIVE-13962:


Apparently this was broken by HIVE-13903. Taken from {{create_func1.q}}:

{noformat}
PREHOOK: query: describe function extended qtest_get_java_boolean
PREHOOK: type: DESCFUNCTION
POSTHOOK: query: describe function extended qtest_get_java_boolean
POSTHOOK: type: DESCFUNCTION
Function 'qtest_get_java_boolean' does not exist.
{noformat}

Apparently the UDF does not appear as registered, but it should. Before 
HIVE-13903 went in:

{noformat}
PREHOOK: query: describe function extended qtest_get_java_boolean
PREHOOK: type: DESCFUNCTION
POSTHOOK: query: describe function extended qtest_get_java_boolean
POSTHOOK: type: DESCFUNCTION
qtest_get_java_boolean(str) - GenericUDF to return native Java's boolean type
Synonyms: default.qtest_get_java_boolean
{noformat}

[~prongs], could you take a look?

> create_func1 test fails with NPE
> 
>
> Key: HIVE-13962
> URL: https://issues.apache.org/jira/browse/HIVE-13962
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>
> {noformat}
> 2016-06-07T11:19:50,843 ERROR [82a55b04-c058-475d-8ba4-d1a3007eb213 main[]]: 
> ql.Driver (SessionState.java:printError(1055)) - FAILED: NullPointerException 
> null
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc.newInstance(ExprNodeGenericFuncDesc.java:236)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.getXpathOrFuncExprNodeDesc(TypeCheckProcFactory.java:1072)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:1317)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:89)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:158)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:120)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:219)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:163)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genAllExprNodeDesc(SemanticAnalyzer.java:11182)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:11137)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genSelectLogicalPlan(CalcitePlanner.java:2996)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genLogicalPlan(CalcitePlanner.java:3158)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:939)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:893)
>   at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:113)
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:969)
>   at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:149)
>   at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:106)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:712)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:280)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10755)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:239)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:250)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:437)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:329)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1158)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1253)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1084)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1072)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335)
>   at 
> 

[jira] [Commented] (HIVE-13662) Set file permission and ACL in file sink operator

2016-06-08 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321019#comment-15321019
 ] 

Pengcheng Xiong commented on HIVE-13662:


Here is what they say: "When the new create(path, permission, …) method (with 
the permission parameter P) is used, the mode of the new file is P & ^umask & 
0666." In my case, P=777 and umask=022, so the result should be 0644 
(0777 & ~0022 = 0755, and 0755 & 0666 = 0644). But it is 0755 (rwxr-xr-x). Even 
if I got the umask wrong, there is no way we can get the last x bit in 
rwxr-xr-x with & 0666.

> Set file permission and ACL in file sink operator
> -
>
> Key: HIVE-13662
> URL: https://issues.apache.org/jira/browse/HIVE-13662
> Project: Hive
>  Issue Type: Bug
>Reporter: Rui Li
>Assignee: Pengcheng Xiong
> Attachments: HIVE-13662.01.patch
>
>
> As suggested 
> [here|https://issues.apache.org/jira/browse/HIVE-13572?focusedCommentId=15254438=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15254438].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13248) Change date_add/date_sub/to_date functions to return Date type rather than String

2016-06-08 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321008#comment-15321008
 ] 

Jason Dere commented on HIVE-13248:
---

committed to branch-2.1 as well.

> Change date_add/date_sub/to_date functions to return Date type rather than 
> String
> -
>
> Key: HIVE-13248
> URL: https://issues.apache.org/jira/browse/HIVE-13248
> Project: Hive
>  Issue Type: Improvement
>  Components: UDF
>Affects Versions: 2.0.0, 2.1.0
>Reporter: Jason Dere
>Assignee: Jason Dere
>  Labels: TODOC2.2
> Fix For: 2.1.0
>
> Attachments: HIVE-13248.1.patch, HIVE-13248.2.patch, 
> HIVE-13248.3.patch, HIVE-13248.4.patch
>
>
> Some of the original "date" related functions return string values rather 
> than Date values, because they were created before the Date type existed in 
> Hive. We can try to change these to return Date in the 2.x line.
> Date values should be implicitly convertible to String.
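
Illustrative only (not from the original issue; the literals are arbitrary), a 
few calls showing the intended behavior after the change:

{code:sql}
select to_date('2016-06-08 10:00:00');   -- returns a DATE (2016-06-08), previously a STRING
select date_add('2016-06-08', 1);        -- 2016-06-09 as a DATE
select date_sub('2016-06-08', 1);        -- 2016-06-07 as a DATE

-- implicit DATE-to-STRING conversion keeps existing string-based queries working
select concat('next day: ', date_add('2016-06-08', 1));
{code}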



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13248) Change date_add/date_sub/to_date functions to return Date type rather than String

2016-06-08 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-13248:
--
Fix Version/s: (was: 2.2.0)
   2.1.0

> Change date_add/date_sub/to_date functions to return Date type rather than 
> String
> -
>
> Key: HIVE-13248
> URL: https://issues.apache.org/jira/browse/HIVE-13248
> Project: Hive
>  Issue Type: Improvement
>  Components: UDF
>Affects Versions: 2.0.0, 2.1.0
>Reporter: Jason Dere
>Assignee: Jason Dere
>  Labels: TODOC2.2
> Fix For: 2.1.0
>
> Attachments: HIVE-13248.1.patch, HIVE-13248.2.patch, 
> HIVE-13248.3.patch, HIVE-13248.4.patch
>
>
> Some of the original "date" related functions return string values rather 
> than Date values, because they were created before the Date type existed in 
> Hive. We can try to change these to return Date in the 2.x line.
> Date values should be implicitly convertible to String.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13563) Hive Streaming does not honor orc.compress.size and orc.stripe.size table properties

2016-06-08 Thread Wei Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320992#comment-15320992
 ] 

Wei Zheng commented on HIVE-13563:
--

For the two failures that have age==1
{code}
Test Name                                                                         Duration       Age
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_acid_globallimit   13 sec         1
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_schemeAuthority     2 min 47 sec   1
{code}
I updated the golden file for acid_globallimit in patch 4. The other test, 
schemeAuthority, passed locally.

> Hive Streaming does not honor orc.compress.size and orc.stripe.size table 
> properties
> 
>
> Key: HIVE-13563
> URL: https://issues.apache.org/jira/browse/HIVE-13563
> Project: Hive
>  Issue Type: Bug
>  Components: ORC
>Affects Versions: 2.1.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>  Labels: TODOC2.1
> Attachments: HIVE-13563.1.patch, HIVE-13563.2.patch, 
> HIVE-13563.3.patch, HIVE-13563.4.patch
>
>
> According to the doc:
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ORC#LanguageManualORC-HiveQLSyntax
> One should be able to specify tblproperties for many ORC options.
> But the settings for orc.compress.size and orc.stripe.size don't take effect.
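
For illustration (the table name and sizes are invented), these are the 
properties in question; a streaming-ingest table would be declared roughly like 
this, and the streaming writer should honor both sizes:

{code:sql}
create table streaming_events (msg string)
  clustered by (msg) into 4 buckets
  stored as orc
  tblproperties (
    'transactional'     = 'true',
    'orc.compress.size' = '8192',       -- compression buffer size in bytes
    'orc.stripe.size'   = '8388608'     -- 8 MB stripes
  );
{code}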



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13563) Hive Streaming does not honor orc.compress.size and orc.stripe.size table properties

2016-06-08 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-13563:
-
Attachment: HIVE-13563.4.patch

> Hive Streaming does not honor orc.compress.size and orc.stripe.size table 
> properties
> 
>
> Key: HIVE-13563
> URL: https://issues.apache.org/jira/browse/HIVE-13563
> Project: Hive
>  Issue Type: Bug
>  Components: ORC
>Affects Versions: 2.1.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>  Labels: TODOC2.1
> Attachments: HIVE-13563.1.patch, HIVE-13563.2.patch, 
> HIVE-13563.3.patch, HIVE-13563.4.patch
>
>
> According to the doc:
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ORC#LanguageManualORC-HiveQLSyntax
> One should be able to specify tblproperties for many ORC options.
> But the settings for orc.compress.size and orc.stripe.size don't take effect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13973) Extend support for other primitive types in windowing expressions

2016-06-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320935#comment-15320935
 ] 

Hive QA commented on HIVE-13973:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12808945/HIVE-13973.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 10 failed/errored test(s), 10224 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_func1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hive.jdbc.TestJdbcWithLocalClusterSpark.testPermFunc
org.apache.hive.jdbc.TestJdbcWithMiniMr.testPermFunc
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/49/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/49/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-49/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 10 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12808945 - PreCommit-HIVE-MASTER-Build

> Extend support for other primitive types in windowing expressions
> -
>
> Key: HIVE-13973
> URL: https://issues.apache.org/jira/browse/HIVE-13973
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-13973.patch
>
>
> Following windowing query using boolean column in partitioning clause
> {code:sql}
> create table all100k(t tinyint, si smallint, i int,
> b bigint, f float, d double, s string,
> dc decimal(38,18), bo boolean, v varchar(25),
> c char(25), ts timestamp, dt date);
> select  rank() over (partition by i order by bo  nulls first, b nulls last 
> range between unbounded preceding and current row),
> row_number()  over (partition by bo order by si desc, b nulls last range 
> between unbounded preceding and unbounded following) as fv
> from all100k order by fv;
> {code}
> fails with the following error:
> {noformat}
> FAILED: SemanticException Failed to breakup Windowing invocations into 
> Groups. At least 1 group must only depend on input columns. Also check for 
> circular dependencies.
> Underlying error: Primitve type BOOLEAN not supported in Value Boundary 
> expression
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13961) ACID: Major compaction fails to include the original bucket files if there's no delta directory

2016-06-08 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-13961:
-
Attachment: HIVE-13961.4.patch

patch 4 is the same as patch 3, just to trigger QA ptest

> ACID: Major compaction fails to include the original bucket files if there's 
> no delta directory
> ---
>
> Key: HIVE-13961
> URL: https://issues.apache.org/jira/browse/HIVE-13961
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.3.0, 2.1.0, 2.2.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>Priority: Blocker
> Attachments: HIVE-13961.1.patch, HIVE-13961.2.patch, 
> HIVE-13961.3.patch, HIVE-13961.4.patch
>
>
> The issue can be reproduced by steps below:
> 1. Insert a row to Non-ACID table
> 2. Convert Non-ACID to ACID table (i.e. set transactional=true table property)
> 3. Perform Major compaction



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13964) Add a parameter to beeline to allow a properties file to be passed in

2016-06-08 Thread Abdullah Yousufi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdullah Yousufi updated HIVE-13964:

Status: Patch Available  (was: Open)

> Add a parameter to beeline to allow a properties file to be passed in
> -
>
> Key: HIVE-13964
> URL: https://issues.apache.org/jira/browse/HIVE-13964
> Project: Hive
>  Issue Type: New Feature
>  Components: Beeline
>Affects Versions: 2.0.1
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
>Priority: Minor
> Fix For: 2.2.0
>
> Attachments: HIVE-13964.01.patch
>
>
> HIVE-6652 removed the ability to pass in a properties file as a beeline 
> parameter. It may be a useful feature to be able to pass the file in as a 
> parameter, such as --property-file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13972) Resolve class dependency issue introduced by HIVE-13354

2016-06-08 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-13972:
-
Attachment: (was: HIVE-13972.2.patch)

> Resolve class dependency issue introduced by HIVE-13354
> ---
>
> Key: HIVE-13972
> URL: https://issues.apache.org/jira/browse/HIVE-13972
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.3.0, 2.1.0, 2.2.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>Priority: Blocker
> Attachments: HIVE-13972.1.patch
>
>
> HIVE-13354 moved a helper class StringableMap from 
> ql/txn/compactor/CompactorMR.java to metastore/txn/TxnUtils.java
> This introduced a dependency from ql package to metastore package which is 
> not allowed and fails in a real cluster.
> Instead of moving it to metastore, it should be moved to common package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13972) Resolve class dependency issue introduced by HIVE-13354

2016-06-08 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-13972:
-
Status: Patch Available  (was: Open)

> Resolve class dependency issue introduced by HIVE-13354
> ---
>
> Key: HIVE-13972
> URL: https://issues.apache.org/jira/browse/HIVE-13972
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.3.0, 2.1.0, 2.2.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>Priority: Blocker
> Attachments: HIVE-13972.1.patch
>
>
> HIVE-13354 moved a helper class StringableMap from 
> ql/txn/compactor/CompactorMR.java to metastore/txn/TxnUtils.java
> This introduced a dependency from ql package to metastore package which is 
> not allowed and fails in a real cluster.
> Instead of moving it to metastore, it should be moved to common package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HIVE-13972) Resolve class dependency issue introduced by HIVE-13354

2016-06-08 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-13972:
-
Comment: was deleted

(was: patch 2 is the same as patch 1, to trigger QA ptest)

> Resolve class dependency issue introduced by HIVE-13354
> ---
>
> Key: HIVE-13972
> URL: https://issues.apache.org/jira/browse/HIVE-13972
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.3.0, 2.1.0, 2.2.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>Priority: Blocker
> Attachments: HIVE-13972.1.patch
>
>
> HIVE-13354 moved a helper class StringableMap from 
> ql/txn/compactor/CompactorMR.java to metastore/txn/TxnUtils.java
> This introduced a dependency from ql package to metastore package which is 
> not allowed and fails in a real cluster.
> Instead of moving it to metastore, it should be moved to common package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13972) Resolve class dependency issue introduced by HIVE-13354

2016-06-08 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-13972:
-
Attachment: HIVE-13972.2.patch

patch 2 is the same as patch 1, to trigger QA ptest

> Resolve class dependency issue introduced by HIVE-13354
> ---
>
> Key: HIVE-13972
> URL: https://issues.apache.org/jira/browse/HIVE-13972
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.3.0, 2.1.0, 2.2.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>Priority: Blocker
> Attachments: HIVE-13972.1.patch, HIVE-13972.2.patch
>
>
> HIVE-13354 moved a helper class StringableMap from 
> ql/txn/compactor/CompactorMR.java to metastore/txn/TxnUtils.java
> This introduced a dependency from ql package to metastore package which is 
> not allowed and fails in a real cluster.
> Instead of moving it to metastore, it should be moved to common package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13973) Using boolean in windowing query partitioning clause causes error

2016-06-08 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320823#comment-15320823
 ] 

Jesus Camacho Rodriguez commented on HIVE-13973:


Oh, alright. I will add support for other primitive types then and add test 
cases accordingly.

> Using boolean in windowing query partitioning clause causes error
> -
>
> Key: HIVE-13973
> URL: https://issues.apache.org/jira/browse/HIVE-13973
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-13973.patch
>
>
> Following windowing query using boolean column in partitioning clause
> {code:sql}
> create table all100k(t tinyint, si smallint, i int,
> b bigint, f float, d double, s string,
> dc decimal(38,18), bo boolean, v varchar(25),
> c char(25), ts timestamp, dt date);
> select  rank() over (partition by i order by bo  nulls first, b nulls last 
> range between unbounded preceding and current row),
> row_number()  over (partition by bo order by si desc, b nulls last range 
> between unbounded preceding and unbounded following) as fv
> from all100k order by fv;
> {code}
> fails with the following error:
> {noformat}
> FAILED: SemanticException Failed to breakup Windowing invocations into 
> Groups. At least 1 group must only depend on input columns. Also check for 
> circular dependencies.
> Underlying error: Primitve type BOOLEAN not supported in Value Boundary 
> expression
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13973) Extend support for other primitive types in windowing expressions

2016-06-08 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-13973:
---
Summary: Extend support for other primitive types in windowing expressions  
(was: Using boolean in windowing query partitioning clause causes error)

> Extend support for other primitive types in windowing expressions
> -
>
> Key: HIVE-13973
> URL: https://issues.apache.org/jira/browse/HIVE-13973
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-13973.patch
>
>
> Following windowing query using boolean column in partitioning clause
> {code:sql}
> create table all100k(t tinyint, si smallint, i int,
> b bigint, f float, d double, s string,
> dc decimal(38,18), bo boolean, v varchar(25),
> c char(25), ts timestamp, dt date);
> select  rank() over (partition by i order by bo  nulls first, b nulls last 
> range between unbounded preceding and current row),
> row_number()  over (partition by bo order by si desc, b nulls last range 
> between unbounded preceding and unbounded following) as fv
> from all100k order by fv;
> {code}
> fails with the following error:
> {noformat}
> FAILED: SemanticException Failed to breakup Windowing invocations into 
> Groups. At least 1 group must only depend on input columns. Also check for 
> circular dependencies.
> Underlying error: Primitve type BOOLEAN not supported in Value Boundary 
> expression
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13973) Using boolean in windowing query partitioning clause causes error

2016-06-08 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320816#comment-15320816
 ] 

Ashutosh Chauhan commented on HIVE-13973:
-

The likely reason is that these types were added to Hive after the initial 
windowing support went in.

> Using boolean in windowing query partitioning clause causes error
> -
>
> Key: HIVE-13973
> URL: https://issues.apache.org/jira/browse/HIVE-13973
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-13973.patch
>
>
> Following windowing query using boolean column in partitioning clause
> {code:sql}
> create table all100k(t tinyint, si smallint, i int,
> b bigint, f float, d double, s string,
> dc decimal(38,18), bo boolean, v varchar(25),
> c char(25), ts timestamp, dt date);
> select  rank() over (partition by i order by bo  nulls first, b nulls last 
> range between unbounded preceding and current row),
> row_number()  over (partition by bo order by si desc, b nulls last range 
> between unbounded preceding and unbounded following) as fv
> from all100k order by fv;
> {code}
> fails with the following error:
> {noformat}
> FAILED: SemanticException Failed to breakup Windowing invocations into 
> Groups. At least 1 group must only depend on input columns. Also check for 
> circular dependencies.
> Underlying error: Primitve type BOOLEAN not supported in Value Boundary 
> expression
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13973) Using boolean in windowing query partitioning clause causes error

2016-06-08 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320804#comment-15320804
 ] 

Jesus Camacho Rodriguez commented on HIVE-13973:


Thanks Ashutosh!

Yes, it makes sense; I will add new test cases.

I was exploring the code and I could not really spot any reason for this 
implementation limitation. Any ideas?

> Using boolean in windowing query partitioning clause causes error
> -
>
> Key: HIVE-13973
> URL: https://issues.apache.org/jira/browse/HIVE-13973
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-13973.patch
>
>
> Following windowing query using boolean column in partitioning clause
> {code:sql}
> create table all100k(t tinyint, si smallint, i int,
> b bigint, f float, d double, s string,
> dc decimal(38,18), bo boolean, v varchar(25),
> c char(25), ts timestamp, dt date);
> select  rank() over (partition by i order by bo  nulls first, b nulls last 
> range between unbounded preceding and current row),
> row_number()  over (partition by bo order by si desc, b nulls last range 
> between unbounded preceding and unbounded following) as fv
> from all100k order by fv;
> {code}
> fails with the following error:
> {noformat}
> FAILED: SemanticException Failed to breakup Windowing invocations into 
> Groups. At least 1 group must only depend on input columns. Also check for 
> circular dependencies.
> Underlying error: Primitve type BOOLEAN not supported in Value Boundary 
> expression
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13973) Using boolean in windowing query partitioning clause causes error

2016-06-08 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320791#comment-15320791
 ] 

Ashutosh Chauhan commented on HIVE-13973:
-

* Do we need to worry about other primitive types such as 
char/varchar/date/timestamp?
* You may use the alltypesorc table as a test table; it comes pre-populated 
with data for this test (a hedged sketch follows below).
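
A hedged sketch of such a test over alltypesorc: the column names (cboolean1, 
ctimestamp1, csmallint, cint, cbigint) are assumed from the standard test 
table, and the query simply mirrors the failing one in the description:
{code:sql}
-- Windowing with boolean/timestamp columns in the partition and order specs.
SELECT rank() OVER (PARTITION BY cint ORDER BY cboolean1 NULLS FIRST, cbigint NULLS LAST
                    RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW),
       row_number() OVER (PARTITION BY cboolean1 ORDER BY ctimestamp1 DESC, csmallint NULLS LAST
                          RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS fv
FROM alltypesorc
ORDER BY fv
LIMIT 20;
{code}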

> Using boolean in windowing query partitioning clause causes error
> -
>
> Key: HIVE-13973
> URL: https://issues.apache.org/jira/browse/HIVE-13973
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-13973.patch
>
>
> Following windowing query using boolean column in partitioning clause
> {code:sql}
> create table all100k(t tinyint, si smallint, i int,
> b bigint, f float, d double, s string,
> dc decimal(38,18), bo boolean, v varchar(25),
> c char(25), ts timestamp, dt date);
> select  rank() over (partition by i order by bo  nulls first, b nulls last 
> range between unbounded preceding and current row),
> row_number()  over (partition by bo order by si desc, b nulls last range 
> between unbounded preceding and unbounded following) as fv
> from all100k order by fv;
> {code}
> fails with the following error:
> {noformat}
> FAILED: SemanticException Failed to breakup Windowing invocations into 
> Groups. At least 1 group must only depend on input columns. Also check for 
> circular dependencies.
> Underlying error: Primitve type BOOLEAN not supported in Value Boundary 
> expression
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13850) File name conflict when have multiple INSERT INTO queries running in parallel

2016-06-08 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320783#comment-15320783
 ] 

Ashutosh Chauhan commented on HIVE-13850:
-

DbTxnManager works only with ACID tables. For non-ACID tables use 
ZooKeeperLockManager. However, if you are running in a highly concurrent 
environment, it's better to use ACID tables.
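
To make that concrete, here is a minimal sketch of the settings involved, 
assuming Hive 1.2+/2.x; the table name, bucket count, and ZooKeeper hosts 
below are illustrative only:
{code}
-- Transactional route: DbTxnManager plus an ACID table (ORC, bucketed,
-- and transactional=true in these versions).
SET hive.support.concurrency=true;
SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
CREATE TABLE scalding_stats_acid (key INT, value STRING)
CLUSTERED BY (key) INTO 4 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');

-- Non-ACID route: ZooKeeper-based lock manager instead of the default.
SET hive.support.concurrency=true;
SET hive.lock.manager=org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager;
SET hive.zookeeper.quorum=zk1.example.com,zk2.example.com,zk3.example.com;
{code}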

> File name conflict when have multiple INSERT INTO queries running in parallel
> -
>
> Key: HIVE-13850
> URL: https://issues.apache.org/jira/browse/HIVE-13850
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
>Reporter: Bing Li
>Assignee: Bing Li
> Attachments: HIVE-13850-1.2.1.patch
>
>
> We have an application which connects to HiveServer2 via JDBC.
> In the application, it executes "INSERT INTO" queries against the same table.
> If there are a lot of users running the application at the same time, some of 
> the INSERTs could fail.
> The root cause is that in Hive.checkPaths(), it uses the following method to 
> check the existence of the file. But if there are multiple inserts running in 
> parallel, it will lead to a conflict.
> for (int counter = 1; fs.exists(itemDest) || destExists(result, itemDest); 
> counter++) {
>   itemDest = new Path(destf, name + ("_copy_" + counter) + 
> filetype);
> }
> The Error Message
> ===
> In hive log,
> org.apache.hadoop.hive.ql.metadata.HiveException: copyFiles: error  
> while moving files!!! Cannot move hdfs://node:8020/apps/hive/warehouse/met
> 
> adata.db/scalding_stats/.hive-staging_hive_2016-05-10_18-46-
> 23_642_2056172497900766879-3321/-ext-1/00_0 to 
> hdfs://node:8020/apps/hive  
> /warehouse/metadata.db/scalding_stats/00_0_copy_9014
> at org.apache.hadoop.hive.ql.metadata.Hive.copyFiles(Hive.java: 
> 2719)   
> at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java: 
> 1645)  
> 
> In hadoop log, 
> WARN  hdfs.StateChange (FSDirRenameOp.java: 
> unprotectedRenameTo(174)) - DIR* FSDirectory.unprotectedRenameTo:   
> failed to rename /apps/hive/warehouse/metadata.db/scalding_stats/.hive- 
> staging_hive_2016-05-10_18-46-23_642_2056172497900766879-3321/-ext- 
> 1/00_0 to /apps/hive/warehouse/metadata.
> db/scalding_stats/00_0_copy_9014 because destination exists



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13972) Resolve class dependency issue introduced by HIVE-13354

2016-06-08 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320745#comment-15320745
 ] 

Eugene Koifman commented on HIVE-13972:
---

+1

> Resolve class dependency issue introduced by HIVE-13354
> ---
>
> Key: HIVE-13972
> URL: https://issues.apache.org/jira/browse/HIVE-13972
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.3.0, 2.1.0, 2.2.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>Priority: Blocker
> Attachments: HIVE-13972.1.patch
>
>
> HIVE-13354 moved a helper class StringableMap from 
> ql/txn/compactor/CompactorMR.java to metastore/txn/TxnUtils.java
> This introduced a dependency from ql package to metastore package which is 
> not allowed and fails in a real cluster.
> Instead of moving it to metastore, it should be moved to common package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13760) Add a HIVE_QUERY_TIMEOUT configuration to kill a query if a query is running for more than the configured timeout value.

2016-06-08 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated HIVE-13760:
-
Description: Add a HIVE_QUERY_TIMEOUT configuration to kill a query if a 
query is running for more than the configured timeout value. The default value 
will be 0 , which means no timeout. This will be useful for  user to manage 
queries with SLA.  (was: Add a HIVE_QUERY_TIMEOUT configuration to kill a query 
if a query is running for more than the configured timeout value. The default 
value will be -1 , which means no timeout. This will be useful for  user to 
manage queries with SLA.)

> Add a HIVE_QUERY_TIMEOUT configuration to kill a query if a query is running 
> for more than the configured timeout value.
> 
>
> Key: HIVE-13760
> URL: https://issues.apache.org/jira/browse/HIVE-13760
> Project: Hive
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 2.0.0
>Reporter: zhihai xu
>Assignee: zhihai xu
>  Labels: TODOC2.2
> Fix For: 2.2.0
>
> Attachments: HIVE-13760.000.patch, HIVE-13760.001.patch
>
>
> Add a HIVE_QUERY_TIMEOUT configuration to kill a query if a query is running 
> for more than the configured timeout value. The default value will be 0 , 
> which means no timeout. This will be useful for  user to manage queries with 
> SLA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13966) DbNotificationListener: can loose DDL operation notifications

2016-06-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-13966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320722#comment-15320722
 ] 

Sergio Peña commented on HIVE-13966:


What should the expected results be?
- When the operation fails, should we still add an entry to the notification 
log or not?
- When the operation succeeds but writing to the notification log fails, should 
we just display a warning message, or roll back the operation?

> DbNotificationListener: can loose DDL operation notifications
> -
>
> Key: HIVE-13966
> URL: https://issues.apache.org/jira/browse/HIVE-13966
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Reporter: Nachiket Vaidya
>Priority: Critical
>
> The code for each API in HiveMetaStore.java is like this:
> 1. openTransaction()
> 2. -- operation--
> 3. commit() or rollback() based on result of the operation.
> 4. add entry to notification log (unconditionally)
> If the operation fails (in step 2), we still add an entry to the notification 
> log. Found this issue in testing.
> It is still OK, as this is a false-positive case.
> If the operation is successful but adding to the notification log fails, the 
> user will get a MetaException. It will not roll back the operation, as it is 
> already committed. We need to handle this case so that we will not have false 
> negatives.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13967) CREATE table fails when 'values' column name is found on the table spec.

2016-06-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-13967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320693#comment-15320693
 ] 

Sergio Peña commented on HIVE-13967:


Thanks [~chetna]. 
Do you know why we have reserved keywords for column names? Could it be because 
of the way Hive parses the query? Can we fix that?
In the meantime, I think it would be better to at least fix the long error 
exception. It would be good to have Hive display a simple message rather than a 
long stacktrace.
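
Until then, a possible workaround (a sketch, assuming the default identifier 
handling in this Hive line) is to backtick-quote the reserved word, or to relax 
the SQL11 reserved-keyword check for the session if that property is still 
present in your build:
{code:sql}
-- Backticks make the parser treat 'values' as a plain identifier.
CREATE TABLE pkv (key INT, `values` STRING);

-- Alternative: allow SQL11 reserved words as identifiers for this session.
SET hive.support.sql11.reserved.keywords=false;
CREATE TABLE pkv2 (key INT, values STRING);
{code}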

> CREATE table fails when 'values' column name is found on the table spec.
> 
>
> Key: HIVE-13967
> URL: https://issues.apache.org/jira/browse/HIVE-13967
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Sergio Peña
>Assignee: Abdullah Yousufi
>
> {noformat}
> hive> create table pkv (key int, values string);  
>   
> [0/4271]
> FailedPredicateException(identifier,{useSQL11ReservedKeywordsForIdentifier()}?)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.identifier(HiveParser_IdentifiersParser.java:11914)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.identifier(HiveParser.java:51795)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.columnNameType(HiveParser.java:42051)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.columnNameTypeOrPKOrFK(HiveParser.java:42308)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.columnNameTypeOrPKOrFKList(HiveParser.java:37966)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.createTableStatement(HiveParser.java:5259)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.ddlStatement(HiveParser.java:2763)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1756)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1178)
> at 
> org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:204)
> at 
> org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:404)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:329)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1158)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1253)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1084)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1072)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:776)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:714)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:641)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> FAILED: ParseException line 1:27 Failed to recognize predicate 'values'. 
> Failed rule: 'identifier' in column specification
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-7443) Fix HiveConnection to communicate with Kerberized Hive JDBC server and alternative JDKs

2016-06-08 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320687#comment-15320687
 ] 

Aihua Xu commented on HIVE-7443:


Seems we still need the fix for HIVE-7443 since HIVE-13020 is for ZooKeeper.

> Fix HiveConnection to communicate with Kerberized Hive JDBC server and 
> alternative JDKs
> ---
>
> Key: HIVE-7443
> URL: https://issues.apache.org/jira/browse/HIVE-7443
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC, Security
>Affects Versions: 0.12.0, 0.13.1
> Environment: Kerberos
> Run Hive server2 and client with IBM JDK7.1
>Reporter: Yu Gao
>Assignee: Yu Gao
> Attachments: HIVE-7443.patch
>
>
> Hive Kerberos authentication has been enabled in my cluster. I ran kinit to 
> initialize the current login user's ticket cache successfully, and then tried 
> to use beeline to connect to Hive Server2, but failed. After I manually added 
> some logging to catch the failure exception, this is what I got that caused 
> the failure:
> beeline>  !connect 
> jdbc:hive2://:1/default;principal=hive/@REALM.COM
>  org.apache.hive.jdbc.HiveDriver
> scan complete in 2ms
> Connecting to 
> jdbc:hive2://:1/default;principal=hive/@REALM.COM
> Enter password for 
> jdbc:hive2://:1/default;principal=hive/@REALM.COM:
> 14/07/17 15:12:45 ERROR jdbc.HiveConnection: Failed to open client transport
> javax.security.sasl.SaslException: Failed to open client transport [Caused by 
> java.io.IOException: Could not instantiate SASL transport]
> at 
> org.apache.hive.service.auth.KerberosSaslHelper.getKerberosTransport(KerberosSaslHelper.java:78)
> at 
> org.apache.hive.jdbc.HiveConnection.createBinaryTransport(HiveConnection.java:342)
> at 
> org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:200)
> at org.apache.hive.jdbc.HiveConnection.(HiveConnection.java:178)
> at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
> at java.sql.DriverManager.getConnection(DriverManager.java:582)
> at java.sql.DriverManager.getConnection(DriverManager.java:198)
> at 
> org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:145)
> at 
> org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:186)
> at org.apache.hive.beeline.Commands.connect(Commands.java:959)
> at org.apache.hive.beeline.Commands.connect(Commands.java:880)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
> at java.lang.reflect.Method.invoke(Method.java:619)
> at 
> org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:44)
> at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:801)
> at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:659)
> at 
> org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:368)
> at org.apache.hive.beeline.BeeLine.main(BeeLine.java:351)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
> at java.lang.reflect.Method.invoke(Method.java:619)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> Caused by: java.io.IOException: Could not instantiate SASL transport
> at 
> org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Client.createClientTransport(HadoopThriftAuthBridge20S.java:177)
> at 
> org.apache.hive.service.auth.KerberosSaslHelper.getKerberosTransport(KerberosSaslHelper.java:74)
> ... 24 more
> Caused by: javax.security.sasl.SaslException: Failure to initialize security 
> context [Caused by org.ietf.jgss.GSSException, major code: 13, minor code: 0
> major string: Invalid credentials
> minor string: SubjectCredFinder: no JAAS Subject]
> at 
> com.ibm.security.sasl.gsskerb.GssKrb5Client.(GssKrb5Client.java:131)
> at 
> com.ibm.security.sasl.gsskerb.FactoryImpl.createSaslClient(FactoryImpl.java:53)
> at javax.security.sasl.Sasl.createSaslClient(Sasl.java:362)
> at 
> org.apache.thrift.transport.TSaslClientTransport.(TSaslClientTransport.java:72)
> at 
> org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Client.createClientTransport(HadoopThriftAuthBridge20S.java:169)
> ... 25 more
> Caused by: org.ietf.jgss.GSSException, major code: 13, minor 

[jira] [Commented] (HIVE-13850) File name conflict when have multiple INSERT INTO queries running in parallel

2016-06-08 Thread Bing Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320675#comment-15320675
 ] 

Bing Li commented on HIVE-13850:


Hi, [~ashutoshc]
Thank you for your comments. 
Yes, you're right. The issue hasn't been resolved by naming the target file 
with a timestamp. We ran into it again...

We tried to set the following properties, but still got the error. 
Hive.support.concurrency -> true
Hive.txn.manager -> org.apache.hadoop.hive.ql.lockmgr.DbTxnManager

Are there any other properties required?

Thank you.

> File name conflict when have multiple INSERT INTO queries running in parallel
> -
>
> Key: HIVE-13850
> URL: https://issues.apache.org/jira/browse/HIVE-13850
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
>Reporter: Bing Li
>Assignee: Bing Li
> Attachments: HIVE-13850-1.2.1.patch
>
>
> We have an application which connects to HiveServer2 via JDBC.
> In the application, it executes "INSERT INTO" queries against the same table.
> If there are a lot of users running the application at the same time, some of 
> the INSERTs could fail.
> The root cause is that in Hive.checkPaths(), it uses the following method to 
> check the existence of the file. But if there are multiple inserts running in 
> parallel, it will lead to a conflict.
> for (int counter = 1; fs.exists(itemDest) || destExists(result, itemDest); 
> counter++) {
>   itemDest = new Path(destf, name + ("_copy_" + counter) + 
> filetype);
> }
> The Error Message
> ===
> In hive log,
> org.apache.hadoop.hive.ql.metadata.HiveException: copyFiles: error  
> while moving files!!! Cannot move hdfs://node:8020/apps/hive/warehouse/met
> 
> adata.db/scalding_stats/.hive-staging_hive_2016-05-10_18-46-
> 23_642_2056172497900766879-3321/-ext-1/00_0 to 
> hdfs://node:8020/apps/hive  
> /warehouse/metadata.db/scalding_stats/00_0_copy_9014
> at org.apache.hadoop.hive.ql.metadata.Hive.copyFiles(Hive.java: 
> 2719)   
> at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java: 
> 1645)  
> 
> In hadoop log, 
> WARN  hdfs.StateChange (FSDirRenameOp.java: 
> unprotectedRenameTo(174)) - DIR* FSDirectory.unprotectedRenameTo:   
> failed to rename /apps/hive/warehouse/metadata.db/scalding_stats/.hive- 
> staging_hive_2016-05-10_18-46-23_642_2056172497900766879-3321/-ext- 
> 1/00_0 to /apps/hive/warehouse/metadata.
> db/scalding_stats/00_0_copy_9014 because destination exists



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13959) MoveTask should only release its query associated locks

2016-06-08 Thread Yongzhi Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320673#comment-15320673
 ] 

Yongzhi Chen commented on HIVE-13959:
-

It seems that the ZookeeperHiveLockManager may have some issue: it does not 
store and use HiveLockObjectData.
See EmbeddedLockManager.java: 
https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/lockmgr/EmbeddedLockManager.java#L403

> MoveTask should only release its query associated locks
> ---
>
> Key: HIVE-13959
> URL: https://issues.apache.org/jira/browse/HIVE-13959
> Project: Hive
>  Issue Type: Bug
>  Components: Locking
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Attachments: HIVE-13959.patch, HIVE-13959.patch
>
>
> releaseLocks in MoveTask releases all locks under a HiveLockObject pathNames. 
> But some of locks under this pathNames might be for other queries and should 
> not be released.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HIVE-13973) Using boolean in windowing query partitioning clause causes error

2016-06-08 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-13973 started by Jesus Camacho Rodriguez.
--
> Using boolean in windowing query partitioning clause causes error
> -
>
> Key: HIVE-13973
> URL: https://issues.apache.org/jira/browse/HIVE-13973
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-13973.patch
>
>
> Following windowing query using boolean column in partitioning clause
> {code:sql}
> create table all100k(t tinyint, si smallint, i int,
> b bigint, f float, d double, s string,
> dc decimal(38,18), bo boolean, v varchar(25),
> c char(25), ts timestamp, dt date);
> select  rank() over (partition by i order by bo  nulls first, b nulls last 
> range between unbounded preceding and current row),
> row_number()  over (partition by bo order by si desc, b nulls last range 
> between unbounded preceding and unbounded following) as fv
> from all100k order by fv;
> {code}
> fails with the following error:
> {noformat}
> FAILED: SemanticException Failed to breakup Windowing invocations into 
> Groups. At least 1 group must only depend on input columns. Also check for 
> circular dependencies.
> Underlying error: Primitve type BOOLEAN not supported in Value Boundary 
> expression
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13973) Using boolean in windowing query partitioning clause causes error

2016-06-08 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-13973:
---
Attachment: HIVE-13973.patch

> Using boolean in windowing query partitioning clause causes error
> -
>
> Key: HIVE-13973
> URL: https://issues.apache.org/jira/browse/HIVE-13973
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-13973.patch
>
>
> Following windowing query using boolean column in partitioning clause
> {code:sql}
> create table all100k(t tinyint, si smallint, i int,
> b bigint, f float, d double, s string,
> dc decimal(38,18), bo boolean, v varchar(25),
> c char(25), ts timestamp, dt date);
> select  rank() over (partition by i order by bo  nulls first, b nulls last 
> range between unbounded preceding and current row),
> row_number()  over (partition by bo order by si desc, b nulls last range 
> between unbounded preceding and unbounded following) as fv
> from all100k order by fv;
> {code}
> fails with the following error:
> {noformat}
> FAILED: SemanticException Failed to breakup Windowing invocations into 
> Groups. At least 1 group must only depend on input columns. Also check for 
> circular dependencies.
> Underlying error: Primitve type BOOLEAN not supported in Value Boundary 
> expression
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13973) Using boolean in windowing query partitioning clause causes error

2016-06-08 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-13973:
---
Status: Patch Available  (was: In Progress)

> Using boolean in windowing query partitioning clause causes error
> -
>
> Key: HIVE-13973
> URL: https://issues.apache.org/jira/browse/HIVE-13973
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-13973.patch
>
>
> Following windowing query using boolean column in partitioning clause
> {code:sql}
> create table all100k(t tinyint, si smallint, i int,
> b bigint, f float, d double, s string,
> dc decimal(38,18), bo boolean, v varchar(25),
> c char(25), ts timestamp, dt date);
> select  rank() over (partition by i order by bo  nulls first, b nulls last 
> range between unbounded preceding and current row),
> row_number()  over (partition by bo order by si desc, b nulls last range 
> between unbounded preceding and unbounded following) as fv
> from all100k order by fv;
> {code}
> fails with the following error:
> {noformat}
> FAILED: SemanticException Failed to breakup Windowing invocations into 
> Groups. At least 1 group must only depend on input columns. Also check for 
> circular dependencies.
> Underlying error: Primitve type BOOLEAN not supported in Value Boundary 
> expression
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-13721) HPL/SQL COPY FROM FTP Statement: lack of DIR option leads to NPE

2016-06-08 Thread Dmitry Tolpeko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Tolpeko reassigned HIVE-13721:
-

Assignee: Dmitry Tolpeko

> HPL/SQL COPY FROM FTP Statement: lack of DIR option leads to NPE
> 
>
> Key: HIVE-13721
> URL: https://issues.apache.org/jira/browse/HIVE-13721
> Project: Hive
>  Issue Type: Bug
>  Components: hpl/sql
>Reporter: Carter Shanklin
>Assignee: Dmitry Tolpeko
>
> The docs (http://www.hplsql.org/copy-from-ftp) suggest DIR is optional. When 
> I left it out in:
> {code}
> copy from ftp hdp250.example.com user 'vagrant' pwd 'vagrant'  files 
> 'sampledata.csv' to /tmp overwrite
> {code}
> I got:
> {code}
> Ln:2 Connected to ftp: hdp250.example.com (29 ms)
> Ln:2 Retrieving directory listing
>   Listing the current working FTP directory
> Ln:2 Files to copy: 45 bytes, 1 file, 0 subdirectories scanned (27 ms)
> Exception in thread "main" java.lang.NullPointerException
>   at org.apache.hive.hplsql.Ftp.getTargetFileName(Ftp.java:342)
>   at org.apache.hive.hplsql.Ftp.run(Ftp.java:149)
>   at org.apache.hive.hplsql.Ftp.copyFiles(Ftp.java:121)
>   at org.apache.hive.hplsql.Ftp.run(Ftp.java:91)
>   at org.apache.hive.hplsql.Exec.visitCopy_from_ftp_stmt(Exec.java:1292)
>   at org.apache.hive.hplsql.Exec.visitCopy_from_ftp_stmt(Exec.java:52)
>   at 
> org.apache.hive.hplsql.HplsqlParser$Copy_from_ftp_stmtContext.accept(HplsqlParser.java:11956)
>   at 
> org.antlr.v4.runtime.tree.AbstractParseTreeVisitor.visitChildren(AbstractParseTreeVisitor.java:70)
>   at org.apache.hive.hplsql.Exec.visitStmt(Exec.java:994)
>   at org.apache.hive.hplsql.Exec.visitStmt(Exec.java:52)
>   at 
> org.apache.hive.hplsql.HplsqlParser$StmtContext.accept(HplsqlParser.java:1012)
>   at 
> org.antlr.v4.runtime.tree.AbstractParseTreeVisitor.visitChildren(AbstractParseTreeVisitor.java:70)
>   at 
> org.apache.hive.hplsql.HplsqlBaseVisitor.visitBlock(HplsqlBaseVisitor.java:28)
>   at 
> org.apache.hive.hplsql.HplsqlParser$BlockContext.accept(HplsqlParser.java:446)
>   at 
> org.antlr.v4.runtime.tree.AbstractParseTreeVisitor.visitChildren(AbstractParseTreeVisitor.java:70)
>   at org.apache.hive.hplsql.Exec.visitProgram(Exec.java:901)
>   at org.apache.hive.hplsql.Exec.visitProgram(Exec.java:52)
>   at 
> org.apache.hive.hplsql.HplsqlParser$ProgramContext.accept(HplsqlParser.java:389)
>   at 
> org.antlr.v4.runtime.tree.AbstractParseTreeVisitor.visit(AbstractParseTreeVisitor.java:42)
>   at org.apache.hive.hplsql.Exec.run(Exec.java:760)
>   at org.apache.hive.hplsql.Exec.run(Exec.java:736)
>   at org.apache.hive.hplsql.Hplsql.main(Hplsql.java:23)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {code}
> Traceback leads to:
> {code}
>   /**
>* Get the target file relative path and name
>*/
>   String getTargetFileName(String file) {
> int len = dir.length();
> return targetDir + file.substring(len);
>   }
> {code}
> in Ftp.java
> When I added DIR '/' this worked.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-13721) HPL/SQL COPY FROM FTP Statement: lack of DIR option leads to NPE

2016-06-08 Thread Dmitry Tolpeko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Tolpeko resolved HIVE-13721.
---
   Resolution: Fixed
Fix Version/s: 2.2.0

> HPL/SQL COPY FROM FTP Statement: lack of DIR option leads to NPE
> 
>
> Key: HIVE-13721
> URL: https://issues.apache.org/jira/browse/HIVE-13721
> Project: Hive
>  Issue Type: Bug
>  Components: hpl/sql
>Reporter: Carter Shanklin
>Assignee: Dmitry Tolpeko
> Fix For: 2.2.0
>
>
> The docs (http://www.hplsql.org/copy-from-ftp) suggest DIR is optional. When 
> I left it out in:
> {code}
> copy from ftp hdp250.example.com user 'vagrant' pwd 'vagrant'  files 
> 'sampledata.csv' to /tmp overwrite
> {code}
> I got:
> {code}
> Ln:2 Connected to ftp: hdp250.example.com (29 ms)
> Ln:2 Retrieving directory listing
>   Listing the current working FTP directory
> Ln:2 Files to copy: 45 bytes, 1 file, 0 subdirectories scanned (27 ms)
> Exception in thread "main" java.lang.NullPointerException
>   at org.apache.hive.hplsql.Ftp.getTargetFileName(Ftp.java:342)
>   at org.apache.hive.hplsql.Ftp.run(Ftp.java:149)
>   at org.apache.hive.hplsql.Ftp.copyFiles(Ftp.java:121)
>   at org.apache.hive.hplsql.Ftp.run(Ftp.java:91)
>   at org.apache.hive.hplsql.Exec.visitCopy_from_ftp_stmt(Exec.java:1292)
>   at org.apache.hive.hplsql.Exec.visitCopy_from_ftp_stmt(Exec.java:52)
>   at 
> org.apache.hive.hplsql.HplsqlParser$Copy_from_ftp_stmtContext.accept(HplsqlParser.java:11956)
>   at 
> org.antlr.v4.runtime.tree.AbstractParseTreeVisitor.visitChildren(AbstractParseTreeVisitor.java:70)
>   at org.apache.hive.hplsql.Exec.visitStmt(Exec.java:994)
>   at org.apache.hive.hplsql.Exec.visitStmt(Exec.java:52)
>   at 
> org.apache.hive.hplsql.HplsqlParser$StmtContext.accept(HplsqlParser.java:1012)
>   at 
> org.antlr.v4.runtime.tree.AbstractParseTreeVisitor.visitChildren(AbstractParseTreeVisitor.java:70)
>   at 
> org.apache.hive.hplsql.HplsqlBaseVisitor.visitBlock(HplsqlBaseVisitor.java:28)
>   at 
> org.apache.hive.hplsql.HplsqlParser$BlockContext.accept(HplsqlParser.java:446)
>   at 
> org.antlr.v4.runtime.tree.AbstractParseTreeVisitor.visitChildren(AbstractParseTreeVisitor.java:70)
>   at org.apache.hive.hplsql.Exec.visitProgram(Exec.java:901)
>   at org.apache.hive.hplsql.Exec.visitProgram(Exec.java:52)
>   at 
> org.apache.hive.hplsql.HplsqlParser$ProgramContext.accept(HplsqlParser.java:389)
>   at 
> org.antlr.v4.runtime.tree.AbstractParseTreeVisitor.visit(AbstractParseTreeVisitor.java:42)
>   at org.apache.hive.hplsql.Exec.run(Exec.java:760)
>   at org.apache.hive.hplsql.Exec.run(Exec.java:736)
>   at org.apache.hive.hplsql.Hplsql.main(Hplsql.java:23)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {code}
> Traceback leads to:
> {code}
>   /**
>* Get the target file relative path and name
>*/
>   String getTargetFileName(String file) {
> int len = dir.length();
> return targetDir + file.substring(len);
>   }
> {code}
> in Ftp.java
> When I added DIR '/' this worked.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13540) Casts to numeric types don't seem to work in hplsql

2016-06-08 Thread Dmitry Tolpeko (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320587#comment-15320587
 ] 

Dmitry Tolpeko commented on HIVE-13540:
---

Done. Sorry I lost my Jira permissions and needed a few days to restore them. 

> Casts to numeric types don't seem to work in hplsql
> ---
>
> Key: HIVE-13540
> URL: https://issues.apache.org/jira/browse/HIVE-13540
> Project: Hive
>  Issue Type: Bug
>  Components: hpl/sql
>Affects Versions: 2.2.0
>Reporter: Carter Shanklin
>Assignee: Dmitry Tolpeko
> Fix For: 2.2.0
>
> Attachments: HIVE-13540.1.patch
>
>
> Maybe I'm doing this wrong? But it seems to be broken.
> Casts to string types seem to work fine, but not numbers.
> This code:
> {code}
> temp_int = CAST('1' AS int);
> print temp_int
> temp_float   = CAST('1.2' AS float);
> print temp_float
> temp_double  = CAST('1.2' AS double);
> print temp_double
> temp_decimal = CAST('1.2' AS decimal(10, 4));
> print temp_decimal
> temp_string = CAST('1.2' AS string);
> print temp_string
> {code}
> Produces this output:
> {code}
> [vagrant@hdp250 hplsql]$ hplsql -f temp2.hplsql
> which: no hbase in 
> (/usr/lib64/qt-3.3/bin:/usr/lib/jvm/java/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/puppetlabs/bin:/usr/local/share/jmeter/bin:/home/vagrant/bin)
> WARNING: Use "yarn jar" to launch YARN applications.
> null
> null
> null
> null
> 1.2
> {code}
> The software I'm using is not anything released but is pretty close to the 
> trunk, 2 weeks old at most.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13540) Casts to numeric types don't seem to work in hplsql

2016-06-08 Thread Dmitry Tolpeko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Tolpeko updated HIVE-13540:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Casts to numeric types don't seem to work in hplsql
> ---
>
> Key: HIVE-13540
> URL: https://issues.apache.org/jira/browse/HIVE-13540
> Project: Hive
>  Issue Type: Bug
>  Components: hpl/sql
>Affects Versions: 2.2.0
>Reporter: Carter Shanklin
>Assignee: Dmitry Tolpeko
> Fix For: 2.2.0
>
> Attachments: HIVE-13540.1.patch
>
>
> Maybe I'm doing this wrong? But it seems to be broken.
> Casts to string types seem to work fine, but not numbers.
> This code:
> {code}
> temp_int = CAST('1' AS int);
> print temp_int
> temp_float   = CAST('1.2' AS float);
> print temp_float
> temp_double  = CAST('1.2' AS double);
> print temp_double
> temp_decimal = CAST('1.2' AS decimal(10, 4));
> print temp_decimal
> temp_string = CAST('1.2' AS string);
> print temp_string
> {code}
> Produces this output:
> {code}
> [vagrant@hdp250 hplsql]$ hplsql -f temp2.hplsql
> which: no hbase in 
> (/usr/lib64/qt-3.3/bin:/usr/lib/jvm/java/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/puppetlabs/bin:/usr/local/share/jmeter/bin:/home/vagrant/bin)
> WARNING: Use "yarn jar" to launch YARN applications.
> null
> null
> null
> null
> 1.2
> {code}
> The software I'm using is not anything released but is pretty close to the 
> trunk, 2 weeks old at most.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

