[jira] [Updated] (HIVE-14658) UDF abs throws NPE when input arg type is string

2016-08-27 Thread Niklaus Xiao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niklaus Xiao updated HIVE-14658:

Fix Version/s: 2.2.0
   Status: Patch Available  (was: Open)

> UDF abs throws NPE when input arg type is string
> 
>
> Key: HIVE-14658
> URL: https://issues.apache.org/jira/browse/HIVE-14658
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Affects Versions: 1.3.0
>Reporter: Niklaus Xiao
>Assignee: Niklaus Xiao
>Priority: Minor
> Fix For: 2.2.0
>
> Attachments: HIVE-14658.1.patch
>
>
> I know this is not the right use case, but an NPE is not expected.
> {code}
> 0: jdbc:hive2://10.64.35.144:21066/> select abs("foo");
> Error: Error while compiling statement: FAILED: NullPointerException null 
> (state=42000,code=4)
> {code}
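
A hedged sketch of the kind of argument-type check that would turn the NPE into a readable compile-time error (illustrative only; the class below is hypothetical and not the committed HIVE-14658 patch):
{code}
// Hypothetical helper, not GenericUDFAbs itself: validate the argument's
// primitive category up front so an unsupported type such as string produces
// a clear UDFArgumentTypeException instead of a NullPointerException.
import org.apache.hadoop.hive.ql.exec.UDFArgumentTypeException;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector;

public final class AbsArgumentCheck {
  public static void check(ObjectInspector arg) throws UDFArgumentTypeException {
    if (arg.getCategory() != ObjectInspector.Category.PRIMITIVE) {
      throw new UDFArgumentTypeException(0,
          "ABS only takes primitive types, got " + arg.getTypeName());
    }
    switch (((PrimitiveObjectInspector) arg).getPrimitiveCategory()) {
      case BYTE: case SHORT: case INT: case LONG:
      case FLOAT: case DOUBLE: case DECIMAL:
        return; // supported numeric categories
      default:
        // string (and anything else) is rejected with a readable error
        throw new UDFArgumentTypeException(0,
            "ABS does not support " + arg.getTypeName() + " arguments");
    }
  }
}
{code}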



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14658) UDF abs throws NPE when input arg type is string

2016-08-27 Thread Niklaus Xiao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niklaus Xiao updated HIVE-14658:

Attachment: HIVE-14658.1.patch

> UDF abs throws NPE when input arg type is string
> 
>
> Key: HIVE-14658
> URL: https://issues.apache.org/jira/browse/HIVE-14658
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Affects Versions: 1.3.0
>Reporter: Niklaus Xiao
>Assignee: Niklaus Xiao
>Priority: Minor
> Fix For: 2.2.0
>
> Attachments: HIVE-14658.1.patch
>
>
> I know this is not the right use case, but an NPE is not expected.
> {code}
> 0: jdbc:hive2://10.64.35.144:21066/> select abs("foo");
> Error: Error while compiling statement: FAILED: NullPointerException null 
> (state=42000,code=4)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14658) UDF abs throws NPE when input arg type is string

2016-08-27 Thread Niklaus Xiao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niklaus Xiao updated HIVE-14658:

Affects Version/s: 2.2.0

> UDF abs throws NPE when input arg type is string
> 
>
> Key: HIVE-14658
> URL: https://issues.apache.org/jira/browse/HIVE-14658
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Affects Versions: 1.3.0, 2.2.0
>Reporter: Niklaus Xiao
>Assignee: Niklaus Xiao
>Priority: Minor
> Fix For: 2.2.0
>
> Attachments: HIVE-14658.1.patch
>
>
> I know this is not the right use case, but an NPE is not expected.
> {code}
> 0: jdbc:hive2://10.64.35.144:21066/> select abs("foo");
> Error: Error while compiling statement: FAILED: NullPointerException null 
> (state=42000,code=4)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14217) Druid integration

2016-08-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15440929#comment-15440929
 ] 

Hive QA commented on HIVE-14217:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12825684/HIVE-14217.03.patch

{color:green}SUCCESS:{color} +1 due to 17 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 10 failed/errored test(s), 10477 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ctas]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[explainuser_1]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3]
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[schemeAuthority2]
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[join_view]
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1016/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1016/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1016/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 10 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12825684 - PreCommit-HIVE-MASTER-Build

> Druid integration
> -
>
> Key: HIVE-14217
> URL: https://issues.apache.org/jira/browse/HIVE-14217
> Project: Hive
>  Issue Type: New Feature
>  Components: Druid integration
>Reporter: Julian Hyde
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-14217.01.patch, HIVE-14217.02.patch, 
> HIVE-14217.03.patch
>
>
> Allow Hive to query data in Druid



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14532) Enable qtests from IDE

2016-08-27 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15440975#comment-15440975
 ] 

Lefty Leverenz commented on HIVE-14532:
---

[~kgyrtkirk], your draft looks good overall.  Here are a few things to fix:

#  "happends"  ->  "happens"  (line 10, section 2)
#  "IDEGOAL=eclipse:eclipe"  ->  "eclipse:eclipse"  (line 28, section 3.0)
#  "anyone how wouldn't"  ->  "who"  (line 40, section 3.1)
#  "hive-metstore"  ->  "hive-metastore"  (line 67, section 3.2)
#  "if you inted"  ->  "if you intend"  (line 78, section 3.2)
#  "some may don't"  ->  "some may not" or "some maybe won't"  (line 100, 
section 4)
#  "seems like not found some files"  ->  "seems like some files weren't found" 
 (line 100, section 4)

I ignored some trivial edits, which can be done later when the doc goes into 
the Hive wiki.

> Enable qtests from IDE
> --
>
> Key: HIVE-14532
> URL: https://issues.apache.org/jira/browse/HIVE-14532
> Project: Hive
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Minor
> Attachments: HIVE-14532.1.patch, HIVE-14532.2.patch
>
>
> with HIVE-1 applied; I've played around with executing qtests from 
> Eclipse... after the patch seemed ok; I've checked it with:
> {code}
> git clean -dfx
> mvn package install eclipse:eclipse -Pitests -DskipTests
> mvn -q test -Pitests -Dtest=TestCliDriver -Dqfile=combine2.q
> {code}
> the last step, I think, is not required...but I bootstrapped and checked my 
> project integrity this way.
> After this I was able to execute {{TestCliDriver}} from Eclipse using 
> {{-Dqfile=combine.q}}; other qfiles may or may not work...but they will have at 
> least some chance of being usable.
> To my big surprise {{alter_concatenate_indexed_table.q}} also 
> passed...which contains relative file references - and I suspected that it 
> would have issues with that.
> note: I have the datanucleus plugin installed...and I use it when I need to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14515) Schema evolution uses slow INSERT INTO .. VALUES

2016-08-27 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-14515:

Attachment: HIVE-14515.03.patch

> Schema evolution uses slow INSERT INTO .. VALUES
> 
>
> Key: HIVE-14515
> URL: https://issues.apache.org/jira/browse/HIVE-14515
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-14515.01.patch, HIVE-14515.02.patch, 
> HIVE-14515.03.patch
>
>
> Use LOAD DATA LOCAL INPATH and INSERT INTO TABLE ... SELECT * FROM instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14515) Schema evolution uses slow INSERT INTO .. VALUES

2016-08-27 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-14515:

Status: In Progress  (was: Patch Available)

> Schema evolution uses slow INSERT INTO .. VALUES
> 
>
> Key: HIVE-14515
> URL: https://issues.apache.org/jira/browse/HIVE-14515
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-14515.01.patch, HIVE-14515.02.patch, 
> HIVE-14515.03.patch
>
>
> Use LOAD DATA LOCAL INPATH and INSERT INTO TABLE ... SELECT * FROM instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14515) Schema evolution uses slow INSERT INTO .. VALUES

2016-08-27 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-14515:

Status: Patch Available  (was: In Progress)

> Schema evolution uses slow INSERT INTO .. VALUES
> 
>
> Key: HIVE-14515
> URL: https://issues.apache.org/jira/browse/HIVE-14515
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-14515.01.patch, HIVE-14515.02.patch, 
> HIVE-14515.03.patch
>
>
> Use LOAD DATA LOCAL INPATH and INSERT INTO TABLE ... SELECT * FROM instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12540) Create function failed, but show functions display it

2016-08-27 Thread Weizhong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15441021#comment-15441021
 ] 

Weizhong commented on HIVE-12540:
-

I mean that if the function creation failed, it should not be displayed when running "show 
functions", but currently it is. I ran this on Hive 2.1 and it still has the problem.
{noformat}
hive> create function abc as 'abc';
Failed to register default.abc using class abc
FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.FunctionTask
hive> show functions like "default.*";
OK
default.abc
Time taken: 0.021 seconds, Fetched: 1 row(s)
{noformat}
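
A minimal, self-contained sketch of the invariant being asked for here (the registry class below is hypothetical, not Hive's FunctionTask/FunctionRegistry code): resolve the UDF class before the name is registered, so a failed CREATE FUNCTION leaves nothing behind for "show functions" to list.
{code}
// Hypothetical sketch only; the names below are illustrative, not Hive internals.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class FunctionCatalogSketch {
  private final Map<String, Class<?>> registered = new ConcurrentHashMap<>();

  /** Register db.name -> UDF class, but only after the class actually resolves. */
  void createFunction(String qualifiedName, String className) throws ClassNotFoundException {
    // Resolve the class first: if this throws, the catalog is never touched,
    // so a later "show functions" cannot list a half-created function.
    Class<?> udfClass = Class.forName(className);
    registered.put(qualifiedName, udfClass);
  }

  boolean isRegistered(String qualifiedName) {
    return registered.containsKey(qualifiedName);
  }
}
{code}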

> Create function failed, but show functions display it
> -
>
> Key: HIVE-12540
> URL: https://issues.apache.org/jira/browse/HIVE-12540
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.2.0, 1.2.1
>Reporter: Weizhong
>Priority: Minor
>
> {noformat}
> 0: jdbc:hive2://vm119:1> create function udfTest as 
> 'hive.udf.UDFArrayNotE';
> ERROR : Failed to register default.udftest using class hive.udf.UDFArrayNotE
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.FunctionTask (state=08S01,code=1)
> 0: jdbc:hive2://vm119:1> show functions;
> +-+--+
> |tab_name |
> +-+--+
> | ... |
> | default.udftest |
> | ... |
> +-+--+
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14155) Vectorization: Custom UDF Vectorization annotations are ignored

2016-08-27 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-14155:
---
   Resolution: Fixed
Fix Version/s: 2.2.0
 Release Note: Vectorization: Custom UDF Vectorization annotations are 
ignored (Gopal V, reviewed by Ashutosh Chauhan)
   Status: Resolved  (was: Patch Available)

> Vectorization: Custom UDF Vectorization annotations are ignored
> ---
>
> Key: HIVE-14155
> URL: https://issues.apache.org/jira/browse/HIVE-14155
> Project: Hive
>  Issue Type: Bug
>  Components: UDF, Vectorization
>Affects Versions: 2.2.0
>Reporter: Gopal V
>Assignee: Gopal V
> Fix For: 2.2.0
>
> Attachments: HIVE-14155.1.patch, HIVE-14155.2.patch, 
> HIVE-14155.3.patch
>
>
> {code}
> @VectorizedExpressions(value = { VectorStringRot13.class })
> {code}
> in a custom UDF is ignored because the check for annotations happens after 
> custom UDF detection.
> The custom UDF codepath is on the fail-over track of annotation lookups, so 
> the detection during validation of SEL is sufficient, instead of during 
> expression creation.
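
A hedged illustration of the lookup order argued for above (the helper is hypothetical and is not the committed patch): consult @VectorizedExpressions on the UDF class before concluding that a custom UDF has no vectorized implementation.
{code}
// Illustrative helper only: check the annotation before falling back to the
// non-vectorized custom-UDF path.
import org.apache.hadoop.hive.ql.exec.vector.VectorizedExpressions;

final class VectorizedAnnotationLookup {
  /** Returns the declared vectorized expression classes, or null if none are declared. */
  static Class<?>[] declaredVectorizedExpressions(Class<?> udfClass) {
    VectorizedExpressions ann = udfClass.getAnnotation(VectorizedExpressions.class);
    if (ann == null) {
      return null; // only now should the row-mode custom-UDF fallback apply
    }
    return ann.value(); // e.g. { VectorStringRot13.class } from the snippet above
  }
}
{code}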



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14515) Schema evolution uses slow INSERT INTO .. VALUES

2016-08-27 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15441041#comment-15441041
 ] 

Matt McCline commented on HIVE-14515:
-

#1026

> Schema evolution uses slow INSERT INTO .. VALUES
> 
>
> Key: HIVE-14515
> URL: https://issues.apache.org/jira/browse/HIVE-14515
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-14515.01.patch, HIVE-14515.02.patch, 
> HIVE-14515.03.patch
>
>
> Use LOAD DATA LOCAL INPATH and INSERT INTO TABLE ... SELECT * FROM instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14648) LLAP: Avoid private pages in the SSD cache

2016-08-27 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-14648:
---
  Resolution: Fixed
Release Note: 
 LLAP: Avoid private pages in the SSD cache (Gopal V, reviewed by Sergey 
Shelukhin)

  Status: Resolved  (was: Patch Available)

> LLAP: Avoid private pages in the SSD cache
> --
>
> Key: HIVE-14648
> URL: https://issues.apache.org/jira/browse/HIVE-14648
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 2.2.0
>Reporter: Gopal V
>Assignee: Gopal V
>Priority: Blocker
> Fix For: 2.2.0
>
> Attachments: HIVE-14648.1.patch
>
>
> There's no reason for the SSD cache to have private mappings to the cache 
> file; there's only one reader, and the memory overheads aren't worth it.
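
For context, a small java.nio illustration of the distinction (not the LLAP cache code; the file path is made up): a PRIVATE mapping is copy-on-write and carries per-mapping memory overhead, while a shared read-only mapping is enough when there is a single reader.
{code}
// Illustration only, not LLAP's allocator: private vs. shared mappings of a file.
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class MappingModes {
  public static void main(String[] args) throws IOException {
    try (FileChannel ch = FileChannel.open(Paths.get("/tmp/llap-cache.bin"),
        StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE)) {
      // Copy-on-write: writes stay private to this mapping at the cost of
      // duplicated pages -- the overhead the issue wants to avoid.
      MappedByteBuffer privatePages = ch.map(FileChannel.MapMode.PRIVATE, 0, ch.size());
      // Shared read-only view: adequate when the cache file has one reader.
      MappedByteBuffer sharedPages = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
      System.out.println(privatePages.isReadOnly() + " / " + sharedPages.isReadOnly());
    }
  }
}
{code}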



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14648) LLAP: Avoid private pages in the SSD cache

2016-08-27 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15441062#comment-15441062
 ] 

Gopal V commented on HIVE-14648:


Pushed to master, thanks [~sershe]

> LLAP: Avoid private pages in the SSD cache
> --
>
> Key: HIVE-14648
> URL: https://issues.apache.org/jira/browse/HIVE-14648
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 2.2.0
>Reporter: Gopal V
>Assignee: Gopal V
>Priority: Blocker
> Fix For: 2.2.0
>
> Attachments: HIVE-14648.1.patch
>
>
> There's no reason for the SSD cache to have private mappings to the cache 
> file; there's only one reader, and the memory overheads aren't worth it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14655) LLAP input format should escape the query string being passed to getSplits()

2016-08-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15441095#comment-15441095
 ] 

Hive QA commented on HIVE-14655:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12825695/HIVE-14655.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 10464 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ctas]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3]
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1017/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1017/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1017/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12825695 - PreCommit-HIVE-MASTER-Build

> LLAP input format should escape the query string being passed to getSplits()
> 
>
> Key: HIVE-14655
> URL: https://issues.apache.org/jira/browse/HIVE-14655
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-14655.1.patch
>
>
> Query may not be parsed correctly by get_splits() otherwise.
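
A hedged sketch of the escaping concern (this assumes the client query ends up embedded as a string literal inside a "SELECT get_splits('<query>', n)" statement; the helper is illustrative, not the attached patch):
{code}
// Illustrative only: quotes and backslashes in the user query must be escaped,
// or the wrapping get_splits() statement will not parse.
public final class GetSplitsQuerySketch {
  static String wrap(String userQuery, int numSplits) {
    String escaped = userQuery
        .replace("\\", "\\\\")   // escape backslashes first
        .replace("'", "\\'");    // then single quotes inside the literal
    return "SELECT get_splits('" + escaped + "', " + numSplits + ")";
  }

  public static void main(String[] args) {
    // A query containing its own quoted literal would otherwise break the outer statement.
    System.out.println(wrap("select * from t where name = 'foo'", 4));
  }
}
{code}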



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14612) org.apache.hive.service.cli.operation.TestOperationLoggingLayout.testSwitchLogLayout failure

2016-08-27 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15441100#comment-15441100
 ] 

Lefty Leverenz commented on HIVE-14612:
---

[~hsubramaniyan], this issue needs a fix version (2.2.0).  Thanks.

> org.apache.hive.service.cli.operation.TestOperationLoggingLayout.testSwitchLogLayout
>  failure
> 
>
> Key: HIVE-14612
> URL: https://issues.apache.org/jira/browse/HIVE-14612
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-14612.1.patch
>
>
> Failing for some time



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14656) Clean up driver instance in get_splits

2016-08-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15441228#comment-15441228
 ] 

Hive QA commented on HIVE-14656:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12825696/HIVE-14656.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 10464 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ctas]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3]
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1018/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1018/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1018/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12825696 - PreCommit-HIVE-MASTER-Build

> Clean up driver instance in get_splits
> --
>
> Key: HIVE-14656
> URL: https://issues.apache.org/jira/browse/HIVE-14656
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-14656.1.patch
>
>
> get_splits() creates a Driver instance that needs to be closed/cleaned up 
> after use.
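
A minimal sketch of the cleanup pattern being requested (hedged: it assumes the close()/destroy() methods on org.apache.hadoop.hive.ql.Driver and is not the attached patch):
{code}
// Sketch only: make sure the Driver created inside get_splits() is always released.
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.Driver;

public final class DriverCleanupSketch {
  static void planWithDriver(HiveConf conf, String query) throws Exception {
    Driver driver = new Driver(conf);
    try {
      driver.run(query);      // compile/plan the query used to produce splits
      // ... read whatever get_splits() needs from the driver here ...
    } finally {
      driver.close();         // release result resources even if run() failed
      driver.destroy();       // release locks/contexts held by the driver
    }
  }
}
{code}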



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14362) Support explain analyze in Hive

2016-08-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15441398#comment-15441398
 ] 

Hive QA commented on HIVE-14362:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12825703/HIVE-14362.05.patch

{color:green}SUCCESS:{color} +1 due to 8 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 12 failed/errored test(s), 10470 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ctas]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_llap_counters1]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_llap_counters]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_schema_evol_3a]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_0]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_1]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_3]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3]
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1019/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1019/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1019/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 12 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12825703 - PreCommit-HIVE-MASTER-Build

> Support explain analyze in Hive
> ---
>
> Key: HIVE-14362
> URL: https://issues.apache.org/jira/browse/HIVE-14362
> Project: Hive
>  Issue Type: New Feature
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-14362.01.patch, HIVE-14362.02.patch, 
> HIVE-14362.03.patch, HIVE-14362.05.patch, compare_on_cluster.pdf
>
>
> Right now all the explain levels only show stats estimated before the query runs. We 
> would like to have an explain analyze, similar to Postgres, that reports real stats 
> after the query runs. This will help identify the major gaps between 
> estimated and real stats, and will make not only query optimization better but also 
> query performance debugging easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14532) Enable qtests from IDE

2016-08-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15441574#comment-15441574
 ] 

Hive QA commented on HIVE-14532:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12825721/HIVE-14532.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 10464 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ctas]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3]
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1020/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1020/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1020/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12825721 - PreCommit-HIVE-MASTER-Build

> Enable qtests from IDE
> --
>
> Key: HIVE-14532
> URL: https://issues.apache.org/jira/browse/HIVE-14532
> Project: Hive
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Minor
> Attachments: HIVE-14532.1.patch, HIVE-14532.2.patch
>
>
> with HIVE-1 applied; I've played around with executing qtests from 
> Eclipse... after the patch seemed ok; I've checked it with:
> {code}
> git clean -dfx
> mvn package install eclipse:eclipse -Pitests -DskipTests
> mvn -q test -Pitests -Dtest=TestCliDriver -Dqfile=combine2.q
> {code}
> the last step, I think, is not required...but I bootstrapped and checked my 
> project integrity this way.
> After this I was able to execute {{TestCliDriver}} from Eclipse using 
> {{-Dqfile=combine.q}}; other qfiles may or may not work...but they will have at 
> least some chance of being usable.
> To my big surprise {{alter_concatenate_indexed_table.q}} also 
> passed...which contains relative file references - and I suspected that it 
> would have issues with that.
> note: I have the datanucleus plugin installed...and I use it when I need to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14612) org.apache.hive.service.cli.operation.TestOperationLoggingLayout.testSwitchLogLayout failure

2016-08-27 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-14612:
-
Fix Version/s: 2.2.0

> org.apache.hive.service.cli.operation.TestOperationLoggingLayout.testSwitchLogLayout
>  failure
> 
>
> Key: HIVE-14612
> URL: https://issues.apache.org/jira/browse/HIVE-14612
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Fix For: 2.2.0
>
> Attachments: HIVE-14612.1.patch
>
>
> Failing for some time



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14612) org.apache.hive.service.cli.operation.TestOperationLoggingLayout.testSwitchLogLayout failure

2016-08-27 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15441749#comment-15441749
 ] 

Hari Sankar Sivarama Subramaniyan commented on HIVE-14612:
--

Thanks, Lefty, for reminding me. I've updated it.

> org.apache.hive.service.cli.operation.TestOperationLoggingLayout.testSwitchLogLayout
>  failure
> 
>
> Key: HIVE-14612
> URL: https://issues.apache.org/jira/browse/HIVE-14612
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Fix For: 2.2.0
>
> Attachments: HIVE-14612.1.patch
>
>
> Failing for some time



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14651) Add a local cluster for Tez and LLAP

2016-08-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15441764#comment-15441764
 ] 

Hive QA commented on HIVE-14651:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12825728/HIVE-14651.03.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 10405 tests 
executed
*Failed tests:*
{noformat}
TestMiniSparkOnYarnCliDriver - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ctas]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[hybridgrace_hashjoin_1]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3]
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1021/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1021/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1021/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12825728 - PreCommit-HIVE-MASTER-Build

> Add a local cluster for Tez and LLAP
> 
>
> Key: HIVE-14651
> URL: https://issues.apache.org/jira/browse/HIVE-14651
> Project: Hive
>  Issue Type: Sub-task
>  Components: Testing Infrastructure
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-14651.01.patch, HIVE-14651.02.patch, 
> HIVE-14651.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14659) OutputStream won't close if caught exception in funtion unparseExprForValuesClause in SemanticAnalyzer.java

2016-08-27 Thread Fan Yunbo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fan Yunbo updated HIVE-14659:
-
Attachment: HIVE-14659.1.patch

> OutputStream won't close if caught exception in funtion 
> unparseExprForValuesClause in SemanticAnalyzer.java
> ---
>
> Key: HIVE-14659
> URL: https://issues.apache.org/jira/browse/HIVE-14659
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Fan Yunbo
>Assignee: Fan Yunbo
> Fix For: 2.2.0
>
> Attachments: HIVE-14659.1.patch
>
>
> I have met the problem that the Hive process cannot create new threads because of 
> lots of OutputStreams not being closed.
> Here is part of the jstack info:
> "Thread-35783" daemon prio=10 tid=0x7f8f58f02800 nid=0x18cc in 
> Object.wait() [0x7f8e632c]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:577)
> - locked <0x00061af52d50> (a java.util.LinkedList)
> and the related error log:
> org.apache.hadoop.hive.ql.parse.SemanticException: Unable to create temp file 
> for insert values Expression of type TOK_TABLE_OR_COL not supported in 
> insert/values
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genValuesTempTable(SemanticAnalyzer.java:812)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.doPhase1(SemanticAnalyzer.java:1207)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.doPhase1(SemanticAnalyzer.java:1410)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genResolvedParseTree(SemanticAnalyzer.java:10136)
> Caused by: org.apache.hadoop.hive.ql.parse.SemanticException: Expression of 
> type TOK_TABLE_OR_COL not supported in insert/values
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.unparseExprForValuesClause(SemanticAnalyzer.java:858)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genValuesTempTable(SemanticAnalyzer.java:785)
> ... 15 more
> It shows that the output stream won't close if an exception is caught in function 
> unparseExprForValuesClause in SemanticAnalyzer.java.
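
A hedged sketch of the fix pattern (the surrounding names -- tmpPath, unparsedRows -- are hypothetical stand-ins, not the actual SemanticAnalyzer code): open the temp-table stream in try-with-resources so it is closed on the exception path as well.
{code}
// Illustrative only: close the VALUES temp-file stream even when a row fails.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

final class ValuesTempFileSketch {
  static void writeValuesFile(Configuration conf, Path tmpPath,
      Iterable<String> unparsedRows) throws Exception {
    FileSystem fs = tmpPath.getFileSystem(conf);
    // try-with-resources guarantees out.close() runs even if writing a row fails
    // midway, so no DFSOutputStream (and its DataStreamer thread) is leaked.
    try (FSDataOutputStream out = fs.create(tmpPath)) {
      for (String row : unparsedRows) {
        out.writeBytes(row);
        out.writeBytes("\n");
      }
    }
  }
}
{code}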



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14659) OutputStream won't close if caught exception in funtion unparseExprForValuesClause in SemanticAnalyzer.java

2016-08-27 Thread Fan Yunbo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fan Yunbo updated HIVE-14659:
-
Description: 
I have met the problem that the Hive process cannot create new threads because of 
lots of OutputStreams not being closed.
Here is part of the jstack info:

"Thread-35783" daemon prio=10 tid=0x7f8f58f02800 nid=0x18cc in 
Object.wait() [0x7f8e632c]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:577)
- locked <0x00061af52d50> (a java.util.LinkedList)

and the related error log:
org.apache.hadoop.hive.ql.parse.SemanticException: Unable to create temp file 
for insert values Expression of type TOK_TABLE_OR_COL not supported in 
insert/values
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genValuesTempTable(SemanticAnalyzer.java:812)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.doPhase1(SemanticAnalyzer.java:1207)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.doPhase1(SemanticAnalyzer.java:1410)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genResolvedParseTree(SemanticAnalyzer.java:10136)
Caused by: org.apache.hadoop.hive.ql.parse.SemanticException: Expression of 
type TOK_TABLE_OR_COL not supported in insert/values
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.unparseExprForValuesClause(SemanticAnalyzer.java:858)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genValuesTempTable(SemanticAnalyzer.java:785)
... 15 more

It shows that the output stream won't close if an exception is caught in function 
unparseExprForValuesClause in SemanticAnalyzer.java.

  was:
I hava met the problem that Hive process cannot create new threads because of 
lots of OutputStream not closed.
Hear is the part of jstack info:

"Thread-35783" daemon prio=10 tid=0x7f8f58f02800 nid=0x18cc in 
Object.wait() [0x7f8e632c]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:577)
- locked <0x00061af52d50> (a java.util.LinkedList)

and the related error log:
org.apache.hadoop.hive.ql.parse.SemanticException: Unable to create temp file 
for insert values Expression of type TOK_TABLE_OR_COL not supported in 
insert/values
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genValuesTempTable(SemanticAnalyzer.java:812)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.doPhase1(SemanticAnalyzer.java:1207)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.doPhase1(SemanticAnalyzer.java:1410)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genResolvedParseTree(SemanticAnalyzer.java:10136)
Caused by: org.apache.hadoop.hive.ql.parse.SemanticException: Expression of 
type TOK_TABLE_OR_COL not supported in insert/values
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.unparseExprForValuesClause(SemanticAnalyzer.java:858)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genValuesTempTable(SemanticAnalyzer.java:785)
... 15 more

It shows the output stream won't close if caught exception in funtion 
unparseExprForValuesClause in SemanticAnalyzer.java


> OutputStream won't close if caught exception in funtion 
> unparseExprForValuesClause in SemanticAnalyzer.java
> ---
>
> Key: HIVE-14659
> URL: https://issues.apache.org/jira/browse/HIVE-14659
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Fan Yunbo
>Assignee: Fan Yunbo
> Fix For: 2.2.0
>
> Attachments: HIVE-14659.1.patch
>
>
> I have met the problem that the Hive process cannot create new threads because of 
> lots of OutputStreams not being closed.
> Here is part of the jstack info:
> "Thread-35783" daemon prio=10 tid=0x7f8f58f02800 nid=0x18cc in 
> Object.wait() [0x7f8e632c]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:577)
> - locked <0x00061af52d50> (a java.util.LinkedList)
> and the related error log:
> org.apache.hadoop.hive.ql.parse.SemanticException: Unable to create temp file 
> for insert values Expression of type TOK_TABLE_OR_COL not supported in 
> insert/values
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genValuesTempTable(SemanticAnalyzer.java:812)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.doPhase1(SemanticAnalyzer.java:1207)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.doPhase1(SemanticAnalyzer.java:1410)
> at

[jira] [Updated] (HIVE-14659) OutputStream won't close if caught exception in funtion unparseExprForValuesClause in SemanticAnalyzer.java

2016-08-27 Thread Fan Yunbo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fan Yunbo updated HIVE-14659:
-
Attachment: (was: HIVE-14659.1.patch)

> OutputStream won't close if caught exception in funtion 
> unparseExprForValuesClause in SemanticAnalyzer.java
> ---
>
> Key: HIVE-14659
> URL: https://issues.apache.org/jira/browse/HIVE-14659
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Fan Yunbo
>Assignee: Fan Yunbo
> Fix For: 2.2.0
>
>
> I have met the problem that the Hive process cannot create new threads because of 
> lots of OutputStreams not being closed.
> Here is part of the jstack info:
> "Thread-35783" daemon prio=10 tid=0x7f8f58f02800 nid=0x18cc in 
> Object.wait() [0x7f8e632c]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:577)
> - locked <0x00061af52d50> (a java.util.LinkedList)
> and the related error log:
> org.apache.hadoop.hive.ql.parse.SemanticException: Unable to create temp file 
> for insert values Expression of type TOK_TABLE_OR_COL not supported in 
> insert/values
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genValuesTempTable(SemanticAnalyzer.java:812)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.doPhase1(SemanticAnalyzer.java:1207)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.doPhase1(SemanticAnalyzer.java:1410)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genResolvedParseTree(SemanticAnalyzer.java:10136)
> Caused by: org.apache.hadoop.hive.ql.parse.SemanticException: Expression of 
> type TOK_TABLE_OR_COL not supported in insert/values
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.unparseExprForValuesClause(SemanticAnalyzer.java:858)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genValuesTempTable(SemanticAnalyzer.java:785)
> ... 15 more
> It shows that the output stream won't close if an exception is caught in function 
> unparseExprForValuesClause in SemanticAnalyzer.java.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14659) OutputStream won't close if caught exception in funtion unparseExprForValuesClause in SemanticAnalyzer.java

2016-08-27 Thread Fan Yunbo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fan Yunbo updated HIVE-14659:
-
Attachment: HIVE-14659.1.patch

> OutputStream won't close if caught exception in funtion 
> unparseExprForValuesClause in SemanticAnalyzer.java
> ---
>
> Key: HIVE-14659
> URL: https://issues.apache.org/jira/browse/HIVE-14659
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Fan Yunbo
>Assignee: Fan Yunbo
> Fix For: 2.2.0
>
> Attachments: HIVE-14659.1.patch
>
>
> I have met the problem that the Hive process cannot create new threads because of 
> lots of OutputStreams not being closed.
> Here is part of the jstack info:
> "Thread-35783" daemon prio=10 tid=0x7f8f58f02800 nid=0x18cc in 
> Object.wait() [0x7f8e632c]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:577)
> - locked <0x00061af52d50> (a java.util.LinkedList)
> and the related error log:
> org.apache.hadoop.hive.ql.parse.SemanticException: Unable to create temp file 
> for insert values Expression of type TOK_TABLE_OR_COL not supported in 
> insert/values
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genValuesTempTable(SemanticAnalyzer.java:812)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.doPhase1(SemanticAnalyzer.java:1207)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.doPhase1(SemanticAnalyzer.java:1410)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genResolvedParseTree(SemanticAnalyzer.java:10136)
> Caused by: org.apache.hadoop.hive.ql.parse.SemanticException: Expression of 
> type TOK_TABLE_OR_COL not supported in insert/values
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.unparseExprForValuesClause(SemanticAnalyzer.java:858)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genValuesTempTable(SemanticAnalyzer.java:785)
> ... 15 more
> It shows that the output stream won't close if an exception is caught in function 
> unparseExprForValuesClause in SemanticAnalyzer.java.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HIVE-14659) OutputStream won't close if caught exception in funtion unparseExprForValuesClause in SemanticAnalyzer.java

2016-08-27 Thread Fan Yunbo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-14659 started by Fan Yunbo.

> OutputStream won't close if caught exception in funtion 
> unparseExprForValuesClause in SemanticAnalyzer.java
> ---
>
> Key: HIVE-14659
> URL: https://issues.apache.org/jira/browse/HIVE-14659
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Fan Yunbo
>Assignee: Fan Yunbo
> Fix For: 2.2.0
>
> Attachments: HIVE-14659.1.patch
>
>
> I have met the problem that the Hive process cannot create new threads because of 
> lots of OutputStreams not being closed.
> Here is part of the jstack info:
> "Thread-35783" daemon prio=10 tid=0x7f8f58f02800 nid=0x18cc in 
> Object.wait() [0x7f8e632c]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:577)
> - locked <0x00061af52d50> (a java.util.LinkedList)
> and the related error log:
> org.apache.hadoop.hive.ql.parse.SemanticException: Unable to create temp file 
> for insert values Expression of type TOK_TABLE_OR_COL not supported in 
> insert/values
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genValuesTempTable(SemanticAnalyzer.java:812)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.doPhase1(SemanticAnalyzer.java:1207)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.doPhase1(SemanticAnalyzer.java:1410)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genResolvedParseTree(SemanticAnalyzer.java:10136)
> Caused by: org.apache.hadoop.hive.ql.parse.SemanticException: Expression of 
> type TOK_TABLE_OR_COL not supported in insert/values
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.unparseExprForValuesClause(SemanticAnalyzer.java:858)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genValuesTempTable(SemanticAnalyzer.java:785)
> ... 15 more
> It shows that the output stream won't close if an exception is caught in function 
> unparseExprForValuesClause in SemanticAnalyzer.java.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work stopped] (HIVE-14659) OutputStream won't close if caught exception in funtion unparseExprForValuesClause in SemanticAnalyzer.java

2016-08-27 Thread Fan Yunbo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-14659 stopped by Fan Yunbo.

> OutputStream won't close if caught exception in funtion 
> unparseExprForValuesClause in SemanticAnalyzer.java
> ---
>
> Key: HIVE-14659
> URL: https://issues.apache.org/jira/browse/HIVE-14659
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Fan Yunbo
>Assignee: Fan Yunbo
> Fix For: 2.2.0
>
> Attachments: HIVE-14659.1.patch
>
>
> I have met the problem that the Hive process cannot create new threads because of 
> lots of OutputStreams not being closed.
> Here is part of the jstack info:
> "Thread-35783" daemon prio=10 tid=0x7f8f58f02800 nid=0x18cc in 
> Object.wait() [0x7f8e632c]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:577)
> - locked <0x00061af52d50> (a java.util.LinkedList)
> and the related error log:
> org.apache.hadoop.hive.ql.parse.SemanticException: Unable to create temp file 
> for insert values Expression of type TOK_TABLE_OR_COL not supported in 
> insert/values
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genValuesTempTable(SemanticAnalyzer.java:812)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.doPhase1(SemanticAnalyzer.java:1207)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.doPhase1(SemanticAnalyzer.java:1410)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genResolvedParseTree(SemanticAnalyzer.java:10136)
> Caused by: org.apache.hadoop.hive.ql.parse.SemanticException: Expression of 
> type TOK_TABLE_OR_COL not supported in insert/values
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.unparseExprForValuesClause(SemanticAnalyzer.java:858)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genValuesTempTable(SemanticAnalyzer.java:785)
> ... 15 more
> It shows that the output stream won't close if an exception is caught in function 
> unparseExprForValuesClause in SemanticAnalyzer.java.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HIVE-14659) OutputStream won't close if caught exception in funtion unparseExprForValuesClause in SemanticAnalyzer.java

2016-08-27 Thread Fan Yunbo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-14659 started by Fan Yunbo.

> OutputStream won't close if caught exception in funtion 
> unparseExprForValuesClause in SemanticAnalyzer.java
> ---
>
> Key: HIVE-14659
> URL: https://issues.apache.org/jira/browse/HIVE-14659
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Fan Yunbo
>Assignee: Fan Yunbo
> Fix For: 2.2.0
>
> Attachments: HIVE-14659.1.patch
>
>
> I have met the problem that the Hive process cannot create new threads because of 
> lots of OutputStreams not being closed.
> Here is part of the jstack info:
> "Thread-35783" daemon prio=10 tid=0x7f8f58f02800 nid=0x18cc in 
> Object.wait() [0x7f8e632c]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:577)
> - locked <0x00061af52d50> (a java.util.LinkedList)
> and the related error log:
> org.apache.hadoop.hive.ql.parse.SemanticException: Unable to create temp file 
> for insert values Expression of type TOK_TABLE_OR_COL not supported in 
> insert/values
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genValuesTempTable(SemanticAnalyzer.java:812)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.doPhase1(SemanticAnalyzer.java:1207)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.doPhase1(SemanticAnalyzer.java:1410)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genResolvedParseTree(SemanticAnalyzer.java:10136)
> Caused by: org.apache.hadoop.hive.ql.parse.SemanticException: Expression of 
> type TOK_TABLE_OR_COL not supported in insert/values
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.unparseExprForValuesClause(SemanticAnalyzer.java:858)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genValuesTempTable(SemanticAnalyzer.java:785)
> ... 15 more
> It shows that the output stream won't close if an exception is caught in function 
> unparseExprForValuesClause in SemanticAnalyzer.java.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14659) OutputStream won't close if caught exception in funtion unparseExprForValuesClause in SemanticAnalyzer.java

2016-08-27 Thread Fan Yunbo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fan Yunbo updated HIVE-14659:
-
Status: Patch Available  (was: In Progress)

> OutputStream won't close if caught exception in funtion 
> unparseExprForValuesClause in SemanticAnalyzer.java
> ---
>
> Key: HIVE-14659
> URL: https://issues.apache.org/jira/browse/HIVE-14659
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Fan Yunbo
>Assignee: Fan Yunbo
> Fix For: 2.2.0
>
> Attachments: HIVE-14659.1.patch
>
>
> I have met the problem that the Hive process cannot create new threads because of 
> lots of OutputStreams not being closed.
> Here is part of the jstack info:
> "Thread-35783" daemon prio=10 tid=0x7f8f58f02800 nid=0x18cc in 
> Object.wait() [0x7f8e632c]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:577)
> - locked <0x00061af52d50> (a java.util.LinkedList)
> and the related error log:
> org.apache.hadoop.hive.ql.parse.SemanticException: Unable to create temp file 
> for insert values Expression of type TOK_TABLE_OR_COL not supported in 
> insert/values
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genValuesTempTable(SemanticAnalyzer.java:812)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.doPhase1(SemanticAnalyzer.java:1207)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.doPhase1(SemanticAnalyzer.java:1410)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genResolvedParseTree(SemanticAnalyzer.java:10136)
> Caused by: org.apache.hadoop.hive.ql.parse.SemanticException: Expression of 
> type TOK_TABLE_OR_COL not supported in insert/values
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.unparseExprForValuesClause(SemanticAnalyzer.java:858)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genValuesTempTable(SemanticAnalyzer.java:785)
> ... 15 more
> It shows that the output stream won't close if an exception is caught in function 
> unparseExprForValuesClause in SemanticAnalyzer.java.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14536) Unit test code cleanup

2016-08-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15441903#comment-15441903
 ] 

Hive QA commented on HIVE-14536:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12825736/HIVE-14536.7.patch

{color:green}SUCCESS:{color} +1 due to 24 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 10464 tests 
executed
*Failed tests:*
{noformat}
TestBeeLineDriver - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ctas]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3]
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[schemeAuthority2]
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1022/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1022/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1022/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 9 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12825736 - PreCommit-HIVE-MASTER-Build

> Unit test code cleanup
> --
>
> Key: HIVE-14536
> URL: https://issues.apache.org/jira/browse/HIVE-14536
> Project: Hive
>  Issue Type: Sub-task
>  Components: Testing Infrastructure
>Reporter: Peter Vary
>Assignee: Peter Vary
> Attachments: HIVE-14536.5.patch, HIVE-14536.6.patch, 
> HIVE-14536.7.patch, HIVE-14536.patch
>
>
> Clean up the itest infrastructure to create readable, easy-to-understand 
> code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14627) Improvements to MiniMr tests

2016-08-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15442017#comment-15442017
 ] 

Hive QA commented on HIVE-14627:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12825747/HIVE-14627.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 10464 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ctas]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3]
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropTable
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1023/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1023/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1023/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 9 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12825747 - PreCommit-HIVE-MASTER-Build

> Improvements to MiniMr tests
> 
>
> Key: HIVE-14627
> URL: https://issues.apache.org/jira/browse/HIVE-14627
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 2.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-14627.1.patch, HIVE-14627.2.patch, 
> HIVE-14627.3.patch
>
>
> Currently MiniMr is extremely slow. I ran udf_using.q on MiniMr, and the following 
> is the execution time breakdown:
> Total time - 13m59s
> Junit reported time for testcase - 50s
> Most of the time is spent in creating/loading/analyzing initial tables - ~12m
> Cleanup - ~1m
> There is a huge overhead for running MiniMr tests compared to the actual 
> test runtime. 
> Ran the same test without init script.
> Total time - 2m17s
> Junit reported time for testcase - 52s
> Also, I noticed some tests that don't have to run on MiniMr (like 
> udf_using.q, which does not require MiniMr: it just reads/writes to HDFS, which 
> we can do in MiniTez/MiniLlap, which are way faster). Most tests access only 
> a few initial tables to read a few rows from them. We can fix those tests to 
> load just the table that is required for the test instead of all initial 
> tables. We can also remove the q_init_script.sql initialization for MiniMr after 
> rewriting and moving over the unwanted tests, which should cut down the 
> runtime a lot.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13383) RetryingMetaStoreClient retries non retriable embedded metastore client

2016-08-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15442145#comment-15442145
 ] 

Hive QA commented on HIVE-13383:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12796031/HIVE-13383.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 10457 tests 
executed
*Failed tests:*
{noformat}
TestJdbcMetadataApiAuth - did not produce a TEST-*.xml file
TestSparkNegativeCliDriver - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ctas]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3]
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1024/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1024/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1024/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12796031 - PreCommit-HIVE-MASTER-Build

> RetryingMetaStoreClient retries non retriable embedded metastore client 
> 
>
> Key: HIVE-13383
> URL: https://issues.apache.org/jira/browse/HIVE-13383
> Project: Hive
>  Issue Type: Bug
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Attachments: HIVE-13383.1.patch
>
>
> Embedded metastore clients can't be retried; they throw an exception - "For 
> direct MetaStore DB connections, we don't support retries at the client 
> level."
> This tends to mask the real error that triggered the retry attempts. 
> RetryingMetaStoreClient shouldn't even attempt to reconnect when a 
> direct/embedded metastore client is used.
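
A minimal sketch of the proposed guard, assuming the usual dynamic-proxy layout (RetryingMetaStoreClient is an InvocationHandler wrapping the real client); the names RetryingClientSketch, base and localMetaStore below are illustrative only, not the actual Hive code:

{code:java}
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

// Illustrative sketch, not the Hive patch: skip the reconnect/retry path
// entirely when the wrapped client is an embedded (direct-DB) metastore client.
public class RetryingClientSketch implements InvocationHandler {
  private final Object base;             // the real metastore client (assumed name)
  private final boolean localMetaStore;  // true for direct/embedded connections (assumed name)

  RetryingClientSketch(Object base, boolean localMetaStore) {
    this.base = base;
    this.localMetaStore = localMetaStore;
  }

  @Override
  public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
    try {
      return method.invoke(base, args);
    } catch (InvocationTargetException e) {
      if (localMetaStore) {
        // Embedded clients cannot be retried: rethrow the original cause
        // instead of masking it with a "we don't support retries" failure.
        throw e.getCause();
      }
      // Remote (thrift) client: a real implementation would reconnect and
      // retry here; rethrowing keeps this sketch short.
      throw e.getCause();
    }
  }
}
{code}

The essential point is simply that the embedded path should surface the original cause rather than attempt a reconnect that is guaranteed to fail.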



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-14660) ArrayIndexOutOfBoundsException on delete

2016-08-27 Thread Benjamin BONNET (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin BONNET reassigned HIVE-14660:
--

Assignee: Benjamin BONNET

> ArrayIndexOutOfBoundsException on delete
> 
>
> Key: HIVE-14660
> URL: https://issues.apache.org/jira/browse/HIVE-14660
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 1.2.1
>Reporter: Benjamin BONNET
>Assignee: Benjamin BONNET
>
> Hi,
> DELETE on an ACID table may fail on an ArrayIndexOutOfBoundsException.
> That bug occurs at the Reduce phase when there are fewer reducers than the number 
> of table buckets.
> In order to reproduce, create a simple ACID table :
> {code:sql}
> CREATE TABLE test (`cle` bigint,`valeur` string)
>  PARTITIONED BY (`annee` string)
>  CLUSTERED BY (cle) INTO 5 BUCKETS
>  TBLPROPERTIES ('transactional'='true');
> {code}
> Populate it with lines distributed among all buckets, with random values and 
> a few partitions.
> Force the number of reducers to be smaller than the number of buckets :
> {code:sql}
> set mapred.reduce.tasks=1;
> {code}
> Then execute a delete that will remove many lines from all the buckets.
> {code:sql}
> DELETE FROM test WHERE valeur<'some_value';
> {code}
> Then you will get an ArrayIndexOutOfBoundsException :
> {code}
> 2016-08-22 21:21:02,500 [FATAL] [TezChild] |tez.ReduceRecordSource|: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row (tag=0) 
> {"key":{"reducesinkkey0":{"transactionid":119,"bucketid":0,"rowid":0}},"value":{"_col0":"4"}}
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:352)
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:274)
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:252)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:150)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:139)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:344)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:181)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:172)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:172)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:168)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 5
> at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:769)
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:838)
> at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:88)
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:343)
> ... 17 more
> {code}
> Adding logs into FileSinkOperator, one sees that the operator deals with buckets 
> 0, 1, 2, 3, 4, then 0 again, and fails at line 769: each time you switch 
> buckets, you move forward in an array of 5 elements (one per bucket). 
> So when you get bucket 0 for the second time, you run off the end of the array...
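
To make that concrete, here is a toy sketch of the indexing mistake and the obvious fix. This is not the actual FileSinkOperator code; all names are invented for illustration:

{code:java}
import java.util.HashMap;
import java.util.Map;

// Toy illustration of the bug described above, not Hive code.
public class BucketWriterSketch {
  private static final int NUM_BUCKETS = 5;
  private final Object[] writers = new Object[NUM_BUCKETS];

  private int currentBucket = -1;
  private int slot = -1;

  // Buggy: the slot advances on every bucket switch, so the sequence
  // 0,1,2,3,4,0 asks for slot 5 -> ArrayIndexOutOfBoundsException.
  Object writerBuggy(int bucketId) {
    if (bucketId != currentBucket) {
      currentBucket = bucketId;
      slot++;
    }
    return writers[slot];
  }

  private final Map<Integer, Integer> slotByBucket = new HashMap<>();

  // Fixed: reuse the slot already assigned to this bucket id, so revisiting
  // a bucket never walks past the end of the array.
  Object writerFixed(int bucketId) {
    int s = slotByBucket.computeIfAbsent(bucketId, b -> slotByBucket.size());
    return writers[s];
  }
}
{code}

Whatever the real patch does, the essential change is the same: the per-bucket writer slot has to be keyed by bucket id (or reset when a bucket is revisited), not advanced on every bucket switch.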



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14660) ArrayIndexOutOfBoundsException on delete

2016-08-27 Thread Benjamin BONNET (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin BONNET updated HIVE-14660:
---
Attachment: HIVE-14660.1-banch-1.2.patch

> ArrayIndexOutOfBoundsException on delete
> 
>
> Key: HIVE-14660
> URL: https://issues.apache.org/jira/browse/HIVE-14660
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 1.2.1
>Reporter: Benjamin BONNET
>Assignee: Benjamin BONNET
> Attachments: HIVE-14660.1-banch-1.2.patch
>
>
> Hi,
> DELETE on an ACID table may fail on an ArrayIndexOutOfBoundsException.
> That bug occurs at the Reduce phase when there are fewer reducers than the number 
> of table buckets.
> In order to reproduce, create a simple ACID table :
> {code:sql}
> CREATE TABLE test (`cle` bigint,`valeur` string)
>  PARTITIONED BY (`annee` string)
>  CLUSTERED BY (cle) INTO 5 BUCKETS
>  TBLPROPERTIES ('transactional'='true');
> {code}
> Populate it with lines distributed among all buckets, with random values and 
> a few partitions.
> Force the number of reducers to be smaller than the number of buckets :
> {code:sql}
> set mapred.reduce.tasks=1;
> {code}
> Then execute a delete that will remove many lines from all the buckets.
> {code:sql}
> DELETE FROM test WHERE valeur<'some_value';
> {code}
> Then you will get an ArrayIndexOutOfBoundsException :
> {code}
> 2016-08-22 21:21:02,500 [FATAL] [TezChild] |tez.ReduceRecordSource|: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row (tag=0) 
> {"key":{"reducesinkkey0":{"transactionid":119,"bucketid":0,"rowid":0}},"value":{"_col0":"4"}}
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:352)
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:274)
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:252)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:150)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:139)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:344)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:181)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:172)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:172)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:168)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 5
> at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:769)
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:838)
> at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:88)
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:343)
> ... 17 more
> {code}
> Adding logs into FileSinkOperator, one sees that the operator deals with buckets 
> 0, 1, 2, 3, 4, then 0 again, and fails at line 769: each time you switch 
> buckets, you move forward in an array of 5 elements (one per bucket). 
> So when you get bucket 0 for the second time, you run off the end of the array...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14660) ArrayIndexOutOfBoundsException on delete

2016-08-27 Thread Benjamin BONNET (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin BONNET updated HIVE-14660:
---
Status: Patch Available  (was: Open)

> ArrayIndexOutOfBoundsException on delete
> 
>
> Key: HIVE-14660
> URL: https://issues.apache.org/jira/browse/HIVE-14660
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 1.2.1
>Reporter: Benjamin BONNET
>Assignee: Benjamin BONNET
> Attachments: HIVE-14660.1-banch-1.2.patch
>
>
> Hi,
> DELETE on an ACID table may fail on an ArrayIndexOutOfBoundsException.
> That bug occurs at the Reduce phase when there are fewer reducers than the number 
> of table buckets.
> In order to reproduce, create a simple ACID table :
> {code:sql}
> CREATE TABLE test (`cle` bigint,`valeur` string)
>  PARTITIONED BY (`annee` string)
>  CLUSTERED BY (cle) INTO 5 BUCKETS
>  TBLPROPERTIES ('transactional'='true');
> {code}
> Populate it with lines distributed among all buckets, with random values and 
> a few partitions.
> Force the number of reducers to be smaller than the number of buckets :
> {code:sql}
> set mapred.reduce.tasks=1;
> {code}
> Then execute a delete that will remove many lines from all the buckets.
> {code:sql}
> DELETE FROM test WHERE valeur<'some_value';
> {code}
> Then you will get an ArrayIndexOutOfBoundsException :
> {code}
> 2016-08-22 21:21:02,500 [FATAL] [TezChild] |tez.ReduceRecordSource|: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row (tag=0) 
> {"key":{"reducesinkkey0":{"transactionid":119,"bucketid":0,"rowid":0}},"value":{"_col0":"4"}}
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:352)
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:274)
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:252)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:150)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:139)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:344)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:181)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:172)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:172)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:168)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 5
> at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:769)
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:838)
> at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:88)
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:343)
> ... 17 more
> {code}
> Adding logs into FileSinkOperator, one sees that the operator deals with buckets 
> 0, 1, 2, 3, 4, then 0 again, and fails at line 769: each time you switch 
> buckets, you move forward in an array of 5 elements (one per bucket). 
> So when you get bucket 0 for the second time, you run off the end of the array...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14660) ArrayIndexOutOfBoundsException on delete

2016-08-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15442216#comment-15442216
 ] 

ASF GitHub Bot commented on HIVE-14660:
---

GitHub user bonnetb opened a pull request:

https://github.com/apache/hive/pull/100

HIVE-14660 : ArrayIndexOutOfBounds on delete



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bonnetb/hive HIVE-14660

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hive/pull/100.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #100


commit 21f333f0483249949dd97a6960c169b6dd255491
Author: Benjamin BONNET 
Date:   2016-08-27T20:20:15Z

HIVE-14660 : ArrayIndexOutOfBounds on delete




> ArrayIndexOutOfBoundsException on delete
> 
>
> Key: HIVE-14660
> URL: https://issues.apache.org/jira/browse/HIVE-14660
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 1.2.1
>Reporter: Benjamin BONNET
>Assignee: Benjamin BONNET
> Attachments: HIVE-14660.1-banch-1.2.patch
>
>
> Hi,
> DELETE on an ACID table may fail on an ArrayIndexOutOfBoundsException.
> That bug occurs at the Reduce phase when there are fewer reducers than the number 
> of table buckets.
> In order to reproduce, create a simple ACID table :
> {code:sql}
> CREATE TABLE test (`cle` bigint,`valeur` string)
>  PARTITIONED BY (`annee` string)
>  CLUSTERED BY (cle) INTO 5 BUCKETS
>  TBLPROPERTIES ('transactional'='true');
> {code}
> Populate it with lines distributed among all buckets, with random values and 
> a few partitions.
> Force the number of reducers to be smaller than the number of buckets :
> {code:sql}
> set mapred.reduce.tasks=1;
> {code}
> Then execute a delete that will remove many lines from all the buckets.
> {code:sql}
> DELETE FROM test WHERE valeur<'some_value';
> {code}
> Then you will get an ArrayIndexOutOfBoundsException :
> {code}
> 2016-08-22 21:21:02,500 [FATAL] [TezChild] |tez.ReduceRecordSource|: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row (tag=0) 
> {"key":{"reducesinkkey0":{"transactionid":119,"bucketid":0,"rowid":0}},"value":{"_col0":"4"}}
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:352)
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:274)
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:252)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:150)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:139)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:344)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:181)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:172)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:172)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:168)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 5
> at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:769)
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:838)
> at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:88)
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:343)
> ... 17 more
> {code}
> Adding logs into FileSinkOperator, one sees that the operator deals with buckets 
> 0, 1, 2, 3, 4, then 0 again, and fails at line 769: each time you switch 
> buckets, you move forward in an array of 5 elements (one per bucket). 
> So when you get bucket 0 for the second time, you run off the end of the array...

[jira] [Commented] (HIVE-14658) UDF abs throws NPE when input arg type is string

2016-08-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15442253#comment-15442253
 ] 

Hive QA commented on HIVE-14658:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12825785/HIVE-14658.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 10464 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ctas]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3]
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1025/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1025/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1025/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12825785 - PreCommit-HIVE-MASTER-Build

> UDF abs throws NPE when input arg type is string
> 
>
> Key: HIVE-14658
> URL: https://issues.apache.org/jira/browse/HIVE-14658
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Affects Versions: 1.3.0, 2.2.0
>Reporter: Niklaus Xiao
>Assignee: Niklaus Xiao
>Priority: Minor
> Fix For: 2.2.0
>
> Attachments: HIVE-14658.1.patch
>
>
> I know this is not the right use case, but NPE is not expected.
> {code}
> 0: jdbc:hive2://10.64.35.144:21066/> select abs("foo");
> Error: Error while compiling statement: FAILED: NullPointerException null 
> (state=42000,code=4)
> {code}
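
A sketch of the kind of type check that would turn this into a readable error. It is illustrative, not necessarily what the attached patch does; the exact utility calls (UDFArgumentTypeException, PrimitiveObjectInspectorUtils.getPrimitiveGrouping) are used here on the assumption that they behave as in other Hive GenericUDFs:

{code:java}
import org.apache.hadoop.hive.ql.exec.UDFArgumentTypeException;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector.Category;
import org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorUtils;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorUtils.PrimitiveGrouping;

// Illustrative helper for a GenericUDF's initialize(): reject non-numeric
// arguments with a clear message instead of letting an NPE escape.
public final class AbsArgumentCheck {
  static void checkNumericArgument(ObjectInspector[] arguments) throws UDFArgumentTypeException {
    if (arguments[0].getCategory() != Category.PRIMITIVE) {
      throw new UDFArgumentTypeException(0,
          "ABS only takes primitive types, got " + arguments[0].getTypeName());
    }
    PrimitiveObjectInspector poi = (PrimitiveObjectInspector) arguments[0];
    PrimitiveGrouping group =
        PrimitiveObjectInspectorUtils.getPrimitiveGrouping(poi.getPrimitiveCategory());
    if (group != PrimitiveGrouping.NUMERIC_GROUP) {
      throw new UDFArgumentTypeException(0,
          "ABS only takes numeric types, got " + arguments[0].getTypeName());
    }
  }
}
{code}

With a check like this, select abs("foo") should fail at compile time with a clear type error rather than a NullPointerException.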



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14515) Schema evolution uses slow INSERT INTO .. VALUES

2016-08-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15442363#comment-15442363
 ] 

Hive QA commented on HIVE-14515:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12825799/HIVE-14515.03.patch

{color:green}SUCCESS:{color} +1 due to 28 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 10466 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[schema_evol_stats]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3]
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1026/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1026/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1026/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12825799 - PreCommit-HIVE-MASTER-Build

> Schema evolution uses slow INSERT INTO .. VALUES
> 
>
> Key: HIVE-14515
> URL: https://issues.apache.org/jira/browse/HIVE-14515
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-14515.01.patch, HIVE-14515.02.patch, 
> HIVE-14515.03.patch
>
>
> Use LOAD DATA LOCAL INPATH and INSERT INTO TABLE ... SELECT * FROM instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14659) OutputStream won't close if caught exception in function unparseExprForValuesClause in SemanticAnalyzer.java

2016-08-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15442464#comment-15442464
 ] 

Hive QA commented on HIVE-14659:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12825876/HIVE-14659.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 10464 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ctas]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3]
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1027/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1027/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1027/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12825876 - PreCommit-HIVE-MASTER-Build

> OutputStream won't close if caught exception in function 
> unparseExprForValuesClause in SemanticAnalyzer.java
> ---
>
> Key: HIVE-14659
> URL: https://issues.apache.org/jira/browse/HIVE-14659
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Fan Yunbo
>Assignee: Fan Yunbo
> Fix For: 2.2.0
>
> Attachments: HIVE-14659.1.patch
>
>
> I have met the problem that the Hive process cannot create new threads because of 
> lots of OutputStreams not being closed.
> Here is the part of jstack info:
> "Thread-35783" daemon prio=10 tid=0x7f8f58f02800 nid=0x18cc in 
> Object.wait() [0x7f8e632c]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:577)
> - locked <0x00061af52d50> (a java.util.LinkedList)
> and the related error log:
> org.apache.hadoop.hive.ql.parse.SemanticException: Unable to create temp file 
> for insert values Expression of type TOK_TABLE_OR_COL not supported in 
> insert/values
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genValuesTempTable(SemanticAnalyzer.java:812)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.doPhase1(SemanticAnalyzer.java:1207)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.doPhase1(SemanticAnalyzer.java:1410)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genResolvedParseTree(SemanticAnalyzer.java:10136)
> Caused by: org.apache.hadoop.hive.ql.parse.SemanticException: Expression of 
> type TOK_TABLE_OR_COL not supported in insert/values
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.unparseExprForValuesClause(SemanticAnalyzer.java:858)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genValuesTempTable(SemanticAnalyzer.java:785)
> ... 15 more
> It shows the output stream won't be closed if an exception is caught in function 
> unparseExprForValuesClause in SemanticAnalyzer.java
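
The usual fix pattern, sketched generically (this is not the actual SemanticAnalyzer code; the class, method, and parameter names are invented for illustration): do the writing inside try-with-resources, or close the stream in a finally block, so the HDFS output stream is released even when unparseExprForValuesClause throws a SemanticException.

{code:java}
import java.io.IOException;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.util.List;

// Generic illustration of the fix, not the actual Hive code: the writer (and
// the underlying stream) is closed on every exit path, so no half-open
// DFSOutputStream / DataStreamer threads are left behind after a failure.
public final class ClosingWriterSketch {
  static void writeValuesRows(OutputStream rawOut, List<String> rows) throws IOException {
    try (Writer out = new OutputStreamWriter(rawOut, StandardCharsets.UTF_8)) {
      for (String row : rows) {
        out.write(row);   // if this (or anything before it) throws, the stream still closes
        out.write('\n');
      }
    }
  }
}
{code}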



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14659) OutputStream won't close if caught exception in function unparseExprForValuesClause in SemanticAnalyzer.java

2016-08-27 Thread Fan Yunbo (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15442555#comment-15442555
 ] 

Fan Yunbo commented on HIVE-14659:
--

[~prasanth_j][~sershe] Can someone please review this small patch?

> OutputStream won't close if caught exception in function 
> unparseExprForValuesClause in SemanticAnalyzer.java
> ---
>
> Key: HIVE-14659
> URL: https://issues.apache.org/jira/browse/HIVE-14659
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Fan Yunbo
>Assignee: Fan Yunbo
> Fix For: 2.2.0
>
> Attachments: HIVE-14659.1.patch
>
>
> I have met the problem that the Hive process cannot create new threads because of 
> lots of OutputStreams not being closed.
> Here is the part of jstack info:
> "Thread-35783" daemon prio=10 tid=0x7f8f58f02800 nid=0x18cc in 
> Object.wait() [0x7f8e632c]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:577)
> - locked <0x00061af52d50> (a java.util.LinkedList)
> and the related error log:
> org.apache.hadoop.hive.ql.parse.SemanticException: Unable to create temp file 
> for insert values Expression of type TOK_TABLE_OR_COL not supported in 
> insert/values
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genValuesTempTable(SemanticAnalyzer.java:812)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.doPhase1(SemanticAnalyzer.java:1207)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.doPhase1(SemanticAnalyzer.java:1410)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genResolvedParseTree(SemanticAnalyzer.java:10136)
> Caused by: org.apache.hadoop.hive.ql.parse.SemanticException: Expression of 
> type TOK_TABLE_OR_COL not supported in insert/values
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.unparseExprForValuesClause(SemanticAnalyzer.java:858)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genValuesTempTable(SemanticAnalyzer.java:785)
> ... 15 more
> It shows the output stream won't be closed if an exception is caught in function 
> unparseExprForValuesClause in SemanticAnalyzer.java



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14515) Schema evolution uses slow INSERT INTO .. VALUES

2016-08-27 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-14515:

Attachment: HIVE-14515.04.patch

> Schema evolution uses slow INSERT INTO .. VALUES
> 
>
> Key: HIVE-14515
> URL: https://issues.apache.org/jira/browse/HIVE-14515
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-14515.01.patch, HIVE-14515.02.patch, 
> HIVE-14515.03.patch, HIVE-14515.04.patch
>
>
> Use LOAD DATA LOCAL INPATH and INSERT INTO TABLE ... SELECT * FROM instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14515) Schema evolution uses slow INSERT INTO .. VALUES

2016-08-27 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15442887#comment-15442887
 ] 

Matt McCline commented on HIVE-14515:
-

Accidentally included changes for schema_evol_stats.q -- removed them and 
created patch #4.

> Schema evolution uses slow INSERT INTO .. VALUES
> 
>
> Key: HIVE-14515
> URL: https://issues.apache.org/jira/browse/HIVE-14515
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-14515.01.patch, HIVE-14515.02.patch, 
> HIVE-14515.03.patch, HIVE-14515.04.patch
>
>
> Use LOAD DATA LOCAL INPATH and INSERT INTO TABLE ... SELECT * FROM instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14515) Schema evolution uses slow INSERT INTO .. VALUES

2016-08-27 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15442896#comment-15442896
 ] 

Matt McCline commented on HIVE-14515:
-

Committed #4 to master.

> Schema evolution uses slow INSERT INTO .. VALUES
> 
>
> Key: HIVE-14515
> URL: https://issues.apache.org/jira/browse/HIVE-14515
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-14515.01.patch, HIVE-14515.02.patch, 
> HIVE-14515.03.patch, HIVE-14515.04.patch
>
>
> Use LOAD DATA LOCAL INPATH and INSERT INTO TABLE ... SELECT * FROM instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14515) Schema evolution uses slow INSERT INTO .. VALUES

2016-08-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15442898#comment-15442898
 ] 

Hive QA commented on HIVE-14515:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12825899/HIVE-14515.04.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1028/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1028/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1028/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.8.0_25 ]]
+ export JAVA_HOME=/usr/java/jdk1.8.0_25
+ JAVA_HOME=/usr/java/jdk1.8.0_25
+ export 
PATH=/usr/java/jdk1.8.0_25/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/java/jdk1.8.0_25/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-MASTER-Build-1028/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ cd apache-github-source-source
+ git fetch origin
From https://github.com/apache/hive
   9a90c65..cb534ab  master -> origin/master
+ git reset --hard HEAD
HEAD is now at 9a90c65 HIVE-14648: LLAP: Avoid private pages in the SSD cache 
(Gopal V, reviewed by Sergey Shelukhin)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded.
  (use "git pull" to update your local branch)
+ git reset --hard origin/master
HEAD is now at cb534ab HIVE-14515: Schema evolution uses slow INSERT INTO .. 
VALUES (Matt McCline, reviewed by Prasanth Jayachandran)
+ git merge --ff-only origin/master
Already up-to-date.
+ git gc
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12825899 - PreCommit-HIVE-MASTER-Build

> Schema evolution uses slow INSERT INTO .. VALUES
> 
>
> Key: HIVE-14515
> URL: https://issues.apache.org/jira/browse/HIVE-14515
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-14515.01.patch, HIVE-14515.02.patch, 
> HIVE-14515.03.patch, HIVE-14515.04.patch
>
>
> Use LOAD DATA LOCAL INPATH and INSERT INTO TABLE ... SELECT * FROM instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)