[jira] [Updated] (HIVE-15606) Include druid-handler sources in src packaging

2017-01-12 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-15606:
---
Assignee: (was: Jesus Camacho Rodriguez)

> Include druid-handler sources in src packaging
> --
>
> Key: HIVE-15606
> URL: https://issues.apache.org/jira/browse/HIVE-15606
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>
> We forgot to do this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13696) Monitor fair-scheduler.xml and automatically update/validate jobs submitted to fair-scheduler

2017-01-12 Thread Yongzhi Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15821214#comment-15821214
 ] 

Yongzhi Chen commented on HIVE-13696:
-

[~jcamachorodriguez], thanks for catching the issue. Yes, it is caused by the 
patch. I am now working on fixing the issue. 

> Monitor fair-scheduler.xml and automatically update/validate jobs submitted 
> to fair-scheduler
> -
>
> Key: HIVE-13696
> URL: https://issues.apache.org/jira/browse/HIVE-13696
> Project: Hive
>  Issue Type: Improvement
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Fix For: 2.2.0
>
> Attachments: HIVE-13696.01.patch, HIVE-13696.02.patch, 
> HIVE-13696.06.patch, HIVE-13696.08.patch, HIVE-13696.11.patch, 
> HIVE-13696.13.patch, HIVE-13696.14.patch
>
>
> Ensure that jobs are placed into the correct queue according to 
> {{fair-scheduler.xml}}. Jobs should be placed into the correct queue, and 
> users should not be able to submit jobs to queues they do not have access to.
> This patch builds on the existing functionality in {{FairSchedulerShim}} to 
> route jobs to user-specific queue based on {{fair-scheduler.xml}} 
> configuration (leveraging the Yarn {{QueuePlacementPolicy}} class). In 
> addition to configuring job routing at session connect (current behavior), 
> the routing is validated per submission to yarn (when impersonation is off). 
> A {{FileSystemWatcher}} class is included to monitor changes in the 
> {{fair-scheduler.xml}} file (so updates are automatically reloaded when the 
> file pointed to by {{yarn.scheduler.fair.allocation.file}} is changed).
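
The reload mechanism described above can be approximated with a simple 
modification-time poll. A minimal stdlib sketch of the idea follows; the class 
name is illustrative and this is not Hive's actual {{FileSystemWatcher}} 
implementation:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class AllocFileMonitor {
    private final Path file;
    private long lastModified;

    public AllocFileMonitor(Path file) throws IOException {
        this.file = file;
        this.lastModified = Files.getLastModifiedTime(file).toMillis();
    }

    // Returns true once per change of the file's modification time; a
    // caller would re-parse fair-scheduler.xml whenever this fires.
    public boolean changed() throws IOException {
        long now = Files.getLastModifiedTime(file).toMillis();
        if (now != lastModified) {
            lastModified = now;
            return true;
        }
        return false;
    }
}
```

A production watcher would more likely use {{java.nio.file.WatchService}} than 
polling, but the contract is the same: detect a change to the file named by 
{{yarn.scheduler.fair.allocation.file}}, then reload and re-validate the queue 
configuration.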





[jira] [Assigned] (HIVE-15606) Include druid-handler sources in src packaging

2017-01-12 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez reassigned HIVE-15606:
--

Assignee: Jesus Camacho Rodriguez

> Include druid-handler sources in src packaging
> --
>
> Key: HIVE-15606
> URL: https://issues.apache.org/jira/browse/HIVE-15606
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>
> We forgot to do this.





[jira] [Commented] (HIVE-15572) Improve the response time for query canceling when it happens during acquiring locks

2017-01-12 Thread Chaoyu Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15821168#comment-15821168
 ] 

Chaoyu Tang commented on HIVE-15572:


[~ychena] could you upload the patch to review board?

> Improve the response time for query canceling when it happens during 
> acquiring locks
> 
>
> Key: HIVE-15572
> URL: https://issues.apache.org/jira/browse/HIVE-15572
> Project: Hive
>  Issue Type: Improvement
>Reporter: Yongzhi Chen
>Assignee: Yongzhi Chen
> Attachments: HIVE-15572.1.patch
>
>
> When a query-cancel command is sent while Hive is acquiring locks (from 
> ZooKeeper), Hive still finishes acquiring all the locks and then releases 
> them, as shown in the following log: it took 165 seconds to finish acquiring 
> the locks, then another 81 seconds to release them.
> We can improve the response time by not acquiring any further locks, and by 
> releasing the already-held locks, as soon as the cancel command is received. 
> {noformat}
> Background-Pool: Thread-224]:  from=org.apache.hadoop.hive.ql.Driver>
> 2017-01-03 10:50:35,413 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [HiveServer2-Background-Pool: Thread-224]:  method=acquireReadWriteLocks from=org.apache.hadoop.hive.ql.Driver>
> 2017-01-03 10:51:00,671 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [HiveServer2-Background-Pool: Thread-218]:  method=acquireReadWriteLocks start=1483469295080 end=1483469460671 
> duration=165591 from=org.apache.hadoop.hive.ql.Driver>
> 2017-01-03 10:51:00,672 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [HiveServer2-Background-Pool: Thread-218]:  from=org.apache.hadoop.hive.ql.Driver>
> 2017-01-03 10:51:00,672 ERROR org.apache.hadoop.hive.ql.Driver: 
> [HiveServer2-Background-Pool: Thread-218]: FAILED: query select count(*) from 
> manyparttbl has been cancelled
> 2017-01-03 10:51:00,673 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [HiveServer2-Background-Pool: Thread-218]:  from=org.apache.hadoop.hive.ql.Driver>
> 2017-01-03 10:51:40,755 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [HiveServer2-Background-Pool: Thread-215]:  start=1483469419487 end=1483469500755 duration=81268 
> from=org.apache.hadoop.hive.ql.Driver>
> {noformat}
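
The improvement can be sketched with plain {{java.util.concurrent}} locks. This 
shows the control flow only, assuming a shared cancellation flag; Hive's 
ZooKeeper-based lock manager is not reproduced here:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.Lock;

public class CancellableAcquire {
    // Acquire locks one at a time, checking the cancel flag before each
    // acquisition; on cancel, release what is already held and stop,
    // instead of finishing the full acquisition first.
    public static boolean acquireAll(List<Lock> locks, AtomicBoolean cancelled) {
        Deque<Lock> held = new ArrayDeque<>();
        for (Lock l : locks) {
            if (cancelled.get()) {
                while (!held.isEmpty()) held.pop().unlock();
                return false;
            }
            l.lock();
            held.push(l);
        }
        return true;
    }
}
```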





[jira] [Commented] (HIVE-15166) Provide beeline option to set the jline history max size

2017-01-12 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15821165#comment-15821165
 ] 

Aihua Xu commented on HIVE-15166:
-

[~ericlin] Thanks for working on it. I was working on a FileHistory with 
limited file size, but I feel your simple approach would work nicely.

A couple of comments:
* Is the change in .gitignore what you intended to make? If not, can you remove 
that change?
* It seems you are not using the latest code. Can you sync to the latest and 
apply the change?
* It seems to make sense to call {{((FileHistory) 
h).setMaxSize(getOpts().getMaxHistoryRows());}} when you create the FileHistory, 
so we don't cache too many history entries in memory either.

{noformat}
  private void setupHistory() throws IOException {
    if (this.history != null) {
      return;
    }

    this.history = new FileHistory(new File(getOpts().getHistoryFile()));
    // *** Set the maxSize here ***
    // add shutdown hook to flush the history to history file
    ShutdownHookManager.addShutdownHook(new Runnable() {
      @Override
      public void run() {
        try {
          history.flush();
        } catch (IOException e) {
          error(e);
        }
      }
    });
  }
{noformat}
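
For reference, the eviction behavior that a max-size cap buys can be 
illustrated with a stdlib-only sketch. This is not jline's implementation 
(jline's {{FileHistory}} keeps its buffer in {{MemoryHistory}} internally); it 
only demonstrates the capping effect:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class BoundedHistory {
    private final Deque<String> entries = new ArrayDeque<>();
    private final int maxSize;

    public BoundedHistory(int maxSize) {
        this.maxSize = maxSize;
    }

    // Append one line, evicting the oldest entries once the cap is hit,
    // which is the effect setMaxSize() has on jline's in-memory history.
    public void add(String line) {
        entries.addLast(line);
        while (entries.size() > maxSize) {
            entries.removeFirst();
        }
    }

    public int size() { return entries.size(); }

    public String oldest() { return entries.peekFirst(); }
}
```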

> Provide beeline option to set the jline history max size
> 
>
> Key: HIVE-15166
> URL: https://issues.apache.org/jira/browse/HIVE-15166
> Project: Hive
>  Issue Type: Improvement
>  Components: Beeline
>Affects Versions: 2.1.0
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Minor
> Attachments: HIVE-15166.patch
>
>
> Currently Beeline does not provide an option to limit the maximum size of the 
> beeline history file. When individual queries are very big, they will flood 
> the history file and slow down Beeline on startup and shutdown.





[jira] [Updated] (HIVE-15582) Druid CTAS should support BYTE/SHORT/INT types

2017-01-12 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-15582:
---
Attachment: HIVE-15582.01.patch

> Druid CTAS should support BYTE/SHORT/INT types
> --
>
> Key: HIVE-15582
> URL: https://issues.apache.org/jira/browse/HIVE-15582
> Project: Hive
>  Issue Type: Sub-task
>  Components: Druid integration
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-15582.01.patch, HIVE-15582.patch
>
>
> Currently these types are not recognized and we throw an exception when we 
> try to create a table with them.
> {noformat}
> Caused by: org.apache.hadoop.hive.serde2.SerDeException: Unknown type: INT
>   at 
> org.apache.hadoop.hive.druid.serde.DruidSerDe.serialize(DruidSerDe.java:414)
>   at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:715)
>   ... 22 more
> {noformat}
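
The shape of the fix is a wider type mapping: integral types narrower than 
bigint can be widened to Druid's long representation instead of being rejected. 
A hedged sketch follows; the enum and method names are illustrative, not the 
actual {{DruidSerDe}} code:

```java
public class DruidTypeMapping {
    enum HivePrimitive { BYTE, SHORT, INT, LONG, FLOAT, DOUBLE, STRING }

    // Map a Hive primitive type to the type Druid stores; BYTE/SHORT/INT
    // are widened to long rather than triggering "Unknown type".
    static String druidType(HivePrimitive t) {
        switch (t) {
            case BYTE:
            case SHORT:
            case INT:
            case LONG:
                return "long";
            case FLOAT:
            case DOUBLE:
                return "double";
            case STRING:
                return "string";
            default:
                throw new IllegalArgumentException("Unknown type: " + t);
        }
    }
}
```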





[jira] [Updated] (HIVE-15520) Improve the sum performance for Range based window

2017-01-12 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-15520:

   Resolution: Fixed
Fix Version/s: 2.2.0
 Release Note: 
This is to improve the performance of the sum function over range-based 
windowing. 

It also fixes an issue with sum(lag(x)) over (partition by c1 order by c2 range 
between ...) and sum(lead(x)) over (partition by c1 order by c2 range between 
...) that could produce different results: without the patch, lag(x)/lead(x) 
would only consider the previous/next row within the window, not within the 
partition, matching neither other databases nor rows-based windowing. 
   Status: Resolved  (was: Patch Available)

Committed to master. Thanks Yongzhi for reviewing.

> Improve the sum performance for Range based window
> --
>
> Key: HIVE-15520
> URL: https://issues.apache.org/jira/browse/HIVE-15520
> Project: Hive
>  Issue Type: Sub-task
>  Components: PTF-Windowing
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Fix For: 2.2.0
>
> Attachments: HIVE-15520.1.patch, HIVE-15520.2.patch, 
> HIVE-15520.3.patch, HIVE-15520.4.patch
>
>
> Currently streaming processing is not supported for range-based windowing, so 
> sum(x) over (partition by y order by z) has O(n^2) running time. 
> Investigate the possibility of streaming support.
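
The streaming idea for a windowed sum: keep one running total, add the row 
entering the frame and subtract the row leaving it, giving O(n) instead of 
O(n^2). A minimal sketch for a fixed ROWS frame (a range frame additionally has 
to locate its boundaries by value, but the incremental-sum idea is the same):

```java
public class StreamingWindowSum {
    // O(n) sliding sums for the frame "k PRECEDING to CURRENT ROW":
    // maintain one running sum instead of re-adding each window.
    static long[] slidingSums(long[] values, int kPreceding) {
        long[] out = new long[values.length];
        long running = 0;
        for (int i = 0; i < values.length; i++) {
            running += values[i];                       // row entering the frame
            if (i - kPreceding - 1 >= 0) {
                running -= values[i - kPreceding - 1];  // row leaving the frame
            }
            out[i] = running;
        }
        return out;
    }
}
```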





[jira] [Commented] (HIVE-15537) Nested column pruning: fix issue when selecting struct field from array/map element (part 2)

2017-01-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15821139#comment-15821139
 ] 

Hive QA commented on HIVE-15537:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12847151/HIVE-15537.5.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 10941 tests 
executed
*Failed tests:*
{noformat}
TestDerbyConnector - did not produce a TEST-*.xml file (likely timed out) 
(batchId=234)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part]
 (batchId=148)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr]
 (batchId=139)
org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerCheckInvocation.org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerCheckInvocation
 (batchId=208)
org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerShowFilters.org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerShowFilters
 (batchId=208)
org.apache.hive.hcatalog.pig.TestRCFileHCatStorer.testWriteTimestamp 
(batchId=172)
org.apache.hive.hcatalog.pig.TestTextFileHCatStorer.testWriteSmallint 
(batchId=172)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/2912/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/2912/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-2912/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12847151 - PreCommit-HIVE-Build

> Nested column pruning: fix issue when selecting struct field from array/map 
> element (part 2)
> 
>
> Key: HIVE-15537
> URL: https://issues.apache.org/jira/browse/HIVE-15537
> Project: Hive
>  Issue Type: Sub-task
>  Components: Logical Optimizer
>Affects Versions: 2.2.0
>Reporter: Chao Sun
>Assignee: Chao Sun
> Attachments: HIVE-15537.1.patch, HIVE-15537.2.patch, 
> HIVE-15537.3.patch, HIVE-15537.4.patch, HIVE-15537.5.patch
>
>
> HIVE-15507 only addresses the issue of
> {code}
> SELECT arr[0].f FROM tbl
> {code}
> However, it didn't handle:
> {code}
> SELECT arr[0].f.g FROM tbl
> {code}
> In this case the current code will generate a path {{arr.g}}, which is wrong.
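
In other words, the pruning path must keep every struct field along the access 
chain while dropping the array/map indexing steps. A hedged sketch of that 
construction (the names are illustrative, not the actual optimizer code):

```java
import java.util.List;

public class NestedFieldPath {
    // Build the pruning path for a chain of struct-field accesses rooted
    // at a column: index operations contribute nothing, but every field
    // name must be appended, so arr[0].f.g becomes arr.f.g (not arr.g).
    static String prunePath(String column, List<String> fieldChain) {
        StringBuilder sb = new StringBuilder(column);
        for (String field : fieldChain) {
            sb.append('.').append(field);
        }
        return sb.toString();
    }
}
```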





[jira] [Updated] (HIVE-10836) Beeline OutOfMemoryError due to large history

2017-01-12 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-10836:

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

It seems we are solving the same issue in both places; resolving as a duplicate.

> Beeline OutOfMemoryError due to large history
> -
>
> Key: HIVE-10836
> URL: https://issues.apache.org/jira/browse/HIVE-10836
> Project: Hive
>  Issue Type: Bug
> Environment: Hive 1.1.0 on RHEL with Cloudera (cdh5.4.0)
>Reporter: Patrick McAnneny
>Assignee: Aihua Xu
> Attachments: HIVE-10836.1.patch
>
>
> Attempting to run beeline via commandline fails with the error below due to 
> large commands in the ~/.beeline/history file. Not sure if the problem also 
> exists with many lines in the history or just big lines.
> I had a few lines in my history file with over 1 million characters each. 
> Deleting said lines from the history file resolved the issue.
> Beeline version 1.1.0-cdh5.4.0 by Apache Hive
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
>   at java.util.Arrays.copyOf(Arrays.java:2367)
>   at 
> java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:130)
>   at 
> java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:114)
>   at 
> java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:535)
>   at java.lang.StringBuffer.append(StringBuffer.java:322)
>   at java.io.BufferedReader.readLine(BufferedReader.java:363)
>   at java.io.BufferedReader.readLine(BufferedReader.java:382)
>   at jline.console.history.FileHistory.load(FileHistory.java:69)
>   at jline.console.history.FileHistory.load(FileHistory.java:61)
>   at org.apache.hive.beeline.BeeLine.getConsoleReader(BeeLine.java:869)
>   at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:766)
>   at 
> org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:480)
>   at org.apache.hive.beeline.BeeLine.main(BeeLine.java:463)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)





[jira] [Commented] (HIVE-15269) Dynamic Min-Max runtime-filtering for Tez

2017-01-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15821031#comment-15821031
 ] 

Hive QA commented on HIVE-15269:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12846737/HIVE-15269.11.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/2911/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/2911/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-2911/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2017-01-12 13:48:04.635
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-2911/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2017-01-12 13:48:04.637
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 6973edc HIVE-15365 : Add new methods to MessageFactory API 
(corresponding to the ones added in JSONMessageFactory) (Sushanth Sowmyan, 
reviewed by Daniel Dai)
+ git clean -f -d
Removing 
metastore/src/test/org/apache/hadoop/hive/metastore/TestRetriesInRetryingHMSHandler.java
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 6973edc HIVE-15365 : Add new methods to MessageFactory API 
(corresponding to the ones added in JSONMessageFactory) (Sushanth Sowmyan, 
reviewed by Daniel Dai)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2017-01-12 13:48:05.600
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java: No such 
file or directory
error: a/itests/src/test/resources/testconfiguration.properties: No such file 
or directory
error: a/orc/src/test/org/apache/orc/impl/TestRecordReaderImpl.java: No such 
file or directory
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/AbstractMapJoinOperator.java: No 
such file or directory
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/CommonJoinOperator.java: No 
such file or directory
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeColumnEvaluator.java: No 
such file or directory
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeConstantDefaultEvaluator.java:
 No such file or directory
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeConstantEvaluator.java: No 
such file or directory
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeEvaluator.java: No 
such file or directory
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeEvaluatorFactory.java: No 
such file or directory
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeEvaluatorHead.java: 
No such file or directory
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeEvaluatorRef.java: 
No such file or directory
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeFieldEvaluator.java: No 
such file or directory
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeGenericFuncEvaluator.java: 
No such file or directory
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/FilterOperator.java: No 
such file or directory
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/GroupByOperator.java: No 
such file or directory
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/HashTableSinkOperator.java: 
No such file or directory
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/JoinUtil.java: No such file 
or directory
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/ObjectCache.java: No such 
file or directory
error: 

[jira] [Commented] (HIVE-15569) failures in RetryingHMSHandler.<init> do not get retried

2017-01-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15821025#comment-15821025
 ] 

Hive QA commented on HIVE-15569:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12847139/HIVE-15569.02.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 10945 tests 
executed
*Failed tests:*
{noformat}
TestDerbyConnector - did not produce a TEST-*.xml file (likely timed out) 
(batchId=235)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part]
 (batchId=148)
org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerCheckInvocation.org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerCheckInvocation
 (batchId=209)
org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerShowFilters.org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerShowFilters
 (batchId=209)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/2910/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/2910/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-2910/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12847139 - PreCommit-HIVE-Build

> failures in RetryingHMSHandler.<init> do not get retried
> 
>
> Key: HIVE-15569
> URL: https://issues.apache.org/jira/browse/HIVE-15569
> Project: Hive
>  Issue Type: Bug
>Reporter: Thejas M Nair
>Assignee: Vihang Karajgaonkar
> Attachments: HIVE-15569.01.patch, HIVE-15569.02.patch
>
>
> RetryingHMSHandler.<init> is called during Hive metastore startup, and any 
> transient db failures during that call are not retried. This can result in 
> failure of HiveMetastore startup.
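
The retry pattern in question, reduced to its control flow. RetryingHMSHandler 
actually wraps the metastore handler in a dynamic proxy and inspects the cause 
to decide whether a failure is transient; that classification is omitted here:

```java
import java.util.concurrent.Callable;

public class TransientRetry {
    // Invoke the call, retrying up to maxAttempts times with a fixed
    // delay; the last failure is rethrown once attempts are exhausted.
    static <T> T withRetries(Callable<T> call, int maxAttempts,
                             long delayMillis) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;  // treated as transient for this sketch
                if (attempt < maxAttempts) {
                    Thread.sleep(delayMillis);
                }
            }
        }
        throw last;
    }
}
```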





[jira] [Commented] (HIVE-15469) Fix REPL DUMP/LOAD DROP_PTN so it works on non-string-ptn-key tables

2017-01-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820934#comment-15820934
 ] 

Hive QA commented on HIVE-15469:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12847136/HIVE-15469.2.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 10927 tests 
executed
*Failed tests:*
{noformat}
TestDerbyConnector - did not produce a TEST-*.xml file (likely timed out) 
(batchId=234)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part]
 (batchId=148)
org.apache.hadoop.hive.cli.TestSparkCliDriver.org.apache.hadoop.hive.cli.TestSparkCliDriver
 (batchId=95)
org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerCheckInvocation.org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerCheckInvocation
 (batchId=208)
org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerShowFilters.org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerShowFilters
 (batchId=208)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/2909/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/2909/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-2909/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12847136 - PreCommit-HIVE-Build

> Fix REPL DUMP/LOAD DROP_PTN so it works on non-string-ptn-key tables
> 
>
> Key: HIVE-15469
> URL: https://issues.apache.org/jira/browse/HIVE-15469
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Reporter: Sushanth Sowmyan
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-15469.1.patch, HIVE-15469.2.patch
>
>
> The current implementation of REPL DUMP/REPL LOAD for DROP_PTN is limited to 
> dropping partitions whose key types are strings. This needs the tableObj to 
> be available in the DropPartitionMessage before it can be fixed.





[jira] [Commented] (HIVE-13696) Monitor fair-scheduler.xml and automatically update/validate jobs submitted to fair-scheduler

2017-01-12 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820852#comment-15820852
 ] 

Jesus Camacho Rodriguez commented on HIVE-13696:


[~sircodesalot], [~ychena], [~mohitsabharwal], [~spena]

I am seeing this issue in my environment after the patch was committed 
(reproducible simply by executing, e.g., a _show databases_ statement):

{noformat}
2017-01-12T06:16:31,453 ERROR [f1c1f178-244f-4f89-99c0-994872f099aa main] 
ql.Driver: FAILED: NullPointerException null
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.QueuePlacementRule.cleanName(QueuePlacementRule.java:351)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.QueuePlacementRule$User.getQueueForApp(QueuePlacementRule.java:132)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.QueuePlacementRule.assignAppToQueue(QueuePlacementRule.java:74)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.QueuePlacementPolicy.assignAppToQueue(QueuePlacementPolicy.java:167)
at 
org.apache.hadoop.hive.schshim.FairSchedulerShim.setJobQueueForUserInternal(FairSchedulerShim.java:96)
at 
org.apache.hadoop.hive.schshim.FairSchedulerShim.validateQueueConfiguration(FairSchedulerShim.java:82)
at 
org.apache.hadoop.hive.ql.session.YarnFairScheduling.validateYarnQueue(YarnFairScheduling.java:68)
at org.apache.hadoop.hive.ql.Driver.configureScheduling(Driver.java:671)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:543)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1313)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1233)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1223)
at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:400)
at 
org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:777)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:715)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:642)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:222)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
{noformat}

Reverting the patch fixes the issue. Any ideas?

> Monitor fair-scheduler.xml and automatically update/validate jobs submitted 
> to fair-scheduler
> -
>
> Key: HIVE-13696
> URL: https://issues.apache.org/jira/browse/HIVE-13696
> Project: Hive
>  Issue Type: Improvement
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Fix For: 2.2.0
>
> Attachments: HIVE-13696.01.patch, HIVE-13696.02.patch, 
> HIVE-13696.06.patch, HIVE-13696.08.patch, HIVE-13696.11.patch, 
> HIVE-13696.13.patch, HIVE-13696.14.patch
>
>
> Ensure that jobs are placed into the correct queue according to 
> {{fair-scheduler.xml}}. Jobs should be placed into the correct queue, and 
> users should not be able to submit jobs to queues they do not have access to.
> This patch builds on the existing functionality in {{FairSchedulerShim}} to 
> route jobs to user-specific queue based on {{fair-scheduler.xml}} 
> configuration (leveraging the Yarn {{QueuePlacementPolicy}} class). In 
> addition to configuring job routing at session connect (current behavior), 
> the routing is validated per submission to yarn (when impersonation is off). 
> A {{FileSystemWatcher}} class is included to monitor changes in the 
> {{fair-scheduler.xml}} file (so updates are automatically reloaded when the 
> file pointed to by {{yarn.scheduler.fair.allocation.file}} is changed).





[jira] [Commented] (HIVE-14706) Lineage information not set properly

2017-01-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820841#comment-15820841
 ] 

Hive QA commented on HIVE-14706:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12847131/HIVE-14706.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 10941 tests 
executed
*Failed tests:*
{noformat}
TestDerbyConnector - did not produce a TEST-*.xml file (likely timed out) 
(batchId=234)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part]
 (batchId=148)
org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerCheckInvocation.org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerCheckInvocation
 (batchId=208)
org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerShowFilters.org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerShowFilters
 (batchId=208)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/2908/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/2908/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-2908/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12847131 - PreCommit-HIVE-Build

> Lineage information not set properly
> 
>
> Key: HIVE-14706
> URL: https://issues.apache.org/jira/browse/HIVE-14706
> Project: Hive
>  Issue Type: Bug
>Reporter: Vimal Sharma
>Assignee: Pengcheng Xiong
>Priority: Critical
> Attachments: HIVE-14706.01.patch
>
>
> I am trying to fetch column-level lineage after a CTAS query in a 
> post-execution hook in Hive. Below are the queries:
> {code}
> create table t1(id int, name string);
> create table t2 as select * from t1;
> {code}
> The lineage information is retrieved using the following sample piece of code:
> {code}
> lInfo = hookContext.getLinfo();
> for (Map.Entry e : lInfo.entrySet()) {
>   System.out.println("Col Lineage Key : " + e.getKey());
>   System.out.println("Col Lineage Value: " + e.getValue());
> }
> {code}
> The Dependency field (i.e. the Col Lineage Value) is coming back as null.
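The loop above prints whatever it gets, including nulls. A null-safe version of that iteration can be sketched as follows; the flat column-to-dependency map used here is a hypothetical simplification of Hive's LineageInfo entries, not the real hook API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LineageDump {
  // Iterate a simplified lineage map (column key -> dependency description),
  // skipping null dependency values instead of printing "null".
  static int dump(Map<String, String> lineage) {
    int resolved = 0;
    for (Map.Entry<String, String> e : lineage.entrySet()) {
      if (e.getValue() == null) {
        // The reported behaviour: the dependency value comes back null.
        System.out.println("Col Lineage Key : " + e.getKey() + " (dependency missing)");
        continue;
      }
      System.out.println("Col Lineage Key : " + e.getKey());
      System.out.println("Col Lineage Value: " + e.getValue());
      resolved++;
    }
    return resolved;
  }

  public static void main(String[] args) {
    Map<String, String> lineage = new LinkedHashMap<>();
    lineage.put("t2.id", "t1.id");
    lineage.put("t2.name", null); // simulates the bug in this report
    System.out.println("resolved=" + dump(lineage));
  }
}
```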



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15519) BitSet not computed properly for ColumnBuffer subset

2017-01-12 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-15519:
--
Attachment: HIVE-15519.6.patch

> BitSet not computed properly for ColumnBuffer subset
> 
>
> Key: HIVE-15519
> URL: https://issues.apache.org/jira/browse/HIVE-15519
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, JDBC
>Reporter: Bharat Viswanadham
>Assignee: Rui Li
>Priority: Critical
> Attachments: HIVE-15519.1.patch, HIVE-15519.2.patch, 
> HIVE-15519.3.patch, HIVE-15519.4.patch, HIVE-15519.5-branch-1.patch, 
> HIVE-15519.6.patch, data_type_test(1).txt
>
>
> Hive decimal type column precision is returned as zero, even though the 
> column has precision set.
> Example: for col67 decimal(18,2), the scale is returned as zero.
> Tried with the program below.
> {code}
> try {
>   System.out.println("Opening connection");
>   Class.forName("org.apache.hive.jdbc.HiveDriver");
>   Connection con =
>       DriverManager.getConnection("jdbc:hive2://x.x.x.x:1/default");
>   DatabaseMetaData dbMeta = con.getMetaData();
>   ResultSet rs = dbMeta.getColumns(null, "DEFAULT", "data_type_test", null);
>   while (rs.next()) {
>     if (rs.getString("COLUMN_NAME").equalsIgnoreCase("col48")
>         || rs.getString("COLUMN_NAME").equalsIgnoreCase("col67")
>         || rs.getString("COLUMN_NAME").equalsIgnoreCase("col68")
>         || rs.getString("COLUMN_NAME").equalsIgnoreCase("col122")) {
>       System.out.println(rs.getString("COLUMN_NAME") + "\t"
>           + rs.getString("COLUMN_SIZE") + "\t" + rs.getInt("DECIMAL_DIGITS"));
>     }
>   }
>   rs.close();
>   con.close();
> } catch (Exception e) {
>   e.printStackTrace();
> }
> {code}
> The default fetch size is 50. If the column number is under 50, the decimal 
> precision is returned properly; when the column number is greater than 50, 
> the scale is returned as zero.
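For reference, what `getColumns()` should report for such a column can be sketched with a hypothetical parser of the declared type string; this is not the Hive fix, only an illustration of the expected COLUMN_SIZE/DECIMAL_DIGITS values:

```java
public class DecimalMeta {
  // Parse "decimal(p,s)" into {precision, scale}; Hive's bare "decimal"
  // defaults to decimal(10,0).
  static int[] precisionAndScale(String typeName) {
    java.util.regex.Matcher m = java.util.regex.Pattern
        .compile("decimal\\((\\d+),(\\d+)\\)")
        .matcher(typeName.trim().toLowerCase());
    if (m.matches()) {
      return new int[] { Integer.parseInt(m.group(1)), Integer.parseInt(m.group(2)) };
    }
    return new int[] { 10, 0 };
  }

  public static void main(String[] args) {
    int[] ps = precisionAndScale("decimal(18,2)"); // col67 from the report
    // Regardless of where the column falls relative to the fetch size,
    // metadata should yield COLUMN_SIZE=18 and DECIMAL_DIGITS=2 here.
    System.out.println("COLUMN_SIZE=" + ps[0] + " DECIMAL_DIGITS=" + ps[1]);
  }
}
```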



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15578) Simplify IdentifiersParser

2017-01-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820755#comment-15820755
 ] 

Hive QA commented on HIVE-15578:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12847130/HIVE-15578.02.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 12 failed/errored test(s), 10941 tests 
executed
*Failed tests:*
{noformat}
TestDerbyConnector - did not produce a TEST-*.xml file (likely timed out) 
(batchId=234)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[offset_limit_ppd_optimizer]
 (batchId=150)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part]
 (batchId=148)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_varchar_simple]
 (batchId=151)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[char_pad_convert_fail2]
 (batchId=84)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[invalid_select_expression]
 (batchId=85)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[ptf_negative_DistributeByOrderBy]
 (batchId=84)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[ptf_negative_PartitionBySortBy]
 (batchId=85)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[ptf_window_boundaries2]
 (batchId=85)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[ptf_window_boundaries]
 (batchId=84)
org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerCheckInvocation.org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerCheckInvocation
 (batchId=208)
org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerShowFilters.org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerShowFilters
 (batchId=208)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/2907/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/2907/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-2907/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 12 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12847130 - PreCommit-HIVE-Build

> Simplify IdentifiersParser
> --
>
> Key: HIVE-15578
> URL: https://issues.apache.org/jira/browse/HIVE-15578
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-15578.01.patch, HIVE-15578.02.patch
>
>
> before: 1.72M LOC in IdentifiersParser, after: 1.41M



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15582) Druid CTAS should support BYTE/SHORT/INT types

2017-01-12 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820751#comment-15820751
 ] 

Jesus Camacho Rodriguez commented on HIVE-15582:


[~bslim], answering your comments:
- I had not even noticed the changes you mention, but they are just 
formatting: those lines did not have the correct indentation.
- We do not need any change for _deserialize_, since the type inferred for 
Druid sources is either _long_ or _float_ (not _byte_, _short_, _int_, or 
_double_).
- DruidOutputFormat already supports byte/short/int (L135-L138 in 
DruidOutputFormat.java).
- Tests for deserializer are already present in TestDruidSerDe. I will upload a 
new patch shortly that adds tests for serializer too.

> Druid CTAS should support BYTE/SHORT/INT types
> --
>
> Key: HIVE-15582
> URL: https://issues.apache.org/jira/browse/HIVE-15582
> Project: Hive
>  Issue Type: Sub-task
>  Components: Druid integration
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-15582.patch
>
>
> Currently these types are not recognized and we throw an exception when we 
> try to create a table with them.
> {noformat}
> Caused by: org.apache.hadoop.hive.serde2.SerDeException: Unknown type: INT
>   at 
> org.apache.hadoop.hive.druid.serde.DruidSerDe.serialize(DruidSerDe.java:414)
>   at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:715)
>   ... 22 more
> {noformat}
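The kind of widening the serializer needs can be sketched as follows. Druid metric columns are long or float, so narrower Hive integer types should widen to long instead of hitting the "Unknown type" branch; the method and type names here are illustrative, not the actual DruidSerDe code:

```java
public class DruidTypeWidening {
  // Map a Hive primitive type name to the Druid storage type it widens to.
  static String druidType(String hiveType) {
    switch (hiveType.toUpperCase()) {
      case "BYTE": case "TINYINT":
      case "SHORT": case "SMALLINT":
      case "INT": case "LONG": case "BIGINT":
        return "long";   // narrow integers widen to Druid's long
      case "FLOAT": case "DOUBLE":
        return "float";  // floating point maps to Druid's float
      default:
        // Mirrors the failure mode quoted in the stack trace above.
        throw new IllegalArgumentException("Unknown type: " + hiveType);
    }
  }

  public static void main(String[] args) {
    System.out.println(druidType("INT")); // widens instead of failing
  }
}
```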



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15519) BitSet not computed properly for ColumnBuffer subset

2017-01-12 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-15519:
--
Attachment: (was: HIVE-15519.4.patch)

> BitSet not computed properly for ColumnBuffer subset
> 
>
> Key: HIVE-15519
> URL: https://issues.apache.org/jira/browse/HIVE-15519
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, JDBC
>Reporter: Bharat Viswanadham
>Assignee: Rui Li
>Priority: Critical
> Attachments: HIVE-15519.1.patch, HIVE-15519.2.patch, 
> HIVE-15519.3.patch, HIVE-15519.4.patch, HIVE-15519.5-branch-1.patch, 
> data_type_test(1).txt
>
>
> Hive decimal type column precision is returned as zero, even though the 
> column has precision set.
> Example: for col67 decimal(18,2), the scale is returned as zero.
> Tried with the program below.
> {code}
> try {
>   System.out.println("Opening connection");
>   Class.forName("org.apache.hive.jdbc.HiveDriver");
>   Connection con =
>       DriverManager.getConnection("jdbc:hive2://x.x.x.x:1/default");
>   DatabaseMetaData dbMeta = con.getMetaData();
>   ResultSet rs = dbMeta.getColumns(null, "DEFAULT", "data_type_test", null);
>   while (rs.next()) {
>     if (rs.getString("COLUMN_NAME").equalsIgnoreCase("col48")
>         || rs.getString("COLUMN_NAME").equalsIgnoreCase("col67")
>         || rs.getString("COLUMN_NAME").equalsIgnoreCase("col68")
>         || rs.getString("COLUMN_NAME").equalsIgnoreCase("col122")) {
>       System.out.println(rs.getString("COLUMN_NAME") + "\t"
>           + rs.getString("COLUMN_SIZE") + "\t" + rs.getInt("DECIMAL_DIGITS"));
>     }
>   }
>   rs.close();
>   con.close();
> } catch (Exception e) {
>   e.printStackTrace();
> }
> {code}
> The default fetch size is 50. If the column number is under 50, the decimal 
> precision is returned properly; when the column number is greater than 50, 
> the scale is returned as zero.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15519) BitSet not computed properly for ColumnBuffer subset

2017-01-12 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-15519:
--
Attachment: HIVE-15519.4.patch

Uploading the patch for master to run the tests again.
I'll look into the test failures in branch-1.

> BitSet not computed properly for ColumnBuffer subset
> 
>
> Key: HIVE-15519
> URL: https://issues.apache.org/jira/browse/HIVE-15519
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, JDBC
>Reporter: Bharat Viswanadham
>Assignee: Rui Li
>Priority: Critical
> Attachments: HIVE-15519.1.patch, HIVE-15519.2.patch, 
> HIVE-15519.3.patch, HIVE-15519.4.patch, HIVE-15519.4.patch, 
> HIVE-15519.5-branch-1.patch, data_type_test(1).txt
>
>
> Hive decimal type column precision is returned as zero, even though the 
> column has precision set.
> Example: for col67 decimal(18,2), the scale is returned as zero.
> Tried with the program below.
> {code}
> try {
>   System.out.println("Opening connection");
>   Class.forName("org.apache.hive.jdbc.HiveDriver");
>   Connection con =
>       DriverManager.getConnection("jdbc:hive2://x.x.x.x:1/default");
>   DatabaseMetaData dbMeta = con.getMetaData();
>   ResultSet rs = dbMeta.getColumns(null, "DEFAULT", "data_type_test", null);
>   while (rs.next()) {
>     if (rs.getString("COLUMN_NAME").equalsIgnoreCase("col48")
>         || rs.getString("COLUMN_NAME").equalsIgnoreCase("col67")
>         || rs.getString("COLUMN_NAME").equalsIgnoreCase("col68")
>         || rs.getString("COLUMN_NAME").equalsIgnoreCase("col122")) {
>       System.out.println(rs.getString("COLUMN_NAME") + "\t"
>           + rs.getString("COLUMN_SIZE") + "\t" + rs.getInt("DECIMAL_DIGITS"));
>     }
>   }
>   rs.close();
>   con.close();
> } catch (Exception e) {
>   e.printStackTrace();
> }
> {code}
> The default fetch size is 50. If the column number is under 50, the decimal 
> precision is returned properly; when the column number is greater than 50, 
> the scale is returned as zero.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15434) Add UDF to allow interrogation of uniontype values

2017-01-12 Thread Elliot West (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliot West updated HIVE-15434:
---
Target Version/s: 2.2.0, 2.1.2

> Add UDF to allow interrogation of uniontype values
> --
>
> Key: HIVE-15434
> URL: https://issues.apache.org/jira/browse/HIVE-15434
> Project: Hive
>  Issue Type: New Feature
>  Components: UDF
>Affects Versions: 2.1.1
>Reporter: David Maughan
>Assignee: David Maughan
> Attachments: HIVE-15434.01.patch, HIVE-15434.02.patch
>
>
> h2. Overview
> As stated in the documentation:
> {quote}
> UNIONTYPE support is incomplete. The UNIONTYPE datatype was introduced in 
> Hive 0.7.0 (HIVE-537), but full support for this type in Hive remains incomplete. 
> Queries that reference UNIONTYPE fields in JOIN (HIVE-2508), WHERE, and GROUP 
> BY clauses will fail, and Hive does not define syntax to extract the tag or 
> value fields of a UNIONTYPE. This means that UNIONTYPEs are effectively 
> look-at-only.
> {quote}
> It is essential to have a usable uniontype. Until full support is added to 
> Hive, users should at least have the ability to inspect and extract values 
> for further comparison or transformation.
> h2. Proposal
> I propose to add a GenericUDF that has 2 modes of operation. Consider the 
> following schema and data that contains a union:
> Schema:
> {code}
> struct<field1:uniontype<int,string>>
> {code}
> Query:
> {code}
> hive> select field1 from thing;
> {0:0}
> {1:"one"}
> {code}
> h4. Explode to Struct
> This method will recursively convert all unions within the type to structs 
> with fields named {{tag_n}}, {{n}} being the tag number. Only the {{tag_*}} 
> field that matches the tag of the union will be populated with the value. In 
> the case above the schema of field1 will be converted to:
> {code}
> struct<tag_0:int,tag_1:string>
> {code}
> {code}
> hive> select extract_union(field1) from thing;
> {"tag_0":0,"tag_1":null}
> {"tag_0":null,"tag_1":"one"}
> {code}
> {code}
> hive> select extract_union(field1).tag_0 from thing;
> 0
> null
> {code}
> h4. Extract the specified tag
> This method will simply extract the value of the specified tag. If the tag 
> number matches, the value is returned; if it does not, null is returned.
> {code}
> hive> select extract_union(field1, 0) from thing;
> 0
> null
> {code}
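The two proposed modes can be sketched with a toy model of a union as a (tag, value) pair; the method names, arity handling, and use of plain Object values are assumptions for illustration, not the UDF's real signature:

```java
import java.util.Arrays;

public class UnionExtract {
  // Mode 1 (explode to struct): all tag_n slots null except the slot
  // matching the union's tag, which carries the value.
  static Object[] explode(int tag, Object value, int arity) {
    Object[] slots = new Object[arity];
    slots[tag] = value;
    return slots;
  }

  // Mode 2 (extract a tag): the value if the requested tag matches,
  // otherwise null.
  static Object extract(int tag, Object value, int wantedTag) {
    return tag == wantedTag ? value : null;
  }

  public static void main(String[] args) {
    // Mirrors {1:"one"} from the example data above.
    System.out.println(Arrays.toString(explode(1, "one", 2))); // [null, one]
    System.out.println(extract(0, 0, 0));                      // 0
    System.out.println(extract(1, "one", 0));                  // null
  }
}
```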



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15590) add separate spnego principal config for LLAP Web UI

2017-01-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820659#comment-15820659
 ] 

Hive QA commented on HIVE-15590:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12847123/HIVE-15590.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 10941 tests 
executed
*Failed tests:*
{noformat}
TestDerbyConnector - did not produce a TEST-*.xml file (likely timed out) 
(batchId=234)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[offset_limit_ppd_optimizer]
 (batchId=150)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part]
 (batchId=148)
org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerCheckInvocation.org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerCheckInvocation
 (batchId=208)
org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerShowFilters.org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerShowFilters
 (batchId=208)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/2906/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/2906/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-2906/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12847123 - PreCommit-HIVE-Build

> add separate spnego principal config for LLAP Web UI
> 
>
> Key: HIVE-15590
> URL: https://issues.apache.org/jira/browse/HIVE-15590
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-15590.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15588) Vectorization: Defer deallocation of scratch columns in complex VectorExpressions like VectorUDFAdaptor, VectorUDFCoalesce, etc to prevent wrong reuse

2017-01-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820559#comment-15820559
 ] 

Hive QA commented on HIVE-15588:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12847111/HIVE-15588.01.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 25 failed/errored test(s), 10943 tests 
executed
*Failed tests:*
{noformat}
TestDerbyConnector - did not produce a TEST-*.xml file (likely timed out) 
(batchId=234)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[constprog_when_case] 
(batchId=53)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[foldts] (batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[index_auto_partitioned] 
(batchId=10)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[transform_acid] 
(batchId=18)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_between_columns] 
(batchId=63)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_coalesce_2] 
(batchId=65)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_coalesce_3] 
(batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_decimal_math_funcs]
 (batchId=21)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_when_case_null] 
(batchId=33)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_case] 
(batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_casts] 
(batchId=75)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_math_funcs] 
(batchId=19)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_timestamp_ints_casts]
 (batchId=45)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part]
 (batchId=148)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_adaptor_usage_mode]
 (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_between_columns]
 (batchId=150)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_coalesce_2]
 (batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_coalesce_3]
 (batchId=148)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_when_case_null]
 (batchId=144)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_case]
 (batchId=148)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_casts]
 (batchId=153)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vectorized_case] 
(batchId=118)
org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerCheckInvocation.org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerCheckInvocation
 (batchId=208)
org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerShowFilters.org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerShowFilters
 (batchId=208)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/2904/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/2904/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-2904/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 25 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12847111 - PreCommit-HIVE-Build

> Vectorization: Defer deallocation of scratch columns in complex 
> VectorExpressions like VectorUDFAdaptor, VectorUDFCoalesce, etc to prevent 
> wrong reuse
> --
>
> Key: HIVE-15588
> URL: https://issues.apache.org/jira/browse/HIVE-15588
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-15588.01.patch
>
>
> Make sure we don't deallocate a scratch column too quickly and cause result 
> corruption due to scratch column reuse.
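The hazard can be sketched with a toy scratch-column pool that defers frees until the enclosing expression completes; the pool and its method names are illustrative, not Hive's actual scratch-column management code:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ScratchPool {
  private final Deque<int[]> free = new ArrayDeque<>();
  private final Deque<int[]> deferred = new ArrayDeque<>();

  int[] allocate() {
    // Reuse a freed column if available, otherwise create a new one.
    return free.isEmpty() ? new int[4] : free.pop();
  }

  // Instead of freeing immediately (which would let the next allocate()
  // overwrite a result an outer expression still holds), park the column.
  void deferFree(int[] col) {
    deferred.push(col);
  }

  // Called once the whole complex expression has been evaluated.
  void releaseDeferred() {
    while (!deferred.isEmpty()) {
      free.push(deferred.pop());
    }
  }

  public static void main(String[] args) {
    ScratchPool pool = new ScratchPool();
    int[] a = pool.allocate();
    a[0] = 42;                 // result an outer expression still needs
    pool.deferFree(a);
    int[] b = pool.allocate(); // gets a fresh column, not a's backing array
    System.out.println(a != b);               // true: no premature reuse
    pool.releaseDeferred();
    System.out.println(pool.allocate() == a); // true: reuse is now safe
  }
}
```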



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-15390) Orc reader unnecessarily reading stripe footers with hive.optimize.index.filter set to true

2017-01-12 Thread Abhishek Somani (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820473#comment-15820473
 ] 

Abhishek Somani edited comment on HIVE-15390 at 1/12/17 8:30 AM:
-

[~prasanth_j] [~rajesh.balamohan] [~gopalv] can you please review.


was (Author: asomani):
[~prasanth_j] [~rajesh.balamohan] can you please review.

> Orc reader unnecessarily reading stripe footers with 
> hive.optimize.index.filter set to true
> ---
>
> Key: HIVE-15390
> URL: https://issues.apache.org/jira/browse/HIVE-15390
> Project: Hive
>  Issue Type: Bug
>  Components: ORC
>Affects Versions: 1.2.1
>Reporter: Abhishek Somani
>Assignee: Abhishek Somani
> Attachments: HIVE-15390.1.patch, HIVE-15390.patch
>
>
> In a split given to a task, the task's orc reader is unnecessarily reading 
> stripe footers for stripes that are not its responsibility to read. This is 
> happening with hive.optimize.index.filter set to true.
> Assuming one split per task (no Tez grouping considered), a task should not 
> need to read beyond the split's end offset. Even in some split computation 
> strategies where a split's end offset can be in the middle of a stripe, it 
> should not need to read more than one stripe beyond the split's end offset(to 
> fully read a stripe that started in it). However I see that some tasks make 
> unnecessary filesystem calls to read all the stripe footers in a file from 
> the split start offset till the end of the file.
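The expected read scope can be sketched as a selection over stripe start offsets. The rule shown (read only stripes whose start offset falls inside the split's [start, end) range) is an assumption for illustration, not the ORC reader's exact strategy:

```java
import java.util.ArrayList;
import java.util.List;

public class StripeSelection {
  // stripeStarts is sorted ascending; return indices of stripes whose
  // start offset lies inside [splitStart, splitEnd). A stripe that begins
  // inside the split but ends past splitEnd is still this task's to read;
  // stripes starting at or after splitEnd belong to other tasks.
  static List<Integer> stripesFor(long[] stripeStarts, long splitStart, long splitEnd) {
    List<Integer> picked = new ArrayList<>();
    for (int i = 0; i < stripeStarts.length; i++) {
      if (stripeStarts[i] >= splitStart && stripeStarts[i] < splitEnd) {
        picked.add(i);
      }
    }
    return picked;
  }

  public static void main(String[] args) {
    long[] starts = {0, 100, 200, 300};
    // Split [100, 250): stripes 1 and 2 start inside it; stripe 3 does not,
    // so its footer should never be fetched by this task.
    System.out.println(stripesFor(starts, 100, 250)); // [1, 2]
  }
}
```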



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15390) Orc reader unnecessarily reading stripe footers with hive.optimize.index.filter set to true

2017-01-12 Thread Abhishek Somani (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820473#comment-15820473
 ] 

Abhishek Somani commented on HIVE-15390:


[~prasanth_j] [~rajesh.balamohan] can you please review.

> Orc reader unnecessarily reading stripe footers with 
> hive.optimize.index.filter set to true
> ---
>
> Key: HIVE-15390
> URL: https://issues.apache.org/jira/browse/HIVE-15390
> Project: Hive
>  Issue Type: Bug
>  Components: ORC
>Affects Versions: 1.2.1
>Reporter: Abhishek Somani
>Assignee: Abhishek Somani
> Attachments: HIVE-15390.1.patch, HIVE-15390.patch
>
>
> In a split given to a task, the task's orc reader is unnecessarily reading 
> stripe footers for stripes that are not its responsibility to read. This is 
> happening with hive.optimize.index.filter set to true.
> Assuming one split per task (no Tez grouping considered), a task should not 
> need to read beyond the split's end offset. Even in some split computation 
> strategies where a split's end offset can be in the middle of a stripe, it 
> should not need to read more than one stripe beyond the split's end offset(to 
> fully read a stripe that started in it). However I see that some tasks make 
> unnecessary filesystem calls to read all the stripe footers in a file from 
> the split start offset till the end of the file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-15473) Progress Bar on Beeline client

2017-01-12 Thread anishek (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820437#comment-15820437
 ] 

anishek edited comment on HIVE-15473 at 1/12/17 8:29 AM:
-

We can't use TOperationState, as those state names are used to display the 
states in the progress bar; we need to allow states as 'java.lang.String', 
as this will allow a progress bar for any other execution engine to be 
displayed. The rendering of the progress bar does not care about the 
HiveServer state representations, but rather about the execution engine 
state representations.

Since we also need to know on the client side when to stop querying for the 
progress bar, the internal execution engine states have to be mapped to 
JobExecutionStatus. For now, since the progress bar is only for Tez, the 
matching happens via the 'fromString' method in JobExecutionStatus. Ideally 
this class should do the relevant mapping. An idea of how this could be 
achieved:

{code}

public enum JobExecutionStatus {
  SUBMITTED((short) 0),
  INITING((short) 1),
  RUNNING((short) 2),
  SUCCEEDED((short) 3),
  KILLED((short) 4),
  FAILED((short) 5),
  ERROR((short) 6),
  NOT_AVAILABLE((short) 7);

  private final short executionStatusOrdinal;

  JobExecutionStatus(short executionStatusOrdinal) {
this.executionStatusOrdinal = executionStatusOrdinal;
  }

  public short toExecutionStatus() {
return executionStatusOrdinal;
  }

  public static JobExecutionStatus fromString(String input, StatusFinder 
finder) {
return finder.from(input);
  }

  interface StatusFinder {
JobExecutionStatus from(String inputStatus);
  }

  static class TezStatusFinder implements StatusFinder {

@Override
public JobExecutionStatus from(String inputStatus) {
  for (JobExecutionStatus status : values()) {
if (status.name().equals(inputStatus))
  return status;
  }
  return NOT_AVAILABLE;
}
  }
}
{code}

OR

maybe have two state variables in the response for GetProgressUpdate: one a 
String used for display, the other an OperationState object, allowing us to 
create control flow statements on the caller side.


was (Author: anishek):
We can't use TOperationState, as those state names are used to display the 
states in the progress bar; we need to allow states as 'java.lang.String', 
as this will allow a progress bar for any other execution engine to be 
displayed. The rendering of the progress bar does not care about the 
HiveServer state representations, but rather about the execution engine 
state representations.

Since we also need to know on the client side when to stop querying for the 
progress bar, the internal execution engine states have to be mapped to 
JobExecutionStatus. For now, since the progress bar is only for Tez, the 
matching happens via the 'fromString' method in JobExecutionStatus. Ideally 
this class should do the relevant mapping. An idea of how this could be 
achieved:

{code}

public enum JobExecutionStatus {
  SUBMITTED((short) 0),
  INITING((short) 1),
  RUNNING((short) 2),
  SUCCEEDED((short) 3),
  KILLED((short) 4),
  FAILED((short) 5),
  ERROR((short) 6),
  NOT_AVAILABLE((short) 7);

  private final short executionStatusOrdinal;

  JobExecutionStatus(short executionStatusOrdinal) {
this.executionStatusOrdinal = executionStatusOrdinal;
  }

  public short toExecutionStatus() {
return executionStatusOrdinal;
  }

  public static JobExecutionStatus fromString(String input, StatusFinder 
finder) {
return finder.from(input);
  }

  interface StatusFinder {
JobExecutionStatus from(String inputStatus);
  }

  static class TezStatusFinder implements StatusFinder {

@Override
public JobExecutionStatus from(String inputStatus) {
  for (JobExecutionStatus status : values()) {
if (status.name().equals(inputStatus))
  return status;
  }
  return NOT_AVAILABLE;
}
  }
}
{code}

> Progress Bar on Beeline client
> --
>
> Key: HIVE-15473
> URL: https://issues.apache.org/jira/browse/HIVE-15473
> Project: Hive
>  Issue Type: Improvement
>  Components: Beeline, HiveServer2
>Affects Versions: 2.1.1
>Reporter: anishek
>Assignee: anishek
>Priority: Minor
> Attachments: HIVE-15473.2.patch, HIVE-15473.3.patch, 
> HIVE-15473.4.patch, screen_shot_beeline.jpg
>
>
> Hive CLI allows showing a progress bar for the Tez execution engine, as shown in 
> https://issues.apache.org/jira/secure/attachment/12678767/ux-demo.gif
> It would be great to have a similar progress bar displayed when the user is 
> connecting via the Beeline command line client as well. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-15473) Progress Bar on Beeline client

2017-01-12 Thread anishek (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820437#comment-15820437
 ] 

anishek edited comment on HIVE-15473 at 1/12/17 8:27 AM:
-

We can't use TOperationState, as those state names are used to display the 
states in the progress bar; we need to allow states as 'java.lang.String', 
as this will allow a progress bar for any other execution engine to be 
displayed. The rendering of the progress bar does not care about the 
HiveServer state representations, but rather about the execution engine 
state representations.

Since we also need to know on the client side when to stop querying for the 
progress bar, the internal execution engine states have to be mapped to 
JobExecutionStatus. For now, since the progress bar is only for Tez, the 
matching happens via the 'fromString' method in JobExecutionStatus. Ideally 
this class should do the relevant mapping. An idea of how this could be 
achieved:

{code}

public enum JobExecutionStatus {
  SUBMITTED((short) 0),
  INITING((short) 1),
  RUNNING((short) 2),
  SUCCEEDED((short) 3),
  KILLED((short) 4),
  FAILED((short) 5),
  ERROR((short) 6),
  NOT_AVAILABLE((short) 7);

  private final short executionStatusOrdinal;

  JobExecutionStatus(short executionStatusOrdinal) {
this.executionStatusOrdinal = executionStatusOrdinal;
  }

  public short toExecutionStatus() {
return executionStatusOrdinal;
  }

  public static JobExecutionStatus fromString(String input, StatusFinder 
finder) {
return finder.from(input);
  }

  interface StatusFinder {
JobExecutionStatus from(String inputStatus);
  }

  static class TezStatusFinder implements StatusFinder {

@Override
public JobExecutionStatus from(String inputStatus) {
  for (JobExecutionStatus status : values()) {
if (status.name().equals(inputStatus))
  return status;
  }
  return NOT_AVAILABLE;
}
  }
}
{code}


was (Author: anishek):
We can't use TOperationState, as those state names are used to display the 
states in the progress bar; we need to allow states as 'java.lang.String', 
as this will allow a progress bar for any other execution engine to be 
displayed. The rendering of the progress bar does not care about the 
HiveServer state representations, but rather about the execution engine 
state representations.

> Progress Bar on Beeline client
> --
>
> Key: HIVE-15473
> URL: https://issues.apache.org/jira/browse/HIVE-15473
> Project: Hive
>  Issue Type: Improvement
>  Components: Beeline, HiveServer2
>Affects Versions: 2.1.1
>Reporter: anishek
>Assignee: anishek
>Priority: Minor
> Attachments: HIVE-15473.2.patch, HIVE-15473.3.patch, 
> HIVE-15473.4.patch, screen_shot_beeline.jpg
>
>
> Hive CLI allows showing a progress bar for the Tez execution engine, as shown in 
> https://issues.apache.org/jira/secure/attachment/12678767/ux-demo.gif
> It would be great to have a similar progress bar displayed when the user is 
> connecting via the Beeline command line client as well. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14707) ACID: Insert shuffle sort-merges on blank KEY

2017-01-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820466#comment-15820466
 ] 

Hive QA commented on HIVE-14707:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12847110/HIVE-14707.19.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 10943 tests 
executed
*Failed tests:*
{noformat}
TestDerbyConnector - did not produce a TEST-*.xml file (likely timed out) 
(batchId=234)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[offset_limit_ppd_optimizer]
 (batchId=150)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part]
 (batchId=148)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_exists]
 (batchId=145)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_in]
 (batchId=149)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_multi]
 (batchId=140)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_notin]
 (batchId=150)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_char_simple]
 (batchId=146)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_varchar_simple]
 (batchId=151)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3] 
(batchId=92)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query23] 
(batchId=221)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query69] 
(batchId=221)
org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerCheckInvocation.org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerCheckInvocation
 (batchId=208)
org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerShowFilters.org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerShowFilters
 (batchId=208)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/2903/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/2903/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-2903/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 14 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12847110 - PreCommit-HIVE-Build

> ACID: Insert shuffle sort-merges on blank KEY
> -
>
> Key: HIVE-14707
> URL: https://issues.apache.org/jira/browse/HIVE-14707
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.2.0
>Reporter: Gopal V
>Assignee: Eugene Koifman
> Attachments: HIVE-14707.01.patch, HIVE-14707.02.patch, 
> HIVE-14707.03.patch, HIVE-14707.04.patch, HIVE-14707.05.patch, 
> HIVE-14707.06.patch, HIVE-14707.08.patch, HIVE-14707.09.patch, 
> HIVE-14707.10.patch, HIVE-14707.11.patch, HIVE-14707.13.patch, 
> HIVE-14707.14.patch, HIVE-14707.16.patch, HIVE-14707.17.patch, 
> HIVE-14707.18.patch, HIVE-14707.19.patch, HIVE-14707.19.patch
>
>
> The ACID insert codepath uses a sorted shuffle, while the key used for the 
> shuffle is always 0 bytes long.
> {code}
> hive (sales_acid)> explain insert into sales values(1, 2, 
> '3400---009', 1, null);
> STAGE PLANS:
>   Stage: Stage-1
> Tez
>   DagId: gopal_20160906172626_80261c4c-79cc-4e02-87fe-3133be404e55:2
>   Edges:
> Reducer 2 <- Map 1 (SIMPLE_EDGE)
> ...
>   Vertices:
> Map 1 
> Map Operator Tree:
> TableScan
>   alias: values__tmp__table__2
>   Statistics: Num rows: 1 Data size: 28 Basic stats: COMPLETE 
> Column stats: NONE
>   Select Operator
> expressions: tmp_values_col1 (type: string), 
> tmp_values_col2 (type: string), tmp_values_col3 (type: string), 
> tmp_values_col4 (type: string), tmp_values_col5 (type: string)
> outputColumnNames: _col0, _col1, _col2, _col3, _col4
> Statistics: Num rows: 1 Data size: 28 Basic stats: 
> COMPLETE Column stats: NONE
> Reduce Output Operator
>   sort order: 
>   Map-reduce partition columns: UDFToLong(_col1) (type: 
> bigint)
>   Statistics: Num rows: 1 Data size: 28 Basic stats: 
> COMPLETE Column stats: NONE
>   value expressions: _col0 (type: string), _col1 (type: 
> string), _col2 (type: string), _col3 (type: 

[jira] [Commented] (HIVE-15473) Progress Bar on Beeline client

2017-01-12 Thread anishek (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820437#comment-15820437
 ] 

anishek commented on HIVE-15473:


We can't use TOperationState, because those state names are used to display the states 
in the progress bar. We need to accept states as a java.lang.String so that a progress 
bar for any other execution engine can be displayed: the rendering of the progress bar 
does not care about the HiveServer state representations, only about the execution 
engine's state representations. 
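The design point above can be sketched in a few lines: the renderer treats the state as an opaque string it merely displays, so any engine's state names pass through unchanged. All names here (`Main`, `renderProgressLine`) are hypothetical illustrations, not the actual patch API:

```java
public class Main {
  // Hypothetical renderer: it formats the engine-supplied state verbatim
  // and never interprets it, so Tez, MR, or any other engine works.
  static String renderProgressLine(String engineState, int percent) {
    return String.format("[%s] %d%%", engineState, percent);
  }

  public static void main(String[] args) {
    System.out.println(renderProgressLine("RUNNING", 42));  // a Tez state name
    System.out.println(renderProgressLine("ACCEPTED", 0));  // some other engine's state name
  }
}
```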

> Progress Bar on Beeline client
> --
>
> Key: HIVE-15473
> URL: https://issues.apache.org/jira/browse/HIVE-15473
> Project: Hive
>  Issue Type: Improvement
>  Components: Beeline, HiveServer2
>Affects Versions: 2.1.1
>Reporter: anishek
>Assignee: anishek
>Priority: Minor
> Attachments: HIVE-15473.2.patch, HIVE-15473.3.patch, 
> HIVE-15473.4.patch, screen_shot_beeline.jpg
>
>
> Hive Cli allows showing progress bar for tez execution engine as shown in 
> https://issues.apache.org/jira/secure/attachment/12678767/ux-demo.gif
> it would be great to have similar progress bar displayed when user is 
> connecting via beeline command line client as well. 


