[jira] [Commented] (HIVE-19161) Add authorizations to information schema

2018-04-27 Thread Daniel Dai (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457375#comment-16457375
 ] 

Daniel Dai commented on HIVE-19161:
---

HIVE-19161.11.patch to fix checkstyle warnings.

> Add authorizations to information schema
> 
>
> Key: HIVE-19161
> URL: https://issues.apache.org/jira/browse/HIVE-19161
> Project: Hive
>  Issue Type: Improvement
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-19161.1.patch, HIVE-19161.10.patch, 
> HIVE-19161.11.patch, HIVE-19161.2.patch, HIVE-19161.3.patch, 
> HIVE-19161.4.patch, HIVE-19161.5.patch, HIVE-19161.6.patch, 
> HIVE-19161.7.patch, HIVE-19161.8.patch, HIVE-19161.9.patch
>
>
> We need to control access to the information schema so users can only query 
> the information they are authorized to see.
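For context, Hive exposes the information schema as ordinary SQL views, so the authorization work above amounts to filtering what queries such as the following return. A minimal sketch, assuming a made-up 'sales_db' database that is not part of this issue:
{noformat}
-- Hypothetical sketch: with authorization wired in, these queries should
-- return only the objects the current user is allowed to see.
USE information_schema;

SELECT table_schema, table_name
FROM   tables
WHERE  table_schema = 'sales_db';

SELECT table_name, column_name
FROM   columns
WHERE  table_schema = 'sales_db';
{noformat}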



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19161) Add authorizations to information schema

2018-04-27 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-19161:
--
Attachment: HIVE-19161.11.patch

> Add authorizations to information schema
> 
>
> Key: HIVE-19161
> URL: https://issues.apache.org/jira/browse/HIVE-19161
> Project: Hive
>  Issue Type: Improvement
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-19161.1.patch, HIVE-19161.10.patch, 
> HIVE-19161.11.patch, HIVE-19161.2.patch, HIVE-19161.3.patch, 
> HIVE-19161.4.patch, HIVE-19161.5.patch, HIVE-19161.6.patch, 
> HIVE-19161.7.patch, HIVE-19161.8.patch, HIVE-19161.9.patch
>
>
> We need to control access to the information schema so users can only query 
> the information they are authorized to see.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19054) Function replication shall use "hive.repl.replica.functions.root.dir" as root

2018-04-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457371#comment-16457371
 ] 

Hive QA commented on HIVE-19054:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10547/dev-support/hive-personality.sh
 |
| git revision | master / e388bc7 |
| Default Java | 1.8.0_111 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10547/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Function replication shall use "hive.repl.replica.functions.root.dir" as root
> -
>
> Key: HIVE-19054
> URL: https://issues.apache.org/jira/browse/HIVE-19054
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-19054.1.patch, HIVE-19054.2.patch, 
> HIVE-19054.3.patch, HIVE-19054.4.patch
>
>
> It wrongly uses fs.defaultFS as the root and ignores the 
> "hive.repl.replica.functions.root.dir" definition, thus preventing replication 
> to a cloud destination.
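For illustration only, the expected behavior is that the replica-side property below decides where function binaries land when a dump is applied, rather than fs.defaultFS. A hedged sketch; the bucket, database, and dump paths are made up:
{noformat}
-- Point the replica's function root at cloud storage (illustrative path).
SET hive.repl.replica.functions.root.dir=s3a://example-bucket/hive/repl/functions;

-- Function binaries copied while applying the dump should land under the
-- directory above, not under fs.defaultFS.
REPL LOAD sales_db FROM '/user/hive/repl/dump_dir';
{noformat}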



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17657) export/import for MM tables is broken

2018-04-27 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457362#comment-16457362
 ] 

Sankar Hariappan commented on HIVE-17657:
-

[~sershe], [~ekoifman],

I have a couple of questions.
 # How does getAcidState list these directories? If it returns mm_table_import, 
then replication will also create the same structure at the target with subdirs. 
But if it returns only the delta_x_x_y list, then at the target we skip the parent 
directory mm_table_import; instead, the delta dir will be created directly in the 
warehouse data location, though it still includes the subdirs under the delta dir.
 # Will any of these directory contents change (such as adding another delta 
directory, renaming, etc.) after listing new files for notification events? There 
are no issues with compaction, as it will archive these files to cmroot before 
clean-up.

> export/import for MM tables is broken
> -
>
> Key: HIVE-17657
> URL: https://issues.apache.org/jira/browse/HIVE-17657
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17657.01.patch, HIVE-17657.02.patch, 
> HIVE-17657.03.patch, HIVE-17657.04.patch, HIVE-17657.05.patch, 
> HIVE-17657.patch
>
>
> there is mm_exim.q but it's not clear from the tests what file structure it 
> creates 
> On import the txnids in the directory names would have to be remapped if 
> importing to a different cluster.  Perhaps export can be smart and export 
> highest base_x and accretive deltas (minus aborted ones).  Then import can 
> ...?  It would have to remap txn ids from the archive to new txn ids.  This 
> would then mean that import is made up of several transactions rather than 1 
> atomic op.  (all locks must belong to a transaction)
> One possibility is to open a new txn for each dir in the archive (where 
> start/end txn of file name is the same) and commit all of them at once (need 
> new TMgr API for that).  This assumes using a shared lock (if any!) and thus 
> allows other inserts (not related to import) to occur.
> What if you have delta_6_9, such as a result of concatenate?  If we stipulate 
> that this must mean that there is no delta_6_6 or any other "obsolete" delta 
> in the archive we can map it to a new single txn delta_x_x.
> Add read_only mode for tables (useful in general, may be needed for upgrade 
> etc) and use that to make the above atomic.
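For readers following the discussion, the flow under test is the plain EXPORT/IMPORT pair applied to an insert-only (MM) transactional table. A minimal sketch; table and path names are made up:
{noformat}
-- Hypothetical repro sketch for an insert-only (MM) table.
CREATE TABLE mm_src (id INT) STORED AS ORC
TBLPROPERTIES ('transactional'='true',
               'transactional_properties'='insert_only');
INSERT INTO mm_src VALUES (1), (2);

EXPORT TABLE mm_src TO '/tmp/mm_export';
-- On import, txn ids embedded in the delta_x_x directory names may need to be
-- remapped, which is the crux of the description above.
IMPORT TABLE mm_table_import FROM '/tmp/mm_export';
{noformat}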



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19311) Partition and bucketing support for “load data” statement

2018-04-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457360#comment-16457360
 ] 

Hive QA commented on HIVE-19311:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12920967/HIVE-19311.3.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10544/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10544/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10544/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-04-28 05:03:14.793
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-10544/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-04-28 05:03:14.804
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at e388bc7 HIVE-19338 : isExplicitAnalyze method may be incorrect 
in BasicStatsTask (Sergey Shelukhin, reviewed by Jesus Camacho Rodriguez)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at e388bc7 HIVE-19338 : isExplicitAnalyze method may be incorrect 
in BasicStatsTask (Sergey Shelukhin, reviewed by Jesus Camacho Rodriguez)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-04-28 05:03:24.537
+ rm -rf ../yetus_PreCommit-HIVE-Build-10544
+ mkdir ../yetus_PreCommit-HIVE-Build-10544
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-10544
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-10544/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: a/itests/src/test/resources/testconfiguration.properties: does not exist 
in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/Context.java: does not exist in 
index
error: a/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java: does not exist in 
index
error: a/ql/src/java/org/apache/hadoop/hive/ql/metadata/Table.java: does not 
exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/LoadSemanticAnalyzer.java: 
does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java: 
does not exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/parse/UpdateDeleteSemanticAnalyzer.java:
 does not exist in index
error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/Context.java:1067
Falling back to three-way merge...
Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/Context.java' with 
conflicts.
Going to apply patch with: git apply -p1
/data/hiveptest/working/scratch/build.patch:1146: trailing whitespace.
Map 1 
/data/hiveptest/working/scratch/build.patch:1173: trailing whitespace.
ds 
/data/hiveptest/working/scratch/build.patch:1354: trailing whitespace.
Map 1 
/data/hiveptest/working/scratch/build.patch:1381: trailing whitespace.
ds 
/data/hiveptest/working/scratch/build.patch:1382: trailing whitespace.
hr 
error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/Context.java:1067
Falling back to three-way merge...
Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/Context.java' with 
conflicts.
U ql/src/java/org/apache/hadoop/hive/ql/Context.java
warning: squelched 35 whitespace errors
warning: 40 lines add whitespace errors.
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12920967 - PreCommit-HIVE-Build

> Partition and bucketing support for “load data” statement
> -
>
> Key: 

[jira] [Commented] (HIVE-19340) Disable timeout of transactions opened by replication task at target cluster

2018-04-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457359#comment-16457359
 ] 

Hive QA commented on HIVE-19340:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12920935/HIVE-19340.01.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 54 failed/errored test(s), 14285 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=93)

[infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q]
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed 
out) (batchId=217)
TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] 
(batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_numeric_overflows]
 (batchId=70)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_1]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_5]
 (batchId=154)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_stats]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=105)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[cluster_tasklog_retrieval]
 (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace]
 (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace_turnoff]
 (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[minimr_broken_pipe]
 (batchId=98)
org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion 
(batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 
(batchId=228)
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232)
org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate
 (batchId=231)
org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys
 (batchId=231)
org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testDifferentFiltersAreNotMatched
 (batchId=298)
org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testSameFiltersMatched 
(batchId=298)
org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
 (batchId=298)
org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
 (batchId=298)
org.apache.hadoop.hive.ql.plan.mapping.TestReOptimization.testNotReExecutedIfAssertionError
 (batchId=298)
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgressParallel 
(batchId=235)
org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=240)
org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryElapsedTime
 (batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryExecutionTime
 (batchId=242)

[jira] [Updated] (HIVE-19317) Handle schema evolution from int like types to decimal

2018-04-27 Thread Janaki Lahorani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janaki Lahorani updated HIVE-19317:
---
Attachment: HIVE-19317.2.patch

> Handle schema evolution from int like types to decimal
> --
>
> Key: HIVE-19317
> URL: https://issues.apache.org/jira/browse/HIVE-19317
> Project: Hive
>  Issue Type: Bug
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-19317.1.patch, HIVE-19317.2.patch
>
>
> If an int-like type is changed to decimal on Parquet data, a SELECT results in 
> errors.
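A minimal repro sketch of the described scenario; the table, column names, and values are made up:
{noformat}
-- Hypothetical sketch: int-like column later promoted to decimal on Parquet.
CREATE TABLE parq_evolve (id INT, amount BIGINT) STORED AS PARQUET;
INSERT INTO parq_evolve VALUES (1, 100);

-- Metadata-only type change; the existing Parquet files still hold int64.
ALTER TABLE parq_evolve CHANGE amount amount DECIMAL(10,2);

-- Reading the old files through the new decimal schema is the failing case.
SELECT id, amount FROM parq_evolve;
{noformat}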



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19340) Disable timeout of transactions opened by replication task at target cluster

2018-04-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457346#comment-16457346
 ] 

Hive QA commented on HIVE-19340:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
13s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
24s{color} | {color:red} standalone-metastore: The patch generated 2 new + 551 
unchanged - 3 fixed = 553 total (was 554) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10542/dev-support/hive-personality.sh
 |
| git revision | master / e388bc7 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10542/yetus/diff-checkstyle-standalone-metastore.txt
 |
| modules | C: ql standalone-metastore U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10542/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Disable timeout of transactions opened by replication task at target cluster
> 
>
> Key: HIVE-19340
> URL: https://issues.apache.org/jira/browse/HIVE-19340
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl, Transactions
>Affects Versions: 3.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: ACID, DR, pull-request-available, replication
> Fix For: 3.0.0
>
> Attachments: HIVE-19340.01.patch
>
>
> The transactions opened by applying EVENT_OPEN_TXN should never be aborted 
> automatically due to time-out. Aborting a transaction started by a replication 
> task may lead to an inconsistent state at the target, which needs additional 
> overhead to clean up. So it is proposed to mark the transactions opened by the 
> replication task as special ones that shouldn't be aborted if the heartbeat is 
> lost. This helps ensure all ABORT and COMMIT events will always find the 
> corresponding txn at the target to operate on.
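In other words, transactions opened while replaying EVENT_OPEN_TXN should outlive the normal heartbeat timeout. A hedged sketch of the pieces involved; the timeout value, database name, and dump path are illustrative, not from this patch:
{noformat}
-- hive.txn.timeout (normally a server-side setting, shown here only for
-- illustration) controls when transactions without heartbeats are aborted.
SET hive.txn.timeout=300;

-- REPL LOAD replays EVENT_OPEN_TXN on the target; per the proposal, the
-- transactions it opens are exempt from this timeout until the matching
-- COMMIT/ABORT event arrives in a later replication cycle.
REPL LOAD sales_db FROM '/user/hive/repl/dump_dir';
{noformat}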



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19110) Vectorization: Enabling vectorization causes TestContribCliDriver udf_example_arraymapstruct.q to produce Wrong Results

2018-04-27 Thread Haifeng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haifeng Chen updated HIVE-19110:

Status: Patch Available  (was: Open)

Removed the BUG comment from the original qtest file, fixed one checkstyle error 
and one whitespace error, and submitted a new patch.

> Vectorization: Enabling vectorization causes TestContribCliDriver 
> udf_example_arraymapstruct.q to produce Wrong Results
> ---
>
> Key: HIVE-19110
> URL: https://issues.apache.org/jira/browse/HIVE-19110
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Haifeng Chen
>Priority: Critical
> Attachments: HIVE-19110.01.patch, HIVE-19110.02.patch, 
> HIVE-19110.03.patch
>
>
> Found in vectorization enable by default experiment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19110) Vectorization: Enabling vectorization causes TestContribCliDriver udf_example_arraymapstruct.q to produce Wrong Results

2018-04-27 Thread Haifeng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haifeng Chen updated HIVE-19110:

Attachment: HIVE-19110.03.patch

> Vectorization: Enabling vectorization causes TestContribCliDriver 
> udf_example_arraymapstruct.q to produce Wrong Results
> ---
>
> Key: HIVE-19110
> URL: https://issues.apache.org/jira/browse/HIVE-19110
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Haifeng Chen
>Priority: Critical
> Attachments: HIVE-19110.01.patch, HIVE-19110.02.patch, 
> HIVE-19110.03.patch
>
>
> Found in vectorization enable by default experiment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19184) Hive 3.0.0 release branch preparation

2018-04-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457340#comment-16457340
 ] 

Hive QA commented on HIVE-19184:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12919517/HIVE-19184.01-branch-3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 71 failed/errored test(s), 14133 tests 
executed
*Failed tests:*
{noformat}
TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=253)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=253)
TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=253)
TestMiniDruidKafkaCliDriver - did not produce a TEST-*.xml file (likely timed 
out) (batchId=253)
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed 
out) (batchId=217)
TestTezPerfCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=253)
TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286)
org.apache.hadoop.hive.cli.TestContribCliDriver.testCliDriver[udaf_example_max_n]
 (batchId=248)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_1]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_5]
 (batchId=154)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_bmj_schema_evolution]
 (batchId=154)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] 
(batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat]
 (batchId=183)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1]
 (batchId=183)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=105)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[infer_bucket_sort_dyn_part]
 (batchId=93)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[infer_bucket_sort_map_operators]
 (batchId=93)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[infer_bucket_sort_num_buckets]
 (batchId=93)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[parallel_orderby] 
(batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[avro_non_nullable_union]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[cachingprintstream]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[compute_stats_long]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[dyn_part3] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[dyn_part_max_per_node]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[dynamic_partitions_with_whitelist]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[script_broken_pipe2]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[script_broken_pipe3]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[script_error] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[serde_regex2] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_2]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_publisher_error_1]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_publisher_error_2]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_corr_in_agg]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_in_implicit_gby]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_notin_implicit_gby]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_scalar_corr_multi_rows]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_scalar_multi_rows]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_assert_true2]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_assert_true] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error_reduce]
 (batchId=95)

[jira] [Updated] (HIVE-19110) Vectorization: Enabling vectorization causes TestContribCliDriver udf_example_arraymapstruct.q to produce Wrong Results

2018-04-27 Thread Haifeng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haifeng Chen updated HIVE-19110:

Status: Open  (was: Patch Available)

> Vectorization: Enabling vectorization causes TestContribCliDriver 
> udf_example_arraymapstruct.q to produce Wrong Results
> ---
>
> Key: HIVE-19110
> URL: https://issues.apache.org/jira/browse/HIVE-19110
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Haifeng Chen
>Priority: Critical
> Attachments: HIVE-19110.01.patch, HIVE-19110.02.patch
>
>
> Found in vectorization enable by default experiment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19108) Vectorization and Parquet: Turning on vectorization in parquet_ppd_decimal.q causes Wrong Query Results

2018-04-27 Thread Haifeng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haifeng Chen updated HIVE-19108:

Status: Patch Available  (was: Open)

Resubmitted the patch, updating the qtest for the latest changes to the testing 
infrastructure and removing the BUG comment from the original test file.

 

> Vectorization and Parquet: Turning on vectorization in parquet_ppd_decimal.q 
> causes Wrong Query Results
> ---
>
> Key: HIVE-19108
> URL: https://issues.apache.org/jira/browse/HIVE-19108
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Haifeng Chen
>Priority: Critical
> Attachments: HIVE-19108.01.patch, HIVE-19108.02.patch, 
> HIVE-19108.03.patch
>
>
> Found in vectorization enable by default experiment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19108) Vectorization and Parquet: Turning on vectorization in parquet_ppd_decimal.q causes Wrong Query Results

2018-04-27 Thread Haifeng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haifeng Chen updated HIVE-19108:

Attachment: (was: HIVE-19108.03.patch)

> Vectorization and Parquet: Turning on vectorization in parquet_ppd_decimal.q 
> causes Wrong Query Results
> ---
>
> Key: HIVE-19108
> URL: https://issues.apache.org/jira/browse/HIVE-19108
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Haifeng Chen
>Priority: Critical
> Attachments: HIVE-19108.01.patch, HIVE-19108.02.patch, 
> HIVE-19108.03.patch
>
>
> Found in vectorization enable by default experiment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19108) Vectorization and Parquet: Turning on vectorization in parquet_ppd_decimal.q causes Wrong Query Results

2018-04-27 Thread Haifeng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haifeng Chen updated HIVE-19108:

Attachment: HIVE-19108.03.patch

> Vectorization and Parquet: Turning on vectorization in parquet_ppd_decimal.q 
> causes Wrong Query Results
> ---
>
> Key: HIVE-19108
> URL: https://issues.apache.org/jira/browse/HIVE-19108
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Haifeng Chen
>Priority: Critical
> Attachments: HIVE-19108.01.patch, HIVE-19108.02.patch, 
> HIVE-19108.03.patch
>
>
> Found in vectorization enable by default experiment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19108) Vectorization and Parquet: Turning on vectorization in parquet_ppd_decimal.q causes Wrong Query Results

2018-04-27 Thread Haifeng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haifeng Chen updated HIVE-19108:

Status: Open  (was: Patch Available)

> Vectorization and Parquet: Turning on vectorization in parquet_ppd_decimal.q 
> causes Wrong Query Results
> ---
>
> Key: HIVE-19108
> URL: https://issues.apache.org/jira/browse/HIVE-19108
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Haifeng Chen
>Priority: Critical
> Attachments: HIVE-19108.01.patch, HIVE-19108.02.patch, 
> HIVE-19108.03.patch
>
>
> Found in vectorization enable by default experiment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19108) Vectorization and Parquet: Turning on vectorization in parquet_ppd_decimal.q causes Wrong Query Results

2018-04-27 Thread Haifeng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haifeng Chen updated HIVE-19108:

Attachment: HIVE-19108.03.patch

> Vectorization and Parquet: Turning on vectorization in parquet_ppd_decimal.q 
> causes Wrong Query Results
> ---
>
> Key: HIVE-19108
> URL: https://issues.apache.org/jira/browse/HIVE-19108
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Haifeng Chen
>Priority: Critical
> Attachments: HIVE-19108.01.patch, HIVE-19108.02.patch, 
> HIVE-19108.03.patch
>
>
> Found in vectorization enable by default experiment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19118) Vectorization: Turning on vectorization in escape_crlf produces wrong results

2018-04-27 Thread Haifeng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haifeng Chen updated HIVE-19118:

Status: Patch Available  (was: Open)

Resubmitted the patch with a vectorization qtest for the escaping case and 
removed the comment from the original qtest.

> Vectorization: Turning on vectorization in escape_crlf produces wrong results
> -
>
> Key: HIVE-19118
> URL: https://issues.apache.org/jira/browse/HIVE-19118
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Haifeng Chen
>Priority: Critical
> Attachments: HIVE-19118.01.patch, HIVE-19118.02.patch, 
> HIVE-19118.03.patch, HIVE-19118.04.patch
>
>
> Found in vectorization enable by default experiment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19118) Vectorization: Turning on vectorization in escape_crlf produces wrong results

2018-04-27 Thread Haifeng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haifeng Chen updated HIVE-19118:

Attachment: HIVE-19118.04.patch

> Vectorization: Turning on vectorization in escape_crlf produces wrong results
> -
>
> Key: HIVE-19118
> URL: https://issues.apache.org/jira/browse/HIVE-19118
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Haifeng Chen
>Priority: Critical
> Attachments: HIVE-19118.01.patch, HIVE-19118.02.patch, 
> HIVE-19118.03.patch, HIVE-19118.04.patch
>
>
> Found in vectorization enable by default experiment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19118) Vectorization: Turning on vectorization in escape_crlf produces wrong results

2018-04-27 Thread Haifeng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haifeng Chen updated HIVE-19118:

Status: Open  (was: Patch Available)

> Vectorization: Turning on vectorization in escape_crlf produces wrong results
> -
>
> Key: HIVE-19118
> URL: https://issues.apache.org/jira/browse/HIVE-19118
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Haifeng Chen
>Priority: Critical
> Attachments: HIVE-19118.01.patch, HIVE-19118.02.patch, 
> HIVE-19118.03.patch
>
>
> Found in vectorization enable by default experiment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19184) Hive 3.0.0 release branch preparation

2018-04-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457296#comment-16457296
 ] 

Hive QA commented on HIVE-19184:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  1m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10541/dev-support/hive-personality.sh
 |
| git revision | master / e388bc7 |
| modules | C: . U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10541/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Hive 3.0.0 release branch preparation
> -
>
> Key: HIVE-19184
> URL: https://issues.apache.org/jira/browse/HIVE-19184
> Project: Hive
>  Issue Type: Task
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19184.01-branch-3.patch
>
>
> Need to do a bunch of things to prepare branch-3 for release, e.g.
> * Update pom.xml to delete SNAPSHOT
> * Update .reviewboardrc
> * Remove storage-api module from the build
> * Change storage-api dependency etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19324) improve YARN queue check error message in Tez pool

2018-04-27 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19324:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed to master. Thanks for the review!

> improve YARN queue check error message in Tez pool
> --
>
> Key: HIVE-19324
> URL: https://issues.apache.org/jira/browse/HIVE-19324
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepesh Khandelwal
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19324.01.patch, HIVE-19324.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19338) isExplicitAnalyze method may be incorrect in BasicStatsTask

2018-04-27 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19338:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed. Thanks for the review!

> isExplicitAnalyze method may be incorrect in BasicStatsTask
> ---
>
> Key: HIVE-19338
> URL: https://issues.apache.org/jira/browse/HIVE-19338
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19338.patch
>
>
> It relies on a specific ctor being used; however, this ctor is used on 
> non-analyze paths too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19339) Regenerate alltypesorc file with latest ORC

2018-04-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457292#comment-16457292
 ] 

Hive QA commented on HIVE-19339:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12920924/HIVE-19339.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 144 failed/errored test(s), 14284 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=93)

[infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q]
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed 
out) (batchId=217)
TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[foldts] (batchId=59)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_join_pushdown] 
(batchId=82)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge9] (batchId=27)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge_incompat3] 
(batchId=9)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[timestamp_ints_casts] 
(batchId=1)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_decimal_cast] 
(batchId=34)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_elt] (batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_empty_where] 
(batchId=24)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_if_expr] 
(batchId=11)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_left_outer_join] 
(batchId=22)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_non_constant_in_expr]
 (batchId=78)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_nvl] (batchId=75)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_tablesample_rows] 
(batchId=53)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_udf3] (batchId=65)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_varchar_simple] 
(batchId=78)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_10] 
(batchId=26)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_11] 
(batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_12] 
(batchId=11)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_13] 
(batchId=51)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_14] 
(batchId=15)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_15] 
(batchId=66)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_16] 
(batchId=44)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_17] 
(batchId=90)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_1] 
(batchId=60)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_2] 
(batchId=21)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_3] 
(batchId=79)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_4] 
(batchId=21)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_5] 
(batchId=59)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_6] 
(batchId=28)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_7] 
(batchId=45)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_8] 
(batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_9] 
(batchId=1)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_limit] 
(batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_nested_udf]
 (batchId=67)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_not] 
(batchId=44)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_offset_limit]
 (batchId=46)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_pushdown] 
(batchId=49)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_case] 
(batchId=59)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_casts] 
(batchId=84)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_distinct_gby] 
(batchId=76)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_mapjoin] 
(batchId=75)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_math_funcs] 
(batchId=21)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_shufflejoin] 
(batchId=75)

[jira] [Comment Edited] (HIVE-17657) export/import for MM tables is broken

2018-04-27 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457291#comment-16457291
 ] 

Sergey Shelukhin edited comment on HIVE-17657 at 4/28/18 2:00 AM:
--

[~sankarh] for reference for the above comment, not_delta directories (that 
will be renamed to export_delta or something like that) do not represent 
anything and are just nested dirs, like e.g. from Tez union 


was (Author: sershe):
not_delta directories (that will be renamed to export_delta or something like 
that) do not represent anything and are just nested dirs, like e.g. from Tez 
union 

> export/import for MM tables is broken
> -
>
> Key: HIVE-17657
> URL: https://issues.apache.org/jira/browse/HIVE-17657
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17657.01.patch, HIVE-17657.02.patch, 
> HIVE-17657.03.patch, HIVE-17657.04.patch, HIVE-17657.05.patch, 
> HIVE-17657.patch
>
>
> there is mm_exim.q but it's not clear from the tests what file structure it 
> creates 
> On import the txnids in the directory names would have to be remapped if 
> importing to a different cluster.  Perhaps export can be smart and export 
> highest base_x and accretive deltas (minus aborted ones).  Then import can 
> ...?  It would have to remap txn ids from the archive to new txn ids.  This 
> would then mean that import is made up of several transactions rather than 1 
> atomic op.  (all locks must belong to a transaction)
> One possibility is to open a new txn for each dir in the archive (where 
> start/end txn of file name is the same) and commit all of them at once (need 
> new TMgr API for that).  This assumes using a shared lock (if any!) and thus 
> allows other inserts (not related to import) to occur.
> What if you have delta_6_9, such as a result of concatenate?  If we stipulate 
> that this must mean that there is no delta_6_6 or any other "obsolete" delta 
> in the archive we can map it to a new single txn delta_x_x.
> Add read_only mode for tables (useful in general, may be needed for upgrade 
> etc) and use that to make the above atomic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17657) export/import for MM tables is broken

2018-04-27 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457291#comment-16457291
 ] 

Sergey Shelukhin commented on HIVE-17657:
-

not_delta directories (that will be renamed to export_delta or something like 
that) do not represent anything and are just nested dirs, like e.g. from Tez 
union 

> export/import for MM tables is broken
> -
>
> Key: HIVE-17657
> URL: https://issues.apache.org/jira/browse/HIVE-17657
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17657.01.patch, HIVE-17657.02.patch, 
> HIVE-17657.03.patch, HIVE-17657.04.patch, HIVE-17657.05.patch, 
> HIVE-17657.patch
>
>
> there is mm_exim.q but it's not clear from the tests what file structure it 
> creates 
> On import the txnids in the directory names would have to be remapped if 
> importing to a different cluster.  Perhaps export can be smart and export 
> highest base_x and accretive deltas (minus aborted ones).  Then import can 
> ...?  It would have to remap txn ids from the archive to new txn ids.  This 
> would then mean that import is made up of several transactions rather than 1 
> atomic op.  (all locks must belong to a transaction)
> One possibility is to open a new txn for each dir in the archive (where 
> start/end txn of file name is the same) and commit all of them at once (need 
> new TMgr API for that).  This assumes using a shared lock (if any!) and thus 
> allows other inserts (not related to import) to occur.
> What if you have delta_6_9, such as a result of concatenate?  If we stipulate 
> that this must mean that there is no delta_6_6 or any other "obsolete" delta 
> in the archive we can map it to a new single txn delta_x_x.
> Add read_only mode for tables (useful in general, may be needed for upgrade 
> etc) and use that to make the above atomic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19258) add originals support to MM tables (and make the conversion a metadata only operation)

2018-04-27 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-19258:
--
Component/s: Transactions

> add originals support to MM tables (and make the conversion a metadata only 
> operation)
> --
>
> Key: HIVE-19258
> URL: https://issues.apache.org/jira/browse/HIVE-19258
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19258.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17657) export/import for MM tables is broken

2018-04-27 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457286#comment-16457286
 ] 

Eugene Koifman commented on HIVE-17657:
---

[~sershe],
left some RB comments - mostly nits, except one about original files

[~sankarh],
MM import will create structure like
{noformat}
├── mm_table_import
│   └── delta_001_001_
│   ├── not_delta_001_001_
│   │   └── 00_0
│   └── not_delta_002_002_
│   └── 00_0
{noformat}
i.e. with subdirs. Does this impact replication at all?


> export/import for MM tables is broken
> -
>
> Key: HIVE-17657
> URL: https://issues.apache.org/jira/browse/HIVE-17657
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17657.01.patch, HIVE-17657.02.patch, 
> HIVE-17657.03.patch, HIVE-17657.04.patch, HIVE-17657.05.patch, 
> HIVE-17657.patch
>
>
> there is mm_exim.q but it's not clear from the tests what file structure it 
> creates 
> On import the txnids in the directory names would have to be remapped if 
> importing to a different cluster.  Perhaps export can be smart and export 
> highest base_x and accretive deltas (minus aborted ones).  Then import can 
> ...?  It would have to remap txn ids from the archive to new txn ids.  This 
> would then mean that import is made up of several transactions rather than 1 
> atomic op.  (all locks must belong to a transaction)
> One possibility is to open a new txn for each dir in the archive (where 
> start/end txn of file name is the same) and commit all of them at once (need 
> new TMgr API for that).  This assumes using a shared lock (if any!) and thus 
> allows other inserts (not related to import) to occur.
> What if you have delta_6_9, such as a result of concatenate?  If we stipulate 
> that this must mean that there is no delta_6_6 or any other "obsolete" delta 
> in the archive we can map it to a new single txn delta_x_x.
> Add read_only mode for tables (useful in general, may be needed for upgrade 
> etc) and use that to make the above atomic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-19258) add originals support to MM tables (and make the conversion a metadata only operation)

2018-04-27 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457285#comment-16457285
 ] 

Sergey Shelukhin edited comment on HIVE-19258 at 4/28/18 1:52 AM:
--

A patch that works, except when it hits the issue linked to as blocking this. 
Also added compactor support and a test.


was (Author: sershe):
A patch that works, except when it hits the issue linked to as blocking this.

> add originals support to MM tables (and make the conversion a metadata only 
> operation)
> --
>
> Key: HIVE-19258
> URL: https://issues.apache.org/jira/browse/HIVE-19258
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19258.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19258) add originals support to MM tables (and make the conversion a metadata only operation)

2018-04-27 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19258:

Status: Patch Available  (was: In Progress)

A patch that works, except when it hits the issue linked to as blocking this.

> add originals support to MM tables (and make the conversion a metadata only 
> operation)
> --
>
> Key: HIVE-19258
> URL: https://issues.apache.org/jira/browse/HIVE-19258
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19258.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19258) add originals support to MM tables (and make the conversion a metadata only operation)

2018-04-27 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19258:

Attachment: HIVE-19258.patch

> add originals support to MM tables (and make the conversion a metadata only 
> operation)
> --
>
> Key: HIVE-19258
> URL: https://issues.apache.org/jira/browse/HIVE-19258
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19258.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19258) add originals support to MM tables (and make the conversion a metadata only operation)

2018-04-27 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19258:

Attachment: (was: HIVE-19258.WIP.patch)

> add originals support to MM tables (and make the conversion a metadata only 
> operation)
> --
>
> Key: HIVE-19258
> URL: https://issues.apache.org/jira/browse/HIVE-19258
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19258.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19339) Regenerate alltypesorc file with latest ORC

2018-04-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457264#comment-16457264
 ] 

Hive QA commented on HIVE-19339:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
59s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  1m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10540/dev-support/hive-personality.sh
 |
| git revision | master / cbc3863 |
| modules | C: . U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10540/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> Regenerate alltypesorc file with latest ORC
> ---
>
> Key: HIVE-19339
> URL: https://issues.apache.org/jira/browse/HIVE-19339
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-19339.patch
>
>
> Among others, new files contain timezone information in the stripe footer. We 
> want to run tests over {{alltypesorc}} file generated using more recent 
> format.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19312) MM tables don't work with BucketizedHIF

2018-04-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457259#comment-16457259
 ] 

Hive QA commented on HIVE-19312:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12920920/HIVE-19312.01.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 67 failed/errored test(s), 14285 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=93)

[infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q]
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed 
out) (batchId=217)
TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[filter_in_or_dup] 
(batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] 
(batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[truncate_column] 
(batchId=85)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[truncate_column_buckets] 
(batchId=26)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[truncate_column_list_bucket]
 (batchId=88)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[truncate_column_merge] 
(batchId=63)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[explainuser_2] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_smb] 
(batchId=176)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_5]
 (batchId=154)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mm_bhif] 
(batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_stats]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[truncate_column_buckets]
 (batchId=182)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=105)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[cluster_tasklog_retrieval]
 (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace]
 (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace_turnoff]
 (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[minimr_broken_pipe]
 (batchId=98)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=225)
org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion 
(batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 
(batchId=228)
org.apache.hadoop.hive.ql.TestAutoPurgeTables.testNoAutoPurge (batchId=233)
org.apache.hadoop.hive.ql.TestAutoPurgeTables.testPartitionedNoAutoPurge 
(batchId=233)
org.apache.hadoop.hive.ql.TestAutoPurgeTables.testTruncateInvalidAutoPurge 
(batchId=233)
org.apache.hadoop.hive.ql.TestAutoPurgeTables.testTruncatePartitionedNoAutoPurge
 (batchId=233)
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232)
org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate
 (batchId=231)
org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys
 (batchId=231)
org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testDifferentFiltersAreNotMatched
 (batchId=298)
org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testSameFiltersMatched 
(batchId=298)
org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
 (batchId=298)
org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
 (batchId=298)
org.apache.hadoop.hive.ql.plan.mapping.TestReOptimization.testNotReExecutedIfAssertionError
 (batchId=298)
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgressParallel 
(batchId=235)

[jira] [Updated] (HIVE-19279) remove magic directory skipping from CopyTask

2018-04-27 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-19279:
--
Component/s: Transactions

> remove magic directory skipping from CopyTask
> -
>
> Key: HIVE-19279
> URL: https://issues.apache.org/jira/browse/HIVE-19279
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Priority: Major
>
> Follow up from HIVE-17657.
> Code exists in copytask that copies files (fancy that); however, when listing 
> the files, if a single directory exists at the source with no other files, it 
> will skip the directory and copy the files inside instead.
> In various tests this directory is either the "data" directory from export, 
> or some random partition directory ("foo=bar") that, if not skipped, makes it 
> into the real partition directory at the destination.
> The directory is not skipped if it's not by itself, i.e. any other files or 
> directories are present.
> This seems brittle. Caller of the CopyTask should specify exactly what it 
> wants copied instead of relying on this behavior.
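For illustration only, a minimal standalone Java sketch (not the actual CopyTask code; the class and method names are made up) of the skipping behavior described above:
{noformat}
import java.io.File;

class CopySourceListingSketch {
  // If the source holds exactly one entry and that entry is a directory, the
  // listing "skips" it and returns its children instead; otherwise the listing
  // is returned as-is. This mirrors the behavior described in the issue.
  static File[] filesToCopy(File source) {
    File[] entries = source.listFiles();
    if (entries != null && entries.length == 1 && entries[0].isDirectory()) {
      return entries[0].listFiles(); // lone directory: copy its contents instead
    }
    return entries;
  }
}
{noformat}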



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19345) create table fails with NPE on branch-2.3

2018-04-27 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-19345:
---
Summary: create table fails with NPE on branch-2.3  (was: createSources 
fails on branch-2.3)

> create table fails with NPE on branch-2.3
> -
>
> Key: HIVE-19345
> URL: https://issues.apache.org/jira/browse/HIVE-19345
> Project: Hive
>  Issue Type: Bug
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Major
>
> I see the following NPE while the source tables are being created when I try 
> to run a qtest.
> {noformat}
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.estimateRowSizeFromSchema(StatsUtils.java:546)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.getNumRows(StatsUtils.java:183)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:207)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:157)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:145)
> at 
> org.apache.hadoop.hive.ql.optimizer.stats.annotation.StatsRulesProcFactory$TableScanStatsRule.process(StatsRulesProcFactory.java:130)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:89)
> at 
> org.apache.hadoop.hive.ql.lib.LevelOrderWalker.walk(LevelOrderWalker.java:143)
> at 
> org.apache.hadoop.hive.ql.lib.LevelOrderWalker.startWalking(LevelOrderWalker.java:122)
> at 
> org.apache.hadoop.hive.ql.optimizer.stats.annotation.AnnotateWithStatistics.transform(AnnotateWithStatistics.java:78)
> at 
> org.apache.hadoop.hive.ql.parse.spark.SparkCompiler.runStatsAnnotation(SparkCompiler.java:240)
> at 
> org.apache.hadoop.hive.ql.parse.spark.SparkCompiler.optimizeOperatorPlan(SparkCompiler.java:119)
> at 
> org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:140)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11273)
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:286)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:512)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:336)
> at 
> org.apache.hadoop.hive.ql.QTestUtil.createSources(QTestUtil.java:1096)
> at 
> org.apache.hadoop.hive.ql.QTestUtil.createSources(QTestUtil.java:1073)
> at 
> org.apache.hadoop.hive.cli.control.CoreCliDriver$3.invokeInternal(CoreCliDriver.java:81)
> at 
> org.apache.hadoop.hive.cli.control.CoreCliDriver$3.invokeInternal(CoreCliDriver.java:78)
> at 
> org.apache.hadoop.hive.util.ElapsedTimeLoggingWrapper.invoke(ElapsedTimeLoggingWrapper.java:33)
> at 
> org.apache.hadoop.hive.cli.control.CoreCliDriver.beforeClass(CoreCliDriver.java:84)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19345) createSources fails on branch-2.3

2018-04-27 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar reassigned HIVE-19345:
--


> createSources fails on branch-2.3
> -
>
> Key: HIVE-19345
> URL: https://issues.apache.org/jira/browse/HIVE-19345
> Project: Hive
>  Issue Type: Bug
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Major
>
> I see the following NPE while the source tables are being created when I try 
> to run a qtest.
> {noformat}
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.estimateRowSizeFromSchema(StatsUtils.java:546)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.getNumRows(StatsUtils.java:183)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:207)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:157)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:145)
> at 
> org.apache.hadoop.hive.ql.optimizer.stats.annotation.StatsRulesProcFactory$TableScanStatsRule.process(StatsRulesProcFactory.java:130)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:89)
> at 
> org.apache.hadoop.hive.ql.lib.LevelOrderWalker.walk(LevelOrderWalker.java:143)
> at 
> org.apache.hadoop.hive.ql.lib.LevelOrderWalker.startWalking(LevelOrderWalker.java:122)
> at 
> org.apache.hadoop.hive.ql.optimizer.stats.annotation.AnnotateWithStatistics.transform(AnnotateWithStatistics.java:78)
> at 
> org.apache.hadoop.hive.ql.parse.spark.SparkCompiler.runStatsAnnotation(SparkCompiler.java:240)
> at 
> org.apache.hadoop.hive.ql.parse.spark.SparkCompiler.optimizeOperatorPlan(SparkCompiler.java:119)
> at 
> org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:140)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11273)
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:286)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:512)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:336)
> at 
> org.apache.hadoop.hive.ql.QTestUtil.createSources(QTestUtil.java:1096)
> at 
> org.apache.hadoop.hive.ql.QTestUtil.createSources(QTestUtil.java:1073)
> at 
> org.apache.hadoop.hive.cli.control.CoreCliDriver$3.invokeInternal(CoreCliDriver.java:81)
> at 
> org.apache.hadoop.hive.cli.control.CoreCliDriver$3.invokeInternal(CoreCliDriver.java:78)
> at 
> org.apache.hadoop.hive.util.ElapsedTimeLoggingWrapper.invoke(ElapsedTimeLoggingWrapper.java:33)
> at 
> org.apache.hadoop.hive.cli.control.CoreCliDriver.beforeClass(CoreCliDriver.java:84)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19322) broken test: TestNegativeMinimrCliDriver#testCliDriver[minimr_broken_pipe]

2018-04-27 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457218#comment-16457218
 ] 

Eugene Koifman commented on HIVE-19322:
---

[~jdere] could you review please

> broken test: TestNegativeMinimrCliDriver#testCliDriver[minimr_broken_pipe]
> --
>
> Key: HIVE-19322
> URL: https://issues.apache.org/jira/browse/HIVE-19322
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test, Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-19322.02.patch
>
>
> this is apparently caused by HIVE-18739, specifically changing
> {{private static ThreadLocal tss}} in {{SessionState}} to 
> {{private static InheritableThreadLocal tss}}
> need to figure out why this is.  
> Looks like
> {{TestNegativeMinimrCliDriver 
> -Dqfile=mapreduce_stack_trace_turnoff.q,mapreduce_stack_trace.q,cluster_tasklog_retrieval.q}}
> are also broken by this
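For context, a minimal standalone Java sketch (not Hive code; names are made up) of the behavioral difference between the two field types named above - a child thread sees a value set in an {{InheritableThreadLocal}} by its parent, but not one set in a plain {{ThreadLocal}}:
{noformat}
public class InheritableTlDemo {
  private static final ThreadLocal<String> plain = new ThreadLocal<>();
  private static final InheritableThreadLocal<String> inheritable =
      new InheritableThreadLocal<>();

  public static void main(String[] args) throws InterruptedException {
    plain.set("session-A");
    inheritable.set("session-A");

    Thread child = new Thread(() -> System.out.println(
        "plain=" + plain.get()                    // prints null
        + ", inheritable=" + inheritable.get())); // prints session-A
    child.start();
    child.join();
  }
}
{noformat}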



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19312) MM tables don't work with BucketizedHIF

2018-04-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457216#comment-16457216
 ] 

Hive QA commented on HIVE-19312:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
44s{color} | {color:red} ql: The patch generated 4 new + 21 unchanged - 0 fixed 
= 25 total (was 21) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10539/dev-support/hive-personality.sh
 |
| git revision | master / cbc3863 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10539/yetus/diff-checkstyle-ql.txt
 |
| modules | C: itests ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10539/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> MM tables don't work with BucketizedHIF
> ---
>
> Key: HIVE-19312
> URL: https://issues.apache.org/jira/browse/HIVE-19312
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19312.01.patch, HIVE-19312.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-19322) broken test: TestNegativeMinimrCliDriver#testCliDriver[minimr_broken_pipe]

2018-04-27 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457211#comment-16457211
 ] 

Eugene Koifman edited comment on HIVE-19322 at 4/27/18 11:31 PM:
-

{noformat}
commit 699c5768c88967abd507122d775bd5955ca45218
Author: Eugene Koifman 
Date:   Tue Apr 17 18:23:13 2018 -0700

HIVE-18739 - Add support for Import/Export from Acid table (Eugene Koifman, 
reviewed by Sergey Shelukhin)

commit bd6b582581de461e620ac9220b862e938dfec8cd
Author: Ashutosh Chauhan 
Date:   Tue Apr 17 18:17:35 2018 -0700

HIVE-19235 : Update golden files for Minimr tests

commit 4cfec3eb9c4c47df6195692ef535b42f9ac36588
Author: Vaibhav Gumashta 
Date:   Tue Apr 17 12:53:40 2018 -0700

HIVE-19126: CachedStore: Use memory estimation to limit cache size during 
prewarm (Vaibhav Gumashta reviewed by Thejas Nair)
{noformat}

If I check out to just before HIVE-18739, all of the following tests pass; they 
fail if I include HIVE-18739. 
{noformat}
mvn test -Dtest=TestNegativeMinimrCliDriver 
-Dqfile=cluster_tasklog_retrieval.q,mapreduce_stack_trace.q,mapreduce_stack_trace_turnoff.q,minimr_broken_pipe.q;
mvn test -Dtest=TestNegativeCliDriver 
-Dqfile=subquery_corr_in_agg.q,subquery_scalar_corr_multi_rows.q,udf_assert_true.q,script_broken_pipe2.q
{noformat}
The attached patch fixes all the tests. However, if I check out HEAD, the 
TestNegativeMinimrCliDriver tests are still fixed by the patch but none of the 
TestNegativeCliDriver tests are, even though the output diff is exactly the 
same. TestNegativeCliDriver clearly could be improved wrt error logging.

So TestNegativeCliDriver will need to be looked at separately in another ticket.


was (Author: ekoifman):
{noformat}
commit 699c5768c88967abd507122d775bd5955ca45218
Author: Eugene Koifman 
Date:   Tue Apr 17 18:23:13 2018 -0700

HIVE-18739 - Add support for Import/Export from Acid table (Eugene Koifman, 
reviewed by Sergey Shelukhin)

commit bd6b582581de461e620ac9220b862e938dfec8cd
Author: Ashutosh Chauhan 
Date:   Tue Apr 17 18:17:35 2018 -0700

HIVE-19235 : Update golden files for Minimr tests

commit 4cfec3eb9c4c47df6195692ef535b42f9ac36588
Author: Vaibhav Gumashta 
Date:   Tue Apr 17 12:53:40 2018 -0700

HIVE-19126: CachedStore: Use memory estimation to limit cache size during 
prewarm (Vaibhav Gumashta reviewed by Thejas Nair)
{noformat}

If I checkout to just before HIVE-18739 then all the following tests pass and 
fail if I include HIVE-18739. 
mvn test -Dtest=TestNegativeMinimrCliDriver 
-Dqfile=cluster_tasklog_retrieval.q,mapreduce_stack_trace.q,mapreduce_stack_trace_turnoff.q,minimr_broken_pipe.q;
mvn test -Dtest=TestNegativeCliDriver 
-Dqfile=subquery_corr_in_agg.q,subquery_scalar_corr_multi_rows.q,udf_assert_true.q,script_broken_pipe2.q

The attached patch fixes all the tests, however if I checkout HEAD the tests 
TestNegativeMinimrCliDriver are still fixed by the patch but not any of 
TestNegativeCliDriver tests even though the diff that is output is exactly the 
same.  TestNegativeCliDriver clearly could be improved wrt error logging.

So TestNegativeCliDriver will need to be looked at separately in another ticket.

> broken test: TestNegativeMinimrCliDriver#testCliDriver[minimr_broken_pipe]
> --
>
> Key: HIVE-19322
> URL: https://issues.apache.org/jira/browse/HIVE-19322
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test, Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-19322.02.patch
>
>
> this is apparently caused by HIVE-18739, specifically changing
> {{private static ThreadLocal tss}} in {{SessionState}} to 
> {{private static InheritableThreadLocal tss}}
> need to figure out why this is.  
> Looks like
> {{TestNegativeMinimrCliDriver 
> -Dqfile=mapreduce_stack_trace_turnoff.q,mapreduce_stack_trace.q,cluster_tasklog_retrieval.q}}
> are also broken by this



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19322) broken test: TestNegativeMinimrCliDriver#testCliDriver[minimr_broken_pipe]

2018-04-27 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457211#comment-16457211
 ] 

Eugene Koifman commented on HIVE-19322:
---

{noformat}
commit 699c5768c88967abd507122d775bd5955ca45218
Author: Eugene Koifman 
Date:   Tue Apr 17 18:23:13 2018 -0700

HIVE-18739 - Add support for Import/Export from Acid table (Eugene Koifman, 
reviewed by Sergey Shelukhin)

commit bd6b582581de461e620ac9220b862e938dfec8cd
Author: Ashutosh Chauhan 
Date:   Tue Apr 17 18:17:35 2018 -0700

HIVE-19235 : Update golden files for Minimr tests

commit 4cfec3eb9c4c47df6195692ef535b42f9ac36588
Author: Vaibhav Gumashta 
Date:   Tue Apr 17 12:53:40 2018 -0700

HIVE-19126: CachedStore: Use memory estimation to limit cache size during 
prewarm (Vaibhav Gumashta reviewed by Thejas Nair)
{noformat}

If I check out to just before HIVE-18739, all of the following tests pass; they 
fail if I include HIVE-18739. 
mvn test -Dtest=TestNegativeMinimrCliDriver 
-Dqfile=cluster_tasklog_retrieval.q,mapreduce_stack_trace.q,mapreduce_stack_trace_turnoff.q,minimr_broken_pipe.q;
mvn test -Dtest=TestNegativeCliDriver 
-Dqfile=subquery_corr_in_agg.q,subquery_scalar_corr_multi_rows.q,udf_assert_true.q,script_broken_pipe2.q

The attached patch fixes all the tests. However, if I check out HEAD, the 
TestNegativeMinimrCliDriver tests are still fixed by the patch but none of the 
TestNegativeCliDriver tests are, even though the output diff is exactly the 
same. TestNegativeCliDriver clearly could be improved wrt error logging.

So TestNegativeCliDriver will need to be looked at separately in another ticket.

> broken test: TestNegativeMinimrCliDriver#testCliDriver[minimr_broken_pipe]
> --
>
> Key: HIVE-19322
> URL: https://issues.apache.org/jira/browse/HIVE-19322
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test, Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-19322.02.patch
>
>
> this is apparently caused by HIVE-18739, specifically changing
> {{private static ThreadLocal tss}} in {{SessionState}} to 
> {{private static InheritableThreadLocal tss}}
> need to figure out why this is.  
> Looks like
> {{TestNegativeMinimrCliDriver 
> -Dqfile=mapreduce_stack_trace_turnoff.q,mapreduce_stack_trace.q,cluster_tasklog_retrieval.q}}
> are also broken by this



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19322) broken test: TestNegativeMinimrCliDriver#testCliDriver[minimr_broken_pipe]

2018-04-27 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-19322:
--
Attachment: HIVE-19322.02.patch

> broken test: TestNegativeMinimrCliDriver#testCliDriver[minimr_broken_pipe]
> --
>
> Key: HIVE-19322
> URL: https://issues.apache.org/jira/browse/HIVE-19322
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test, Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-19322.02.patch
>
>
> this is apparently caused by HIVE-18739, specifically changing
> {{private static ThreadLocal tss}} in {{SessionState}} to 
> {{private static InheritableThreadLocal tss}}
> need to figure out why this is.  
> Looks like
> {{TestNegativeMinimrCliDriver 
> -Dqfile=mapreduce_stack_trace_turnoff.q,mapreduce_stack_trace.q,cluster_tasklog_retrieval.q}}
> are also broken by this



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19322) broken test: TestNegativeMinimrCliDriver#testCliDriver[minimr_broken_pipe]

2018-04-27 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-19322:
--
Status: Patch Available  (was: Open)

> broken test: TestNegativeMinimrCliDriver#testCliDriver[minimr_broken_pipe]
> --
>
> Key: HIVE-19322
> URL: https://issues.apache.org/jira/browse/HIVE-19322
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test, Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-19322.02.patch
>
>
> this is apparently caused by HIVE-18739, specifically changing
> {{private static ThreadLocal tss}} in {{SessionState}} to 
> {{private static InheritableThreadLocal tss}}
> need to figure out why this is.  
> Looks like
> {{TestNegativeMinimrCliDriver 
> -Dqfile=mapreduce_stack_trace_turnoff.q,mapreduce_stack_trace.q,cluster_tasklog_retrieval.q}}
> are also broken by this



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19322) broken test: TestNegativeMinimrCliDriver#testCliDriver[minimr_broken_pipe]

2018-04-27 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-19322:
--
Attachment: (was: HIVE-19322.01.patch)

> broken test: TestNegativeMinimrCliDriver#testCliDriver[minimr_broken_pipe]
> --
>
> Key: HIVE-19322
> URL: https://issues.apache.org/jira/browse/HIVE-19322
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test, Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
>
> this is apparently caused by HIVE-18739, specifically changing
> {{private static ThreadLocal tss}} in {{SessionState}} to 
> {{private static InheritableThreadLocal tss}}
> need to figure out why this is.  
> Looks like
> {{TestNegativeMinimrCliDriver 
> -Dqfile=mapreduce_stack_trace_turnoff.q,mapreduce_stack_trace.q,cluster_tasklog_retrieval.q}}
> are also broken by this



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19324) improve YARN queue check error message in Tez pool

2018-04-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457195#comment-16457195
 ] 

Hive QA commented on HIVE-19324:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12920921/HIVE-19324.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 56 failed/errored test(s), 14284 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=93)

[infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q]
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed 
out) (batchId=217)
TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] 
(batchId=17)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_1]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_smb] 
(batchId=176)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_5]
 (batchId=154)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_stats]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_empty]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat]
 (batchId=183)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=105)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[cluster_tasklog_retrieval]
 (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace]
 (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace_turnoff]
 (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[minimr_broken_pipe]
 (batchId=98)
org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion 
(batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 
(batchId=228)
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232)
org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate
 (batchId=231)
org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys
 (batchId=231)
org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testDifferentFiltersAreNotMatched
 (batchId=298)
org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testSameFiltersMatched 
(batchId=298)
org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
 (batchId=298)
org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
 (batchId=298)
org.apache.hadoop.hive.ql.plan.mapping.TestReOptimization.testNotReExecutedIfAssertionError
 (batchId=298)
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgressParallel 
(batchId=235)
org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=240)
org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239)
org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill
 (batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes 
(batchId=242)

[jira] [Assigned] (HIVE-19344) Change default value of msck.repair.batch.size

2018-04-27 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar reassigned HIVE-19344:
--


> Change default value of msck.repair.batch.size 
> ---
>
> Key: HIVE-19344
> URL: https://issues.apache.org/jira/browse/HIVE-19344
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Minor
>
> {{msck.repair.batch.size}} defaults to 0, which means msck will try to add all 
> the partitions in one API call to HMS. This can potentially put huge memory 
> pressure on HMS. The default value should be changed to a reasonable number 
> so that, in the case of a large number of partitions, we can batch the addition 
> of partitions. The same goes for {{msck.repair.batch.max.retries}}.
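As an illustration of why batching bounds the memory pressure, here is a minimal generic Java sketch (not Hive's actual msck code; {{PartitionSink}} and {{addInBatches}} are hypothetical names):
{noformat}
import java.util.ArrayList;
import java.util.List;

public class BatchedAddSketch {
  // Hypothetical stand-in for the metastore bulk-add call; not Hive's real API.
  interface PartitionSink {
    void addPartitions(List<String> partitionSpecs);
  }

  // batchSize <= 0 reproduces the current default: a single call with every
  // partition, which is what can overload HMS for very large partition counts.
  static void addInBatches(List<String> specs, int batchSize, PartitionSink sink) {
    if (batchSize <= 0) {
      sink.addPartitions(specs);
      return;
    }
    for (int i = 0; i < specs.size(); i += batchSize) {
      List<String> batch = new ArrayList<>(
          specs.subList(i, Math.min(i + batchSize, specs.size())));
      sink.addPartitions(batch); // bounded work per HMS call
    }
  }
}
{noformat}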



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19322) broken test: TestNegativeMinimrCliDriver#testCliDriver[minimr_broken_pipe]

2018-04-27 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-19322:
--
Attachment: HIVE-19322.01.patch

> broken test: TestNegativeMinimrCliDriver#testCliDriver[minimr_broken_pipe]
> --
>
> Key: HIVE-19322
> URL: https://issues.apache.org/jira/browse/HIVE-19322
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test, Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-19322.01.patch
>
>
> this is apparently caused by HIVE-18739, specifically changing
> {{private static ThreadLocal tss}} in {{SessionState}} to 
> {{private static InheritableThreadLocal tss}}
> need to figure out why this is.  
> Looks like
> {{TestNegativeMinimrCliDriver 
> -Dqfile=mapreduce_stack_trace_turnoff.q,mapreduce_stack_trace.q,cluster_tasklog_retrieval.q}}
> are also broken by this



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19054) Function replication shall use "hive.repl.replica.functions.root.dir" as root

2018-04-27 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457166#comment-16457166
 ] 

Thejas M Nair commented on HIVE-19054:
--

+1 pending tests

> Function replication shall use "hive.repl.replica.functions.root.dir" as root
> -
>
> Key: HIVE-19054
> URL: https://issues.apache.org/jira/browse/HIVE-19054
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-19054.1.patch, HIVE-19054.2.patch, 
> HIVE-19054.3.patch, HIVE-19054.4.patch
>
>
> It wrongly uses fs.defaultFS as the root, ignoring the 
> "hive.repl.replica.functions.root.dir" definition, thus preventing replication 
> to a cloud destination.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19324) improve YARN queue check error message in Tez pool

2018-04-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457152#comment-16457152
 ] 

Hive QA commented on HIVE-19324:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10538/dev-support/hive-personality.sh
 |
| git revision | master / cbc3863 |
| Default Java | 1.8.0_111 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10538/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> improve YARN queue check error message in Tez pool
> --
>
> Key: HIVE-19324
> URL: https://issues.apache.org/jira/browse/HIVE-19324
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepesh Khandelwal
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19324.01.patch, HIVE-19324.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18910) Migrate to Murmur hash for shuffle and bucketing

2018-04-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457114#comment-16457114
 ] 

Hive QA commented on HIVE-18910:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12920912/HIVE-18910.44.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10537/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10537/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10537/

Messages:
{noformat}
 This message was trimmed, see log for full details 
error: a/ql/src/test/results/clientpositive/spark/union_remove_7.q.out: does 
not exist in index
error: a/ql/src/test/results/clientpositive/spark/union_remove_8.q.out: does 
not exist in index
error: a/ql/src/test/results/clientpositive/spark/union_remove_9.q.out: does 
not exist in index
error: a/ql/src/test/results/clientpositive/spark/vectorization_0.q.out: does 
not exist in index
error: a/ql/src/test/results/clientpositive/stats0.q.out: does not exist in 
index
error: a/ql/src/test/results/clientpositive/stats1.q.out: does not exist in 
index
error: a/ql/src/test/results/clientpositive/stats10.q.out: does not exist in 
index
error: a/ql/src/test/results/clientpositive/stats16.q.out: does not exist in 
index
error: a/ql/src/test/results/clientpositive/stats3.q.out: does not exist in 
index
error: a/ql/src/test/results/clientpositive/stats5.q.out: does not exist in 
index
error: a/ql/src/test/results/clientpositive/stats_empty_partition2.q.out: does 
not exist in index
error: a/ql/src/test/results/clientpositive/stats_invalidation.q.out: does not 
exist in index
error: a/ql/src/test/results/clientpositive/stats_list_bucket.q.out: does not 
exist in index
error: a/ql/src/test/results/clientpositive/stats_noscan_2.q.out: does not 
exist in index
error: a/ql/src/test/results/clientpositive/statsfs.q.out: does not exist in 
index
error: 
a/ql/src/test/results/clientpositive/temp_table_display_colstats_tbllvl.q.out: 
does not exist in index
error: 
a/ql/src/test/results/clientpositive/tez/acid_vectorization_original_tez.q.out: 
does not exist in index
error: a/ql/src/test/results/clientpositive/tez/explainanalyze_4.q.out: does 
not exist in index
error: a/ql/src/test/results/clientpositive/tez/explainanalyze_5.q.out: does 
not exist in index
error: a/ql/src/test/results/clientpositive/transform_ppr1.q.out: does not 
exist in index
error: a/ql/src/test/results/clientpositive/transform_ppr2.q.out: does not 
exist in index
error: a/ql/src/test/results/clientpositive/truncate_column.q.out: does not 
exist in index
error: a/ql/src/test/results/clientpositive/truncate_column_buckets.q.out: does 
not exist in index
error: a/ql/src/test/results/clientpositive/truncate_column_list_bucket.q.out: 
does not exist in index
error: a/ql/src/test/results/clientpositive/udf_explode.q.out: does not exist 
in index
error: a/ql/src/test/results/clientpositive/udtf_explode.q.out: does not exist 
in index
error: a/ql/src/test/results/clientpositive/unicode_comments.q.out: does not 
exist in index
error: a/ql/src/test/results/clientpositive/unicode_notation.q.out: does not 
exist in index
error: a/ql/src/test/results/clientpositive/union22.q.out: does not exist in 
index
error: a/ql/src/test/results/clientpositive/union24.q.out: does not exist in 
index
error: a/ql/src/test/results/clientpositive/union_pos_alias.q.out: does not 
exist in index
error: a/ql/src/test/results/clientpositive/union_ppr.q.out: does not exist in 
index
error: a/ql/src/test/results/clientpositive/union_remove_1.q.out: does not 
exist in index
error: a/ql/src/test/results/clientpositive/union_remove_10.q.out: does not 
exist in index
error: a/ql/src/test/results/clientpositive/union_remove_11.q.out: does not 
exist in index
error: a/ql/src/test/results/clientpositive/union_remove_12.q.out: does not 
exist in index
error: a/ql/src/test/results/clientpositive/union_remove_13.q.out: does not 
exist in index
error: a/ql/src/test/results/clientpositive/union_remove_14.q.out: does not 
exist in index
error: a/ql/src/test/results/clientpositive/union_remove_15.q.out: does not 
exist in index
error: a/ql/src/test/results/clientpositive/union_remove_16.q.out: does not 
exist in index
error: a/ql/src/test/results/clientpositive/union_remove_17.q.out: does not 
exist in index
error: a/ql/src/test/results/clientpositive/union_remove_18.q.out: does not 
exist in index
error: a/ql/src/test/results/clientpositive/union_remove_19.q.out: does not 
exist in index
error: a/ql/src/test/results/clientpositive/union_remove_2.q.out: does not 
exist in index
error: a/ql/src/test/results/clientpositive/union_remove_20.q.out: does not 
exist in index
error: a/ql/src/test/results/clientpositive/union_remove_21.q.out: does not 
exist in index
error: 

[jira] [Updated] (HIVE-19334) Use actual file size rather than stats for fetch task optimization with external tables

2018-04-27 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-19334:
--
Assignee: Jason Dere
  Status: Patch Available  (was: Open)

> Use actual file size rather than stats for fetch task optimization with 
> external tables
> ---
>
> Key: HIVE-19334
> URL: https://issues.apache.org/jira/browse/HIVE-19334
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-19334.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19334) Use actual file size rather than stats for fetch task optimization with external tables

2018-04-27 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-19334:
--
Attachment: HIVE-19334.1.patch

> Use actual file size rather than stats for fetch task optimization with 
> external tables
> ---
>
> Key: HIVE-19334
> URL: https://issues.apache.org/jira/browse/HIVE-19334
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jason Dere
>Priority: Major
> Attachments: HIVE-19334.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19327) qroupby_rollup_empty.q fails for insert-only transactional tables

2018-04-27 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457112#comment-16457112
 ] 

Sergey Shelukhin commented on HIVE-19327:
-

Yeah, but it may not handle all cases correctly.
The basic idea is that Hive doesn't run operators for empty splits, but for GBY 
we still want to run them to generate a summary (e.g. for rollup).
Usually Hive generates an empty split with 0 rows in such cases to force the 
operators to run.
This patch returns the original directory of the MM table if there are no valid 
MM directories.
It should definitely work ok for the base case in this test - when there are no 
valid MM delta directories because there's no data at all (it will be 
equivalent to the custom 0-row split).
But unless I'm missing something, it won't work correctly if, e.g., there are 
in-progress/aborted txns: while there are no valid MM deltas, the original 
directory is not empty. The split will then just specify the table directory 
itself, and Tez will read all these directories recursively.
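A minimal sketch (made-up names, not the actual patch) of the fallback being discussed, which also shows why it only matches the 0-row split when the original directory is truly empty:
{noformat}
import java.util.Collections;
import java.util.List;

class MmSplitDirsSketch {
  // If there are valid MM delta directories, split over those; otherwise fall
  // back to the table's original directory so at least one split is produced
  // and the GBY/rollup operators still run. If that directory also contains
  // aborted or in-progress txn dirs, this fallback would read them too.
  static List<String> dirsToSplit(String tableDir, List<String> validMmDeltaDirs) {
    return validMmDeltaDirs.isEmpty()
        ? Collections.singletonList(tableDir)
        : validMmDeltaDirs;
  }
}
{noformat}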

> qroupby_rollup_empty.q fails for insert-only transactional tables
> -
>
> Key: HIVE-19327
> URL: https://issues.apache.org/jira/browse/HIVE-19327
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Steve Yeom
>Assignee: Steve Yeom
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19327.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19338) isExplicitAnalyze method may be incorrect in BasicStatsTask

2018-04-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457106#comment-16457106
 ] 

Hive QA commented on HIVE-19338:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12920909/HIVE-19338.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 55 failed/errored test(s), 14284 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=93)

[infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q]
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed 
out) (batchId=217)
TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] 
(batchId=17)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_5]
 (batchId=154)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_stats]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1]
 (batchId=183)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=105)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[cluster_tasklog_retrieval]
 (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace]
 (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace_turnoff]
 (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[minimr_broken_pipe]
 (batchId=98)
org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion 
(batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 
(batchId=228)
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232)
org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate
 (batchId=231)
org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys
 (batchId=231)
org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testDifferentFiltersAreNotMatched
 (batchId=298)
org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testSameFiltersMatched 
(batchId=298)
org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
 (batchId=298)
org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
 (batchId=298)
org.apache.hadoop.hive.ql.plan.mapping.TestReOptimization.testNotReExecutedIfAssertionError
 (batchId=298)
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=235)
org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=240)
org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239)
org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveBackKill 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryElapsedTime
 (batchId=242)

[jira] [Commented] (HIVE-19327) qroupby_rollup_empty.q fails for insert-only transactional tables

2018-04-27 Thread Steve Yeom (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457107#comment-16457107
 ] 

Steve Yeom commented on HIVE-19327:
---

Yes, but the test case for this jira covers the main success/happy-path scenario.
What I am going to check next is the case where the empty table has a pre-existing
aborted-transaction-related directory.

> qroupby_rollup_empty.q fails for insert-only transactional tables
> -
>
> Key: HIVE-19327
> URL: https://issues.apache.org/jira/browse/HIVE-19327
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Steve Yeom
>Assignee: Steve Yeom
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19327.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19327) qroupby_rollup_empty.q fails for insert-only transactional tables

2018-04-27 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457104#comment-16457104
 ] 

Prasanth Jayachandran commented on HIVE-19327:
--

Isn't this fix for the failing test?

> qroupby_rollup_empty.q fails for insert-only transactional tables
> -
>
> Key: HIVE-19327
> URL: https://issues.apache.org/jira/browse/HIVE-19327
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Steve Yeom
>Assignee: Steve Yeom
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19327.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19211) New streaming ingest API and support for dynamic partitioning

2018-04-27 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457102#comment-16457102
 ] 

Prasanth Jayachandran commented on HIVE-19211:
--

Rebased patch. [~ekoifman] can you please take another look?

> New streaming ingest API and support for dynamic partitioning
> -
>
> Key: HIVE-19211
> URL: https://issues.apache.org/jira/browse/HIVE-19211
> Project: Hive
>  Issue Type: Sub-task
>  Components: Streaming
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-19211.1.patch, HIVE-19211.2.patch, 
> HIVE-19211.3.patch, HIVE-19211.4.patch, HIVE-19211.5.patch, 
> HIVE-19211.6.patch, HIVE-19211.7.patch, HIVE-19211.8.patch, HIVE-19211.9.patch
>
>
> - New streaming API under new hive sub-module
> - Dynamic partitioning support
> - Auto-rollover transactions
> - Automatic heartbeating
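
For illustration, a rough usage sketch of the new API, pieced together from the
bullet points above; the class and builder method names (HiveStreamingConnection,
StrictDelimitedInputWriter) are taken from the attached patch and may still change,
and the database/table names are placeholders:
{code:java}
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hive.streaming.HiveStreamingConnection;
import org.apache.hive.streaming.StrictDelimitedInputWriter;

public class StreamingIngestExample {
  public static void main(String[] args) throws Exception {
    HiveConf conf = new HiveConf();

    // Writer that parses delimited text records; column names/types come from the table.
    StrictDelimitedInputWriter writer = StrictDelimitedInputWriter.newBuilder()
        .withFieldDelimiter(',')
        .build();

    // Dynamic partitioning: no static partition values are supplied, the
    // partition is derived from the record itself.
    HiveStreamingConnection connection = HiveStreamingConnection.newBuilder()
        .withDatabase("default")
        .withTable("alerts")
        .withAgentInfo("example-agent")
        .withRecordWriter(writer)
        .withHiveConf(conf)
        .connect();

    connection.beginTransaction();
    connection.write("1,CRITICAL,2018-04-27".getBytes());
    connection.write("2,WARN,2018-04-27".getBytes());
    connection.commitTransaction();   // transaction batches roll over automatically

    connection.close();
  }
}
{code}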



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19211) New streaming ingest API and support for dynamic partitioning

2018-04-27 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-19211:
-
Attachment: HIVE-19211.9.patch

> New streaming ingest API and support for dynamic partitioning
> -
>
> Key: HIVE-19211
> URL: https://issues.apache.org/jira/browse/HIVE-19211
> Project: Hive
>  Issue Type: Sub-task
>  Components: Streaming
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-19211.1.patch, HIVE-19211.2.patch, 
> HIVE-19211.3.patch, HIVE-19211.4.patch, HIVE-19211.5.patch, 
> HIVE-19211.6.patch, HIVE-19211.7.patch, HIVE-19211.8.patch, HIVE-19211.9.patch
>
>
> - New streaming API under new hive sub-module
> - Dynamic partitioning support
> - Auto-rollover transactions
> - Automatic heartbeating



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19307) Support ArrowOutputStream in LlapOutputFormatService

2018-04-27 Thread Eric Wohlstadter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Wohlstadter updated HIVE-19307:

Attachment: HIVE-19307.2.patch

> Support ArrowOutputStream in LlapOutputFormatService
> 
>
> Key: HIVE-19307
> URL: https://issues.apache.org/jira/browse/HIVE-19307
> Project: Hive
>  Issue Type: Task
>  Components: llap
>Reporter: Eric Wohlstadter
>Assignee: Eric Wohlstadter
>Priority: Major
> Attachments: HIVE-19307.2.patch
>
>
> Support pushing Arrow batches through 
> org.apache.arrow.vector.ipc.ArrowOutputStream in LlapOutputFormatService.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19307) Support ArrowOutputStream in LlapOutputFormatService

2018-04-27 Thread Eric Wohlstadter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Wohlstadter updated HIVE-19307:

Attachment: (was: HIVE-19307.1.patch)

> Support ArrowOutputStream in LlapOutputFormatService
> 
>
> Key: HIVE-19307
> URL: https://issues.apache.org/jira/browse/HIVE-19307
> Project: Hive
>  Issue Type: Task
>  Components: llap
>Reporter: Eric Wohlstadter
>Assignee: Eric Wohlstadter
>Priority: Major
>
> Support pushing Arrow batches through 
> org.apache.arrow.vector.ipc.ArrowOutputStream in LlapOutputFormatService.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19327) qroupby_rollup_empty.q fails for insert-only transactional tables

2018-04-27 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457083#comment-16457083
 ] 

Prasanth Jayachandran commented on HIVE-19327:
--

lgtm, +1. Pending tests

> qroupby_rollup_empty.q fails for insert-only transactional tables
> -
>
> Key: HIVE-19327
> URL: https://issues.apache.org/jira/browse/HIVE-19327
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Steve Yeom
>Assignee: Steve Yeom
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19327.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19343) Replication: The file uris being dumped should contain information about the uri of the source cluster's cm root

2018-04-27 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457074#comment-16457074
 ] 

Vaibhav Gumashta commented on HIVE-19343:
-

Initial patch for review; will add tests and update

> Replication: The file uris being dumped should contain information about the 
> uri of the source cluster's cm root
> 
>
> Key: HIVE-19343
> URL: https://issues.apache.org/jira/browse/HIVE-19343
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-19343.1.patch
>
>
> In replication v2, we use change manager (the location is specified by 
> cmroot: {{hive.repl.cmrootdir}}) to archive deleted files from the source 
> cluster so that they can later be copied on the target cluster. When files 
> are read from the cmroot, the target needs to know the appropriate file 
> system. This patch adds the fs information of the cmroot on the source to the 
> filenames that get written in the repldump command.  
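
To make the idea concrete, a hypothetical sketch: carry the source cluster's cm
root URI inside each dumped file entry so the target can pick the right FileSystem
when it falls back to the cm root. The separator and helper names below are
illustrative only, not the actual ReplChangeManager encoding used by the patch:
{code:java}
import java.net.URI;

public class CmRootEncodingSketch {
  private static final String SEP = "#";   // hypothetical separator

  // Encode: <file uri>#<checksum>#<cm root uri of the source cluster>
  static String encode(String fileUri, String checksum, URI cmRoot) {
    return fileUri + SEP + checksum + SEP + cmRoot.toString();
  }

  // Decode on the target: if the original file is gone, fall back to the cm root,
  // using its scheme/authority to resolve the correct FileSystem.
  static String[] decode(String entry) {
    return entry.split(SEP, 3);   // [fileUri, checksum, cmRootUri]
  }

  public static void main(String[] args) {
    String entry = encode("hdfs://source-nn:8020/warehouse/t/000000_0",
        "12345abc", URI.create("hdfs://source-nn:8020/user/hive/cmroot"));
    System.out.println(String.join(" | ", decode(entry)));
  }
}
{code}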



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19343) Replication: The file uris being dumped should contain information about the uri of the source cluster's cm root

2018-04-27 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-19343:

Attachment: HIVE-19343.1.patch

> Replication: The file uris being dumped should contain information about the 
> uri of the source cluster's cm root
> 
>
> Key: HIVE-19343
> URL: https://issues.apache.org/jira/browse/HIVE-19343
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-19343.1.patch
>
>
> In replication v2, we use change manager (the location is specified by 
> cmroot: {{hive.repl.cmrootdir}}) to archive deleted files from the source 
> cluster so that they can later be copied on the target cluster. When files 
> are read from the cmroot, the target needs to know the appropriate file 
> system. This patch adds the fs information of the cmroot on the source to the 
> filenames that get written in the repldump command.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19343) Replication: The file uris being dumped should contain information about the uri of the source cluster's cm root

2018-04-27 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta reassigned HIVE-19343:
---

Assignee: Vaibhav Gumashta

> Replication: The file uris being dumped should contain information about the 
> uri of the source cluster's cm root
> 
>
> Key: HIVE-19343
> URL: https://issues.apache.org/jira/browse/HIVE-19343
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
>
> In replication v2, we use change manager (the location is specified by 
> cmroot: {{hive.repl.cmrootdir}}) to archive deleted files from the source 
> cluster so that they can later be copied on the target cluster. When files 
> are read from the cmroot, the target needs to know the appropriate file 
> system. This patch adds the fs information of the cmroot on the source to the 
> filenames that get written in the repldump command.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19327) qroupby_rollup_empty.q fails for insert-only transactional tables

2018-04-27 Thread Steve Yeom (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457060#comment-16457060
 ] 

Steve Yeom commented on HIVE-19327:
---

Hey [~sershe] I will check that case and will make sure it works. 

> qroupby_rollup_empty.q fails for insert-only transactional tables
> -
>
> Key: HIVE-19327
> URL: https://issues.apache.org/jira/browse/HIVE-19327
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Steve Yeom
>Assignee: Steve Yeom
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19327.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19327) qroupby_rollup_empty.q fails for insert-only transactional tables

2018-04-27 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457053#comment-16457053
 ] 

Sergey Shelukhin commented on HIVE-19327:
-

[~steveyeom2017] for the case when there are no finalDirs and operators must 
run, it will not filter dirs, it just returns the original dirs as is... 
This will work fine if the original dir is empty (causing finalDirs to be null).
However, if there is something inside dirs that was excluded from finalDirs, it 
will be included and read, which should not happen. I think this condition 
needs to be propagated up and handled the same as the other case - by 
generating a custom 0-row split.
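
For illustration, a minimal sketch of the distinction being described, with
hypothetical helper logic; this is not the actual
HiveInputFormat.processPathsForMmRead() code:
{code:java}
import java.util.Collections;
import java.util.List;
import org.apache.hadoop.fs.Path;

// The point is to distinguish "directory was empty" from "directory contained
// only content that getAcidState() excluded (e.g. aborted deltas)", and to
// surface the second case so the caller can emit a synthetic 0-row split
// instead of reading the raw directory.
public class MmPathFilterSketch {

  static List<Path> processPaths(List<Path> originalDirs, List<Path> filteredByAcidState) {
    if (!filteredByAcidState.isEmpty()) {
      return filteredByAcidState;          // normal case: read the valid dirs
    }
    if (originalDirs.isEmpty()) {
      return originalDirs;                 // truly empty input, nothing to read
    }
    // Everything was excluded: do NOT fall back to originalDirs; an empty result
    // here stands in for "generate a custom 0-row split upstream".
    return Collections.emptyList();
  }
}
{code}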


> qroupby_rollup_empty.q fails for insert-only transactional tables
> -
>
> Key: HIVE-19327
> URL: https://issues.apache.org/jira/browse/HIVE-19327
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Steve Yeom
>Assignee: Steve Yeom
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19327.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19282) don't nest delta directories inside LB directories for ACID tables

2018-04-27 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19282:

Attachment: HIVE-19282.02.patch

> don't nest delta directories inside LB directories for ACID tables
> --
>
> Key: HIVE-19282
> URL: https://issues.apache.org/jira/browse/HIVE-19282
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19282.01.patch, HIVE-19282.02.patch, 
> HIVE-19282.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19282) don't nest delta directories inside LB directories for ACID tables

2018-04-27 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457047#comment-16457047
 ] 

Sergey Shelukhin commented on HIVE-19282:
-

Rebased the patch. [~prasanth_j] can you please review? thnx

> don't nest delta directories inside LB directories for ACID tables
> --
>
> Key: HIVE-19282
> URL: https://issues.apache.org/jira/browse/HIVE-19282
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19282.01.patch, HIVE-19282.02.patch, 
> HIVE-19282.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18958) Fix Spark config warnings

2018-04-27 Thread Bharathkrishna Guruvayoor Murali (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457027#comment-16457027
 ] 

Bharathkrishna Guruvayoor Murali commented on HIVE-18958:
-

Ran the same tests on the master branch and noticed the same differences in the 
q.out files.

Hence, the test failures look unrelated.

> Fix Spark config warnings
> -
>
> Key: HIVE-18958
> URL: https://issues.apache.org/jira/browse/HIVE-18958
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-18958.01.patch, HIVE-18958.02.patch, 
> HIVE-18958.03.patch, HIVE-18958.testDiff.patch
>
>
> Getting a few configuration warnings in the logs that we should fix:
> {code}
> 2018-03-14T10:06:19,164  WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> spark.SparkConf: The configuration key 'spark.yarn.driver.memoryOverhead' has 
> been deprecated as of Spark 2.3 and may be removed in the future. Please use 
> the new key 'spark.driver.memoryOverhead' instead.
> 2018-03-14T10:06:19,165  WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> spark.SparkConf: The configuration key spark.akka.logLifecycleEvents is not 
> supported any more because Spark doesn't use Akka since 2.0
> 2018-03-14T10:06:19,165  WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> spark.SparkConf: The configuration key 'spark.yarn.executor.memoryOverhead' 
> has been deprecated as of Spark 2.3 and may be removed in the future. Please 
> use the new key 'spark.executor.memoryOverhead' instead.
> 2018-03-14T10:06:20,351  INFO 
> [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> client.SparkClientImpl: Warning: Ignoring non-spark config property: 
> hive.spark.client.server.connect.timeout=9
> 2018-03-14T10:06:20,351  INFO 
> [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> client.SparkClientImpl: Warning: Ignoring non-spark config property: 
> hive.spark.client.rpc.threads=8
> 2018-03-14T10:06:20,351  INFO 
> [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> client.SparkClientImpl: Warning: Ignoring non-spark config property: 
> hive.spark.client.connect.timeout=3
> 2018-03-14T10:06:20,351  INFO 
> [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> client.SparkClientImpl: Warning: Ignoring non-spark config property: 
> hive.spark.client.secret.bits=256
> 2018-03-14T10:06:20,351  INFO 
> [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> client.SparkClientImpl: Warning: Ignoring non-spark config property: 
> hive.spark.client.rpc.max.size=52428800
> {code}
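
As a rough sketch (not the actual patch), the cleanup these warnings call for is
to translate the keys Spark has deprecated and keep Hive-internal keys out of the
properties handed to SparkConf; how the filtered Hive settings then reach the
RemoteDriver is a separate question:
{code:java}
import java.util.HashMap;
import java.util.Map;

public class SparkConfCleanupSketch {
  private static final Map<String, String> RENAMED = new HashMap<>();
  static {
    RENAMED.put("spark.yarn.driver.memoryOverhead", "spark.driver.memoryOverhead");
    RENAMED.put("spark.yarn.executor.memoryOverhead", "spark.executor.memoryOverhead");
  }

  static Map<String, String> clean(Map<String, String> conf) {
    Map<String, String> out = new HashMap<>();
    for (Map.Entry<String, String> e : conf.entrySet()) {
      String key = e.getKey();
      if (key.startsWith("hive.spark.client.")) {
        continue;          // Hive RPC settings, not meant for SparkConf
      }
      if (key.startsWith("spark.akka.")) {
        continue;          // Akka is gone since Spark 2.0
      }
      out.put(RENAMED.getOrDefault(key, key), e.getValue());
    }
    return out;
  }
}
{code}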



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18916) SparkClientImpl doesn't error out if spark-submit fails

2018-04-27 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457021#comment-16457021
 ] 

Sahil Takiar commented on HIVE-18916:
-

Did some work on this a while ago, attaching what I have so far.

> SparkClientImpl doesn't error out if spark-submit fails
> ---
>
> Key: HIVE-18916
> URL: https://issues.apache.org/jira/browse/HIVE-18916
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18916.1.WIP.patch
>
>
> If {{spark-submit}} returns a non-zero exit code, {{SparkClientImpl}} will 
> simply log the exit code, but won't throw an error. Eventually, the 
> connection timeout will get triggered and an exception like {{Timed out 
> waiting for client connection}} will be logged, which is pretty misleading.
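
For illustration, a minimal sketch of the behavior being asked for; the method and
thread names are placeholders, not the real SparkClientImpl internals:
{code:java}
import java.util.concurrent.CompletableFuture;

public class SparkSubmitWatcherSketch {
  // Completes exceptionally if spark-submit dies with a non-zero exit code, so the
  // code waiting for the remote driver connection can fail immediately instead of
  // only logging the exit code and waiting for the connection timeout.
  public static CompletableFuture<Void> watch(Process sparkSubmit) {
    CompletableFuture<Void> result = new CompletableFuture<>();
    Thread watcher = new Thread(() -> {
      try {
        int exitCode = sparkSubmit.waitFor();
        if (exitCode == 0) {
          result.complete(null);
        } else {
          result.completeExceptionally(
              new RuntimeException("spark-submit exited with code " + exitCode));
        }
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
        result.completeExceptionally(ie);
      }
    }, "spark-submit-exit-watcher");
    watcher.setDaemon(true);
    watcher.start();
    return result;
  }
}
{code}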



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18916) SparkClientImpl doesn't error out if spark-submit fails

2018-04-27 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-18916:

Attachment: HIVE-18916.1.WIP.patch

> SparkClientImpl doesn't error out if spark-submit fails
> ---
>
> Key: HIVE-18916
> URL: https://issues.apache.org/jira/browse/HIVE-18916
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18916.1.WIP.patch
>
>
> If {{spark-submit}} returns a non-zero exit code, {{SparkClientImpl}} will 
> simply log the exit code, but won't throw an error. Eventually, the 
> connection timeout will get triggered and an exception like {{Timed out 
> waiting for client connection}} will be logged, which is pretty misleading.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19338) isExplicitAnalyze method may be incorrect in BasicStatsTask

2018-04-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457020#comment-16457020
 ] 

Hive QA commented on HIVE-19338:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10536/dev-support/hive-personality.sh
 |
| git revision | master / 6f54709 |
| Default Java | 1.8.0_111 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10536/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> isExplicitAnalyze method may be incorrect in BasicStatsTask
> ---
>
> Key: HIVE-19338
> URL: https://issues.apache.org/jira/browse/HIVE-19338
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19338.patch
>
>
> It relies on a specific ctor being used, however this ctor is used on 
> non-analyze paths too.
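
For illustration, a minimal sketch of the alternative being implied (all names
hypothetical, not the real BasicStatsTask/StatsWork code): carry an explicit flag
that is set only on the ANALYZE compilation path, instead of inferring it from
which constructor was used:
{code:java}
public class StatsWorkSketch {
  private final boolean explicitAnalyze;

  // Set true only when compiling ANALYZE TABLE ... COMPUTE STATISTICS.
  public StatsWorkSketch(boolean explicitAnalyze) {
    this.explicitAnalyze = explicitAnalyze;
  }

  public boolean isExplicitAnalyze() {
    return explicitAnalyze;   // no guessing based on how the object was built
  }
}
{code}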



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19239) Check for possible null timestamp fields during SerDe from Druid events

2018-04-27 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-19239:

   Resolution: Fixed
Fix Version/s: 3.1.0
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks, Slim!
[~vgarg] Please consider it for branch-3

> Check for possible null timestamp fields during SerDe from Druid events
> ---
>
> Key: HIVE-19239
> URL: https://issues.apache.org/jira/browse/HIVE-19239
> Project: Hive
>  Issue Type: Bug
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-19239.patch
>
>
> Currently we do not check for possible null timestamp events.
> This might lead to an NPE.
> This patch adds an additional check for such cases.
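
For illustration, a minimal sketch of the guard being described; the event/field
handling below is illustrative, not the actual DruidSerDe code:
{code:java}
import java.util.Map;

public class NullTimestampGuardSketch {
  // Returns null instead of throwing an NPE when the timestamp field is missing;
  // the caller then decides whether to drop the row or emit a NULL timestamp.
  static Long extractTimestampMillis(Map<String, Object> druidEvent, String timestampColumn) {
    Object ts = druidEvent.get(timestampColumn);
    if (ts == null) {
      return null;
    }
    return ((Number) ts).longValue();
  }
}
{code}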



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18915) Better client logging when a HoS session can't be opened

2018-04-27 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457016#comment-16457016
 ] 

Sahil Takiar commented on HIVE-18915:
-

+1 LGTM

> Better client logging when a HoS session can't be opened
> 
>
> Key: HIVE-18915
> URL: https://issues.apache.org/jira/browse/HIVE-18915
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: 3.0.0
>Reporter: Sahil Takiar
>Assignee: Aihua Xu
>Priority: Major
> Attachments: HIVE-18915.1.patch, HIVE-18915.2.patch, 
> HIVE-18915.3.patch, HIVE-18915.4.patch
>
>
> Users just get a {{FAILED: Execution Error, return code 30041 from 
> org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client 
> for Spark session [id]}} when a HoS session can't be opened, would be better 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19327) qroupby_rollup_empty.q fails for insert-only transactional tables

2018-04-27 Thread Steve Yeom (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457013#comment-16457013
 ] 

Steve Yeom commented on HIVE-19327:
---

Hi [~prasanth_j] could you look at the patch? 
1. The age-1 failed tests from the p-tests run all pass in my environment, 
  except TestJdbcWithDBTokenStoreNoDoAs.java (which tests connection and auth, 
so it does not seem related).
2. As we discussed, HiveInputFormat.processPathsForMmRead() is used for the test 
case, and I have modified that method. 
  It runs getAcidState() to filter out aborted-transaction-related directories 
when building the result set.

Thanks, 
Steve. 

> qroupby_rollup_empty.q fails for insert-only transactional tables
> -
>
> Key: HIVE-19327
> URL: https://issues.apache.org/jira/browse/HIVE-19327
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Steve Yeom
>Assignee: Steve Yeom
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19327.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19239) Check for possible null timestamp fields during SerDe from Druid events

2018-04-27 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457010#comment-16457010
 ] 

Ashutosh Chauhan commented on HIVE-19239:
-

I see. +1

> Check for possible null timestamp fields during SerDe from Druid events
> ---
>
> Key: HIVE-19239
> URL: https://issues.apache.org/jira/browse/HIVE-19239
> Project: Hive
>  Issue Type: Bug
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Attachments: HIVE-19239.patch
>
>
> Currently we do not check for possible null timestamp events.
> This might lead to an NPE.
> This patch adds an additional check for such cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17657) export/import for MM tables is broken

2018-04-27 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-17657:

Attachment: HIVE-17657.05.patch

> export/import for MM tables is broken
> -
>
> Key: HIVE-17657
> URL: https://issues.apache.org/jira/browse/HIVE-17657
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17657.01.patch, HIVE-17657.02.patch, 
> HIVE-17657.03.patch, HIVE-17657.04.patch, HIVE-17657.05.patch, 
> HIVE-17657.patch
>
>
> there is mm_exim.q but it's not clear from the tests what file structure it 
> creates 
> On import the txnids in the directory names would have to be remapped if 
> importing to a different cluster.  Perhaps export can be smart and export 
> highest base_x and accretive deltas (minus aborted ones).  Then import can 
> ...?  It would have to remap txn ids from the archive to new txn ids.  This 
> would then mean that import is made up of several transactions rather than 1 
> atomic op.  (all locks must belong to a transaction)
> One possibility is to open a new txn for each dir in the archive (where 
> start/end txn of file name is the same) and commit all of them at once (need 
> new TMgr API for that).  This assumes using a shared lock (if any!) and thus 
> allows other inserts (not related to import) to occur.
> What if you have delta_6_9, such as a result of concatenate?  If we stipulate 
> that this must mean that there is no delta_6_6 or any other "obsolete" delta 
> in the archive we can map it to a new single txn delta_x_x.
> Add read_only mode for tables (useful in general, may be needed for upgrade 
> etc) and use that to make the above atomic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17657) export/import for MM tables is broken

2018-04-27 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457008#comment-16457008
 ] 

Sergey Shelukhin commented on HIVE-17657:
-

Rebased the patch. I bet HiveQA will find a way to lose it somehow.

> export/import for MM tables is broken
> -
>
> Key: HIVE-17657
> URL: https://issues.apache.org/jira/browse/HIVE-17657
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17657.01.patch, HIVE-17657.02.patch, 
> HIVE-17657.03.patch, HIVE-17657.04.patch, HIVE-17657.05.patch, 
> HIVE-17657.patch
>
>
> there is mm_exim.q but it's not clear from the tests what file structure it 
> creates 
> On import the txnids in the directory names would have to be remapped if 
> importing to a different cluster.  Perhaps export can be smart and export 
> highest base_x and accretive deltas (minus aborted ones).  Then import can 
> ...?  It would have to remap txn ids from the archive to new txn ids.  This 
> would then mean that import is made up of several transactions rather than 1 
> atomic op.  (all locks must belong to a transaction)
> One possibility is to open a new txn for each dir in the archive (where 
> start/end txn of file name is the same) and commit all of them at once (need 
> new TMgr API for that).  This assumes using a shared lock (if any!) and thus 
> allows other inserts (not related to import) to occur.
> What if you have delta_6_9, such as a result of concatenate?  If we stipulate 
> that this must mean that there is no delta_6_6 or any other "obsolete" delta 
> in the archive we can map it to a new single txn delta_x_x.
> Add read_only mode for tables (useful in general, may be needed for upgrade 
> etc) and use that to make the above atomic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18958) Fix Spark config warnings

2018-04-27 Thread Bharathkrishna Guruvayoor Murali (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457005#comment-16457005
 ] 

Bharathkrishna Guruvayoor Murali commented on HIVE-18958:
-

[~stakiar]

Attached the file HIVE-18958.testDiff.patch, which contains the test output 
differences.
All the builds were successful, but I noticed differences in the q.out files.

> Fix Spark config warnings
> -
>
> Key: HIVE-18958
> URL: https://issues.apache.org/jira/browse/HIVE-18958
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-18958.01.patch, HIVE-18958.02.patch, 
> HIVE-18958.03.patch, HIVE-18958.testDiff.patch
>
>
> Getting a few configuration warnings in the logs that we should fix:
> {code}
> 2018-03-14T10:06:19,164  WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> spark.SparkConf: The configuration key 'spark.yarn.driver.memoryOverhead' has 
> been deprecated as of Spark 2.3 and may be removed in the future. Please use 
> the new key 'spark.driver.memoryOverhead' instead.
> 2018-03-14T10:06:19,165  WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> spark.SparkConf: The configuration key spark.akka.logLifecycleEvents is not 
> supported any more because Spark doesn't use Akka since 2.0
> 2018-03-14T10:06:19,165  WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> spark.SparkConf: The configuration key 'spark.yarn.executor.memoryOverhead' 
> has been deprecated as of Spark 2.3 and may be removed in the future. Please 
> use the new key 'spark.executor.memoryOverhead' instead.
> 2018-03-14T10:06:20,351  INFO 
> [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> client.SparkClientImpl: Warning: Ignoring non-spark config property: 
> hive.spark.client.server.connect.timeout=9
> 2018-03-14T10:06:20,351  INFO 
> [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> client.SparkClientImpl: Warning: Ignoring non-spark config property: 
> hive.spark.client.rpc.threads=8
> 2018-03-14T10:06:20,351  INFO 
> [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> client.SparkClientImpl: Warning: Ignoring non-spark config property: 
> hive.spark.client.connect.timeout=3
> 2018-03-14T10:06:20,351  INFO 
> [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> client.SparkClientImpl: Warning: Ignoring non-spark config property: 
> hive.spark.client.secret.bits=256
> 2018-03-14T10:06:20,351  INFO 
> [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> client.SparkClientImpl: Warning: Ignoring non-spark config property: 
> hive.spark.client.rpc.max.size=52428800
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18958) Fix Spark config warnings

2018-04-27 Thread Bharathkrishna Guruvayoor Murali (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharathkrishna Guruvayoor Murali updated HIVE-18958:

Attachment: HIVE-18958.testDiff.patch

> Fix Spark config warnings
> -
>
> Key: HIVE-18958
> URL: https://issues.apache.org/jira/browse/HIVE-18958
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-18958.01.patch, HIVE-18958.02.patch, 
> HIVE-18958.03.patch, HIVE-18958.testDiff.patch
>
>
> Getting a few configuration warnings in the logs that we should fix:
> {code}
> 2018-03-14T10:06:19,164  WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> spark.SparkConf: The configuration key 'spark.yarn.driver.memoryOverhead' has 
> been deprecated as of Spark 2.3 and may be removed in the future. Please use 
> the new key 'spark.driver.memoryOverhead' instead.
> 2018-03-14T10:06:19,165  WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> spark.SparkConf: The configuration key spark.akka.logLifecycleEvents is not 
> supported any more because Spark doesn't use Akka since 2.0
> 2018-03-14T10:06:19,165  WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> spark.SparkConf: The configuration key 'spark.yarn.executor.memoryOverhead' 
> has been deprecated as of Spark 2.3 and may be removed in the future. Please 
> use the new key 'spark.executor.memoryOverhead' instead.
> 2018-03-14T10:06:20,351  INFO 
> [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> client.SparkClientImpl: Warning: Ignoring non-spark config property: 
> hive.spark.client.server.connect.timeout=9
> 2018-03-14T10:06:20,351  INFO 
> [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> client.SparkClientImpl: Warning: Ignoring non-spark config property: 
> hive.spark.client.rpc.threads=8
> 2018-03-14T10:06:20,351  INFO 
> [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> client.SparkClientImpl: Warning: Ignoring non-spark config property: 
> hive.spark.client.connect.timeout=3
> 2018-03-14T10:06:20,351  INFO 
> [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> client.SparkClientImpl: Warning: Ignoring non-spark config property: 
> hive.spark.client.secret.bits=256
> 2018-03-14T10:06:20,351  INFO 
> [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> client.SparkClientImpl: Warning: Ignoring non-spark config property: 
> hive.spark.client.rpc.max.size=52428800
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19206) Automatic memory management for open streaming writers

2018-04-27 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-19206:
-
Attachment: HIVE-19206.2.patch

> Automatic memory management for open streaming writers
> --
>
> Key: HIVE-19206
> URL: https://issues.apache.org/jira/browse/HIVE-19206
> Project: Hive
>  Issue Type: Sub-task
>  Components: Streaming
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-19206.1.patch, HIVE-19206.2.patch
>
>
> Problem:
>  When there are 100s of record updaters open, the amount of memory required 
> by orc writers keeps growing because of ORC's internal buffers. This can lead 
> to potential high GC or OOM during streaming ingest.
> Solution:
>  The high level idea is for the streaming connection to remember all the open 
> record updaters and flush the record updater periodically (at some interval). 
> Records written to each record updater can be used as a metric to determine 
> the candidate record updaters for flushing. 
>  If stripe size of orc file is 64MB, the default memory management check 
> happens only after every 5000 rows, which may be too late when there 
> are too many concurrent writers in a process. Example case would be 100 
> writers open and each of them have almost full stripe of 64MB buffered data, 
> this would take 100*64MB ~=6GB of memory. When all of the record writers 
> flush, the memory usage drops down to 100*~2MB which is just ~200MB memory 
> usage.
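
As a rough sketch of the idea in the description (not the actual patch, and with
placeholder class/method names): the connection tracks rows written per open
record updater and flushes the heavy ones on a periodic check, rather than
relying only on the per-5000-rows check inside each writer:
{code:java}
import java.io.Flushable;
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class StreamingFlushSketch {
  private final Map<String, Long> rowsSinceFlush = new ConcurrentHashMap<>();
  private final long flushThresholdRows;

  public StreamingFlushSketch(long flushThresholdRows) {
    this.flushThresholdRows = flushThresholdRows;
  }

  public void recordWritten(String updaterKey) {
    rowsSinceFlush.merge(updaterKey, 1L, Long::sum);
  }

  // Called periodically (e.g. from a scheduled task) for all open updaters.
  public void maybeFlush(Map<String, ? extends Flushable> openUpdaters) throws IOException {
    for (Map.Entry<String, Long> e : rowsSinceFlush.entrySet()) {
      Flushable updater = openUpdaters.get(e.getKey());
      if (updater != null && e.getValue() >= flushThresholdRows) {
        updater.flush();                  // releases that updater's buffered ORC data
        rowsSinceFlush.put(e.getKey(), 0L);
      }
    }
  }
}
{code}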



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19206) Automatic memory management for open streaming writers

2018-04-27 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16456992#comment-16456992
 ] 

Prasanth Jayachandran commented on HIVE-19206:
--

- Added config to disable auto flush (mainly for testing)
- minor fixes

> Automatic memory management for open streaming writers
> --
>
> Key: HIVE-19206
> URL: https://issues.apache.org/jira/browse/HIVE-19206
> Project: Hive
>  Issue Type: Sub-task
>  Components: Streaming
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-19206.1.patch, HIVE-19206.2.patch
>
>
> Problem:
>  When there are 100s of record updaters open, the amount of memory required 
> by orc writers keeps growing because of ORC's internal buffers. This can lead 
> to potential high GC or OOM during streaming ingest.
> Solution:
>  The high level idea is for the streaming connection to remember all the open 
> record updaters and flush the record updater periodically (at some interval). 
> Records written to each record updater can be used as a metric to determine 
> the candidate record updaters for flushing. 
>  If stripe size of orc file is 64MB, the default memory management check 
> happens only after every 5000 rows, which may be too late when there 
> are too many concurrent writers in a process. Example case would be 100 
> writers open and each of them have almost full stripe of 64MB buffered data, 
> this would take 100*64MB ~=6GB of memory. When all of the record writers 
> flush, the memory usage drops down to 100*~2MB which is just ~200MB memory 
> usage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19211) New streaming ingest API and support for dynamic partitioning

2018-04-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16456988#comment-16456988
 ] 

Hive QA commented on HIVE-19211:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12920905/HIVE-19211.8.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10535/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10535/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10535/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-04-27 20:15:00.052
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-10535/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-04-27 20:15:00.055
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 6f54709 HIVE-19330: multi_insert_partitioned.q fails with "src 
table does not exist" message. (Steve Yeom, reviewed by Jason Dere)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 6f54709 HIVE-19330: multi_insert_partitioned.q fails with "src 
table does not exist" message. (Steve Yeom, reviewed by Jason Dere)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-04-27 20:15:00.647
+ rm -rf ../yetus_PreCommit-HIVE-Build-10535
+ mkdir ../yetus_PreCommit-HIVE-Build-10535
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-10535
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-10535/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java: does not 
exist in index
error: a/itests/hive-unit/pom.xml: does not exist in index
error: 
a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCompactor.java:
 does not exist in index
error: 
a/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreUtils.java: 
does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcRecordUpdater.java: 
does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/lockmgr/DbTxnManager.java: does 
not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java: does not 
exist in index
error: a/streaming/pom.xml: does not exist in index
error: 
a/streaming/src/java/org/apache/hive/streaming/AbstractRecordWriter.java: does 
not exist in index
error: a/streaming/src/java/org/apache/hive/streaming/ConnectionError.java: 
does not exist in index
error: 
a/streaming/src/java/org/apache/hive/streaming/DelimitedInputWriter.java: does 
not exist in index
error: a/streaming/src/java/org/apache/hive/streaming/HeartBeatFailure.java: 
does not exist in index
error: a/streaming/src/java/org/apache/hive/streaming/HiveEndPoint.java: does 
not exist in index
error: a/streaming/src/java/org/apache/hive/streaming/ImpersonationFailed.java: 
does not exist in index
error: a/streaming/src/java/org/apache/hive/streaming/InvalidColumn.java: does 
not exist in index
error: a/streaming/src/java/org/apache/hive/streaming/InvalidPartition.java: 
does not exist in index
error: a/streaming/src/java/org/apache/hive/streaming/InvalidTable.java: does 
not exist in index
error: 
a/streaming/src/java/org/apache/hive/streaming/InvalidTrasactionState.java: 
does not exist in index
error: 
a/streaming/src/java/org/apache/hive/streaming/PartitionCreationFailed.java: 
does not exist in index
error: 
a/streaming/src/java/org/apache/hive/streaming/QueryFailedException.java: 

[jira] [Commented] (HIVE-19337) Partition whitelist regex doesn't work (and never did)

2018-04-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16456986#comment-16456986
 ] 

Hive QA commented on HIVE-19337:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12920899/HIVE-19337.01.branch-2.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10534/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10534/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10534/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-04-27 20:12:04.291
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-10534/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-04-27 20:12:04.295
+ cd apache-github-source-source
+ git fetch origin
From https://github.com/apache/hive
   331dd57..6f54709  master -> origin/master
   c20f7e1..7cbd648  branch-3   -> origin/branch-3
+ git reset --hard HEAD
HEAD is now at 331dd57 HIVE-18903: Lower Logging Level for ObjectStore (Antal 
Sinkovits, reviewed by Sahil Takiar)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded.
  (use "git pull" to update your local branch)
+ git reset --hard origin/master
HEAD is now at 6f54709 HIVE-19330: multi_insert_partitioned.q fails with "src 
table does not exist" message. (Steve Yeom, reviewed by Jason Dere)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-04-27 20:12:08.027
+ rm -rf ../yetus_PreCommit-HIVE-Build-10534
+ mkdir ../yetus_PreCommit-HIVE-Build-10534
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-10534
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-10534/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: a/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java: 
does not exist in index
error: metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java: 
does not exist in index
error: src/java/org/apache/hadoop/hive/metastore/ObjectStore.java: does not 
exist in index
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12920899 - PreCommit-HIVE-Build

> Partition whitelist regex doesn't work (and never did)
> --
>
> Key: HIVE-19337
> URL: https://issues.apache.org/jira/browse/HIVE-19337
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 2.3.3
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
>Priority: Major
> Attachments: HIVE-19337.01.branch-2.patch
>
>
> {{ObjectStore.setConf()}} has the following code:
> {code:java}
> String partitionValidationRegex =
>  
> hiveConf.get(HiveConf.ConfVars.METASTORE_PARTITION_NAME_WHITELIST_PATTERN.name());
> {code}
>  Note that it uses the name() method, which returns the enum name 
> (METASTORE_PARTITION_NAME_WHITELIST_PATTERN), rather than .varname.
> As a result the regex will always be null.
> The code was introduced as part of 
> HIVE-7223 Support generic PartitionSpecs in Metastore partition-functions.
> So it looks like this has been broken since the original code drop. This is 
> fixed in Hive 3 - probably when [~alangates] reworked access to configuration 
> (HIVE-17733) - so it isn't a bug in Hive-3.
> [~stakiar_impala_496e] FYI.
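
For illustration, a minimal sketch of the broken lookup next to the obvious fix,
using the real ConfVars constant:
{code:java}
import org.apache.hadoop.hive.conf.HiveConf;

public class WhitelistPatternLookupSketch {
  static String broken(HiveConf hiveConf) {
    // Looks up the key "METASTORE_PARTITION_NAME_WHITELIST_PATTERN", which never
    // appears in the configuration, so this always returns null.
    return hiveConf.get(
        HiveConf.ConfVars.METASTORE_PARTITION_NAME_WHITELIST_PATTERN.name());
  }

  static String fixed(HiveConf hiveConf) {
    // Looks up the real key, "hive.metastore.partition.name.whitelist.pattern".
    return hiveConf.get(
        HiveConf.ConfVars.METASTORE_PARTITION_NAME_WHITELIST_PATTERN.varname);
  }
}
{code}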



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19330) multi_insert_partitioned.q fails with "src table does not exist" message.

2018-04-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16456982#comment-16456982
 ] 

Hive QA commented on HIVE-19330:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12920892/HIVE-19330.01.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 54 failed/errored test(s), 14284 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=93)

[infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q]
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed 
out) (batchId=217)
TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] 
(batchId=17)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_1]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_5]
 (batchId=154)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_stats]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1]
 (batchId=183)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=105)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[cluster_tasklog_retrieval]
 (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace]
 (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace_turnoff]
 (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[minimr_broken_pipe]
 (batchId=98)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join_reordering_values]
 (batchId=110)
org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion 
(batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 
(batchId=228)
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232)
org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate
 (batchId=231)
org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys
 (batchId=231)
org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testDifferentFiltersAreNotMatched
 (batchId=298)
org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testSameFiltersMatched 
(batchId=298)
org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
 (batchId=298)
org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
 (batchId=298)
org.apache.hadoop.hive.ql.plan.mapping.TestReOptimization.testNotReExecutedIfAssertionError
 (batchId=298)
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=235)
org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=240)
org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryElapsedTime
 (batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryExecutionTime
 (batchId=242)

[jira] [Commented] (HIVE-19332) Disable compute.query.using.stats for external table

2018-04-27 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16456981#comment-16456981
 ] 

Jason Dere commented on HIVE-19332:
---

Initial patch - with it, external table stats will show up as not up-to-date.
[~gopalv] [~jcamachorodriguez] does this approach look good? If so, I will try 
to add a qtest.
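
To make the failure mode concrete, here is a minimal HiveQL sketch (the table 
name, location, and data are made up; the property is the existing 
hive.compute.query.using.stats):

{code}
-- Hypothetical walkthrough; ext_demo and its location are illustrative only.
SET hive.compute.query.using.stats=true;

CREATE EXTERNAL TABLE ext_demo (id INT)
LOCATION '/tmp/ext_demo';

-- Basic stats are gathered for the files Hive can currently see.
ANALYZE TABLE ext_demo COMPUTE STATISTICS;

-- Some other tool now writes additional files directly into /tmp/ext_demo.

-- Answered from the (now stale) stats, so the new files are not counted.
SELECT COUNT(*) FROM ext_demo;

-- Session-level workaround today: force a real scan instead of a stats answer.
SET hive.compute.query.using.stats=false;
SELECT COUNT(*) FROM ext_demo;
{code}

If this approach lands, the first COUNT(*) above should fall back to a scan 
automatically, since the external table's stats would no longer be treated as 
up-to-date.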

> Disable compute.query.using.stats for external table
> 
>
> Key: HIVE-19332
> URL: https://issues.apache.org/jira/browse/HIVE-19332
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jason Dere
>Priority: Major
> Attachments: HIVE-19332.1.patch
>
>
> Hive can use statistics to answer queries like count(*). This can be 
> problematic on external tables where another tool might add files that Hive 
> doesn’t know about. In that case Hive will return incorrect results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19332) Disable compute.query.using.stats for external table

2018-04-27 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-19332:
--
Attachment: HIVE-19332.1.patch

> Disable compute.query.using.stats for external table
> 
>
> Key: HIVE-19332
> URL: https://issues.apache.org/jira/browse/HIVE-19332
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jason Dere
>Priority: Major
> Attachments: HIVE-19332.1.patch
>
>
> Hive can use statistics to answer queries like count(*). This can be 
> problematic on external tables where another tool might add files that Hive 
> doesn’t know about. In that case Hive will return incorrect results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19332) Disable compute.query.using.stats for external table

2018-04-27 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16456965#comment-16456965
 ] 

Jason Dere commented on HIVE-19332:
---

[~gopalv] has pointed out that both this and HIVE-19333 can be accomplished by 
preventing external table stats from showing up as complete stats.

> Disable compute.query.using.stats for external table
> 
>
> Key: HIVE-19332
> URL: https://issues.apache.org/jira/browse/HIVE-19332
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jason Dere
>Priority: Major
>
> Hive can use statistics to answer queries like count(*). This can be 
> problematic on external tables where another tool might add files that Hive 
> doesn’t know about. In that case Hive will return incorrect results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18958) Fix Spark config warnings

2018-04-27 Thread Bharathkrishna Guruvayoor Murali (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16456937#comment-16456937
 ] 

Bharathkrishna Guruvayoor Murali commented on HIVE-18958:
-

I will check whether any of the test failures are related and update the patch if required.
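
For anyone silencing these warnings in their own setups, a hedged session-level 
sketch of the key renames called out in the deprecation messages quoted below 
(the overhead values are illustrative only, not recommendations):

{code}
-- Prefer the Spark 2.3+ key names over the deprecated spark.yarn.* variants
-- flagged in the warnings. Values are illustrative only.
SET spark.driver.memoryOverhead=512;
SET spark.executor.memoryOverhead=1024;
{code}

The hive.spark.client.* entries in the same log are Hive's own client/RPC 
settings rather than Spark properties, which is why Spark reports them as 
ignored non-spark configs.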

> Fix Spark config warnings
> -
>
> Key: HIVE-18958
> URL: https://issues.apache.org/jira/browse/HIVE-18958
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-18958.01.patch, HIVE-18958.02.patch, 
> HIVE-18958.03.patch
>
>
> Getting a few configuration warnings in the logs that we should fix:
> {code}
> 2018-03-14T10:06:19,164  WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> spark.SparkConf: The configuration key 'spark.yarn.driver.memoryOverhead' has 
> been deprecated as of Spark 2.3 and may be removed in the future. Please use 
> the new key 'spark.driver.memoryOverhead' instead.
> 2018-03-14T10:06:19,165  WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> spark.SparkConf: The configuration key spark.akka.logLifecycleEvents is not 
> supported any more because Spark doesn't use Akka since 2.0
> 2018-03-14T10:06:19,165  WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> spark.SparkConf: The configuration key 'spark.yarn.executor.memoryOverhead' 
> has been deprecated as of Spark 2.3 and may be removed in the future. Please 
> use the new key 'spark.executor.memoryOverhead' instead.
> 2018-03-14T10:06:20,351  INFO 
> [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> client.SparkClientImpl: Warning: Ignoring non-spark config property: 
> hive.spark.client.server.connect.timeout=9
> 2018-03-14T10:06:20,351  INFO 
> [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> client.SparkClientImpl: Warning: Ignoring non-spark config property: 
> hive.spark.client.rpc.threads=8
> 2018-03-14T10:06:20,351  INFO 
> [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> client.SparkClientImpl: Warning: Ignoring non-spark config property: 
> hive.spark.client.connect.timeout=3
> 2018-03-14T10:06:20,351  INFO 
> [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> client.SparkClientImpl: Warning: Ignoring non-spark config property: 
> hive.spark.client.secret.bits=256
> 2018-03-14T10:06:20,351  INFO 
> [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> client.SparkClientImpl: Warning: Ignoring non-spark config property: 
> hive.spark.client.rpc.max.size=52428800
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18958) Fix Spark config warnings

2018-04-27 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16456934#comment-16456934
 ] 

Sahil Takiar commented on HIVE-18958:
-

+1 LGTM

> Fix Spark config warnings
> -
>
> Key: HIVE-18958
> URL: https://issues.apache.org/jira/browse/HIVE-18958
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-18958.01.patch, HIVE-18958.02.patch, 
> HIVE-18958.03.patch
>
>
> Getting a few configuration warnings in the logs that we should fix:
> {code}
> 2018-03-14T10:06:19,164  WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> spark.SparkConf: The configuration key 'spark.yarn.driver.memoryOverhead' has 
> been deprecated as of Spark 2.3 and may be removed in the future. Please use 
> the new key 'spark.driver.memoryOverhead' instead.
> 2018-03-14T10:06:19,165  WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> spark.SparkConf: The configuration key spark.akka.logLifecycleEvents is not 
> supported any more because Spark doesn't use Akka since 2.0
> 2018-03-14T10:06:19,165  WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> spark.SparkConf: The configuration key 'spark.yarn.executor.memoryOverhead' 
> has been deprecated as of Spark 2.3 and may be removed in the future. Please 
> use the new key 'spark.executor.memoryOverhead' instead.
> 2018-03-14T10:06:20,351  INFO 
> [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> client.SparkClientImpl: Warning: Ignoring non-spark config property: 
> hive.spark.client.server.connect.timeout=9
> 2018-03-14T10:06:20,351  INFO 
> [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> client.SparkClientImpl: Warning: Ignoring non-spark config property: 
> hive.spark.client.rpc.threads=8
> 2018-03-14T10:06:20,351  INFO 
> [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> client.SparkClientImpl: Warning: Ignoring non-spark config property: 
> hive.spark.client.connect.timeout=3
> 2018-03-14T10:06:20,351  INFO 
> [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> client.SparkClientImpl: Warning: Ignoring non-spark config property: 
> hive.spark.client.secret.bits=256
> 2018-03-14T10:06:20,351  INFO 
> [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] 
> client.SparkClientImpl: Warning: Ignoring non-spark config property: 
> hive.spark.client.rpc.max.size=52428800
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19325) Custom Hive Patch - Remove beeline -n flag in Hive 0.13.1

2018-04-27 Thread Alejandro Fernandez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Fernandez updated HIVE-19325:
---
Status: Patch Available  (was: Open)

> Custom Hive Patch - Remove beeline -n flag in Hive 0.13.1
> -
>
> Key: HIVE-19325
> URL: https://issues.apache.org/jira/browse/HIVE-19325
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 0.13.1
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
>Priority: Major
> Fix For: 0.13.1
>
> Attachments: HIVE-19325-branch-0.13.1.patch
>
>
> This Jira is not meant to be contributed back, but I'm using it as a way to 
> run unit tests against a patch file.
> Specifically, TestBeelineWithArgs, ProxyAuthTest, and TestSchemaTool
> Remove beeline -n flag used for impersonation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19325) Custom Hive Patch - Remove beeline -n flag in Hive 0.13.1

2018-04-27 Thread Alejandro Fernandez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Fernandez updated HIVE-19325:
---
Attachment: (was: HIVE-19325.branch-0.13.1.patch)

> Custom Hive Patch - Remove beeline -n flag in Hive 0.13.1
> -
>
> Key: HIVE-19325
> URL: https://issues.apache.org/jira/browse/HIVE-19325
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 0.13.1
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
>Priority: Major
> Fix For: 0.13.1
>
> Attachments: HIVE-19325-branch-0.13.1.patch
>
>
> This Jira is not meant to be contributed back, but I'm using it as a way to 
> run unit tests against a patch file.
> Specifically, TestBeelineWithArgs, ProxyAuthTest, and TestSchemaTool
> Remove beeline -n flag used for impersonation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19325) Custom Hive Patch - Remove beeline -n flag in Hive 0.13.1

2018-04-27 Thread Alejandro Fernandez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Fernandez updated HIVE-19325:
---
Attachment: (was: HIVE-19325-0.13.1.patch)

> Custom Hive Patch - Remove beeline -n flag in Hive 0.13.1
> -
>
> Key: HIVE-19325
> URL: https://issues.apache.org/jira/browse/HIVE-19325
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 0.13.1
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
>Priority: Major
> Fix For: 0.13.1
>
> Attachments: HIVE-19325-branch-0.13.1.patch
>
>
> This Jira is not meant to be contributed back, but I'm using it as a way to 
> run unit tests against a patch file.
> Specifically, TestBeelineWithArgs, ProxyAuthTest, and TestSchemaTool
> Remove beeline -n flag used for impersonation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19325) Custom Hive Patch - Remove beeline -n flag in Hive 0.13.1

2018-04-27 Thread Alejandro Fernandez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Fernandez updated HIVE-19325:
---
Status: Open  (was: Patch Available)

> Custom Hive Patch - Remove beeline -n flag in Hive 0.13.1
> -
>
> Key: HIVE-19325
> URL: https://issues.apache.org/jira/browse/HIVE-19325
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 0.13.1
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
>Priority: Major
> Fix For: 0.13.1
>
> Attachments: HIVE-19325-branch-0.13.1.patch
>
>
> This Jira is not meant to be contributed back, but I'm using it as a way to 
> run unit tests against a patch file.
> Specifically, TestBeelineWithArgs, ProxyAuthTest, and TestSchemaTool
> Remove beeline -n flag used for impersonation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19325) Custom Hive Patch - Remove beeline -n flag in Hive 0.13.1

2018-04-27 Thread Alejandro Fernandez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Fernandez updated HIVE-19325:
---
Attachment: (was: HIVE-19325.0.13.1.patch)

> Custom Hive Patch - Remove beeline -n flag in Hive 0.13.1
> -
>
> Key: HIVE-19325
> URL: https://issues.apache.org/jira/browse/HIVE-19325
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 0.13.1
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
>Priority: Major
> Fix For: 0.13.1
>
> Attachments: HIVE-19325-branch-0.13.1.patch
>
>
> This Jira is not meant to be contributed back, but I'm using it as a way to 
> run unit tests against a patch file.
> Specifically, TestBeelineWithArgs, ProxyAuthTest, and TestSchemaTool
> Remove beeline -n flag used for impersonation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19342) Update Wiki with new murmur hash UDF

2018-04-27 Thread Deepak Jaiswal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal reassigned HIVE-19342:
-


> Update Wiki with new murmur hash UDF
> 
>
> Key: HIVE-19342
> URL: https://issues.apache.org/jira/browse/HIVE-19342
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19330) multi_insert_partitioned.q fails with "src table does not exist" message.

2018-04-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16456913#comment-16456913
 ] 

Hive QA commented on HIVE-19330:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 20s{color} 
| {color:red} 
/data/hiveptest/logs/PreCommit-HIVE-Build-10533/patches/PreCommit-HIVE-Build-10533.patch
 does not apply to master. Rebase required? Wrong Branch? See 
http://cwiki.apache.org/confluence/display/Hive/HowToContribute for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10533/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> multi_insert_partitioned.q fails with "src table does not exist" message.
> -
>
> Key: HIVE-19330
> URL: https://issues.apache.org/jira/browse/HIVE-19330
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Steve Yeom
>Assignee: Steve Yeom
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19330.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19330) multi_insert_partitioned.q fails with "src table does not exist" message.

2018-04-27 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-19330:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed to master/branch-3

> multi_insert_partitioned.q fails with "src table does not exist" message.
> -
>
> Key: HIVE-19330
> URL: https://issues.apache.org/jira/browse/HIVE-19330
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Steve Yeom
>Assignee: Steve Yeom
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19330.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18958) Fix Spark config warnings

2018-04-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16456895#comment-16456895
 ] 

Hive QA commented on HIVE-18958:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12920894/HIVE-18958.03.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 55 failed/errored test(s), 14284 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=93)

[infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q]
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed 
out) (batchId=217)
TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] 
(batchId=17)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_1]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_smb] 
(batchId=176)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_5]
 (batchId=154)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_stats]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1]
 (batchId=183)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=105)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[cluster_tasklog_retrieval]
 (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace]
 (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace_turnoff]
 (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[minimr_broken_pipe]
 (batchId=98)
org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion 
(batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 
(batchId=228)
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232)
org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate
 (batchId=231)
org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys
 (batchId=231)
org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testDifferentFiltersAreNotMatched
 (batchId=298)
org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testSameFiltersMatched 
(batchId=298)
org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testUnrelatedFiltersAreNotMatched0
 (batchId=298)
org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testUnrelatedFiltersAreNotMatched1
 (batchId=298)
org.apache.hadoop.hive.ql.plan.mapping.TestReOptimization.testNotReExecutedIfAssertionError
 (batchId=298)
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=235)
org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=240)
org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes 
(batchId=242)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryElapsedTime
 (batchId=242)

[jira] [Commented] (HIVE-19320) MapRedLocalTask is printing child log to stderr and stdout

2018-04-27 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16456889#comment-16456889
 ] 

Aihua Xu commented on HIVE-19320:
-

[~pvary] You are right. I think it is redirected to the HS2 log now with that 
change, but we can still remove the output to the console. Do you agree?

> MapRedLocalTask is printing child log to stderr and stdout
> --
>
> Key: HIVE-19320
> URL: https://issues.apache.org/jira/browse/HIVE-19320
> Project: Hive
>  Issue Type: Sub-task
>  Components: Logging
>Affects Versions: 3.0.0
>Reporter: Aihua Xu
>Priority: Major
>
> At this line, the local child MR task prints its logs to stderr and stdout: 
> https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/MapredLocalTask.java#L341
> stderr/stdout should carry the service's own runtime log rather than query 
> execution output, so it would be more reasonable for that output to go to the 
> HS2 log and propagate to the Beeline console. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

