[jira] [Commented] (HIVE-19031) Mark duplicate configs in HiveConf as deprecated

2018-03-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412412#comment-16412412
 ] 

Hive QA commented on HIVE-19031:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  9s{color} 
| {color:red} 
/data/hiveptest/logs/PreCommit-HIVE-Build-9792/patches/PreCommit-HIVE-Build-9792.patch
 does not apply to master. Rebase required? Wrong Branch? See 
http://cwiki.apache.org/confluence/display/Hive/HowToContribute for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9792/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Mark duplicate configs in HiveConf as deprecated
> 
>
> Key: HIVE-19031
> URL: https://issues.apache.org/jira/browse/HIVE-19031
> Project: Hive
>  Issue Type: Sub-task
>  Components: Configuration, Standalone Metastore
>Affects Versions: 3.0.0
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Major
> Attachments: HIVE-19031.patch
>
>
> There are a number of configuration values that were copied from HiveConf to 
> MetastoreConf.  They have been left in HiveConf for backwards compatibility.  
> But they need to be marked as deprecated so that users know to use the new 
> values in MetastoreConf.
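
As an aside for readers, a minimal sketch of the intent, assuming a hypothetical lookup helper (this is not the actual patch). The two key pairings shown are real HiveConf/MetastoreConf counterparts; everything else is illustrative.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: map old HiveConf keys to their MetastoreConf
// replacements and warn when a deprecated key is read.
public class DeprecatedConfSketch {
  private static final Map<String, String> DEPRECATED = new HashMap<>();
  static {
    DEPRECATED.put("hive.metastore.uris", "metastore.thrift.uris");
    DEPRECATED.put("hive.metastore.warehouse.dir", "metastore.warehouse.dir");
  }

  static String resolve(String key) {
    String replacement = DEPRECATED.get(key);
    if (replacement != null) {
      // Warn so users learn the new MetastoreConf name.
      System.err.println("WARN: " + key + " is deprecated; use " + replacement);
      return replacement;
    }
    return key;
  }

  public static void main(String[] args) {
    System.out.println(resolve("hive.metastore.uris")); // metastore.thrift.uris
  }
}
{code}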



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18910) Migrate to Murmur hash for shuffle and bucketing

2018-03-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412408#comment-16412408
 ] 

Hive QA commented on HIVE-18910:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12915975/HIVE-18910.12.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9791/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9791/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9791/

Messages:
{noformat}
 This message was trimmed, see log for full details 
error: a/ql/src/test/results/clientpositive/spark/groupby_sort_1_23.q.out: does 
not exist in index
error: a/ql/src/test/results/clientpositive/spark/groupby_sort_skew_1_23.q.out: 
does not exist in index
error: 
a/ql/src/test/results/clientpositive/spark/infer_bucket_sort_bucketed_table.q.out:
 does not exist in index
error: 
a/ql/src/test/results/clientpositive/spark/infer_bucket_sort_num_buckets.q.out: 
does not exist in index
error: a/ql/src/test/results/clientpositive/spark/input_part2.q.out: does not 
exist in index
error: a/ql/src/test/results/clientpositive/spark/join26.q.out: does not exist 
in index
error: a/ql/src/test/results/clientpositive/spark/join32.q.out: does not exist 
in index
error: a/ql/src/test/results/clientpositive/spark/join32_lessSize.q.out: does 
not exist in index
error: a/ql/src/test/results/clientpositive/spark/join33.q.out: does not exist 
in index
error: a/ql/src/test/results/clientpositive/spark/join34.q.out: does not exist 
in index
error: a/ql/src/test/results/clientpositive/spark/join35.q.out: does not exist 
in index
error: a/ql/src/test/results/clientpositive/spark/join_filters_overlap.q.out: 
does not exist in index
error: a/ql/src/test/results/clientpositive/spark/join_map_ppr.q.out: does not 
exist in index
error: a/ql/src/test/results/clientpositive/spark/list_bucket_dml_10.q.out: 
does not exist in index
error: a/ql/src/test/results/clientpositive/spark/list_bucket_dml_2.q.out: does 
not exist in index
error: a/ql/src/test/results/clientpositive/spark/load_dyn_part8.q.out: does 
not exist in index
error: a/ql/src/test/results/clientpositive/spark/optimize_nullscan.q.out: does 
not exist in index
error: a/ql/src/test/results/clientpositive/spark/parallel_orderby.q.out: does 
not exist in index
error: 
a/ql/src/test/results/clientpositive/spark/parquet_vectorization_0.q.out: does 
not exist in index
error: a/ql/src/test/results/clientpositive/spark/pcr.q.out: does not exist in 
index
error: a/ql/src/test/results/clientpositive/spark/quotedid_smb.q.out: does not 
exist in index
error: a/ql/src/test/results/clientpositive/spark/reduce_deduplicate.q.out: 
does not exist in index
error: a/ql/src/test/results/clientpositive/spark/sample1.q.out: does not exist 
in index
error: a/ql/src/test/results/clientpositive/spark/sample2.q.out: does not exist 
in index
error: a/ql/src/test/results/clientpositive/spark/sample3.q.out: does not exist 
in index
error: a/ql/src/test/results/clientpositive/spark/sample4.q.out: does not exist 
in index
error: a/ql/src/test/results/clientpositive/spark/sample5.q.out: does not exist 
in index
error: a/ql/src/test/results/clientpositive/spark/sample6.q.out: does not exist 
in index
error: a/ql/src/test/results/clientpositive/spark/sample7.q.out: does not exist 
in index
error: a/ql/src/test/results/clientpositive/spark/sample8.q.out: does not exist 
in index
error: a/ql/src/test/results/clientpositive/spark/sample9.q.out: does not exist 
in index
error: a/ql/src/test/results/clientpositive/spark/smb_mapjoin_1.q.out: does not 
exist in index
error: a/ql/src/test/results/clientpositive/spark/smb_mapjoin_13.q.out: does 
not exist in index
error: a/ql/src/test/results/clientpositive/spark/smb_mapjoin_15.q.out: does 
not exist in index
error: a/ql/src/test/results/clientpositive/spark/smb_mapjoin_18.q.out: does 
not exist in index
error: a/ql/src/test/results/clientpositive/spark/smb_mapjoin_19.q.out: does 
not exist in index
error: a/ql/src/test/results/clientpositive/spark/smb_mapjoin_20.q.out: does 
not exist in index
error: a/ql/src/test/results/clientpositive/spark/smb_mapjoin_22.q.out: does 
not exist in index
error: a/ql/src/test/results/clientpositive/spark/spark_union_merge.q.out: does 
not exist in index
error: a/ql/src/test/results/clientpositive/spark/stats0.q.out: does not exist 
in index
error: a/ql/src/test/results/clientpositive/spark/stats1.q.out: does not exist 
in index
error: a/ql/src/test/results/clientpositive/spark/stats10.q.out: does not exist 
in index
error: a/ql/src/test/results/clientpositive/spark/stats16.q.out: does not exist 
in index
error: a/ql/src/test/results/clientpositive/spark/stats3.q.out: does not exist 
in index
error: a/ql/src/test/results/clientpositive/spark/stats5.q.out: does not 

[jira] [Commented] (HIVE-18747) Cleaner for TXN_TO_WRITE_ID table entries using MIN_HISTORY_LEVEL.

2018-03-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412407#comment-16412407
 ] 

Hive QA commented on HIVE-18747:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12915762/HIVE-18747.02.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9789/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9789/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9789/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-03-24 04:41:12.516
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-9789/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-03-24 04:41:12.519
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at f1d4fcf HIVE-18982: Provide a CLI option to manually trigger 
failover (Prasanth Jayachandran reviewed by Sergey Shelukhin)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at f1d4fcf HIVE-18982: Provide a CLI option to manually trigger 
failover (Prasanth Jayachandran reviewed by Sergey Shelukhin)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-03-24 04:41:16.604
+ rm -rf ../yetus_PreCommit-HIVE-Build-9789
+ mkdir ../yetus_PreCommit-HIVE-Build-9789
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-9789
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-9789/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: a/metastore/scripts/upgrade/derby/hive-txn-schema-3.0.0.derby.sql: does 
not exist in index
error: a/metastore/scripts/upgrade/derby/upgrade-2.3.0-to-3.0.0.derby.sql: does 
not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java: does 
not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Initiator.java: 
does not exist in index
error: a/ql/src/test/org/apache/hadoop/hive/ql/TestTxnCommands2.java: does not 
exist in index
error: 
a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java:
 does not exist in index
error: 
a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnDbUtil.java:
 does not exist in index
error: 
a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java:
 does not exist in index
error: 
a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnStore.java:
 does not exist in index
error: a/standalone-metastore/src/main/sql/derby/hive-schema-3.0.0.derby.sql: 
does not exist in index
error: 
a/standalone-metastore/src/main/sql/derby/upgrade-2.3.0-to-3.0.0.derby.sql: 
does not exist in index
error: a/standalone-metastore/src/main/sql/mssql/hive-schema-3.0.0.mssql.sql: 
does not exist in index
error: 
a/standalone-metastore/src/main/sql/mssql/upgrade-2.3.0-to-3.0.0.mssql.sql: 
does not exist in index
error: a/standalone-metastore/src/main/sql/mysql/hive-schema-3.0.0.mysql.sql: 
does not exist in index
error: 
a/standalone-metastore/src/main/sql/mysql/upgrade-2.3.0-to-3.0.0.mysql.sql: 
does not exist in index
error: a/standalone-metastore/src/main/sql/oracle/hive-schema-3.0.0.oracle.sql: 
does not exist in index
error: 
a/standalone-metastore/src/main/sql/oracle/upgrade-2.3.0-to-3.0.0.oracle.sql: 
does not exist in index
error: 
a/standalone-metastore/src/main/sql/postgres/hive-schema-3.0.0.postgres.sql: 
does not exist in index
error: 

[jira] [Commented] (HIVE-19024) Vectorization: Disable complex type constants for VectorUDFAdaptor

2018-03-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412396#comment-16412396
 ] 

Hive QA commented on HIVE-19024:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12915760/HIVE-19024.01.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 28 failed/errored test(s), 13421 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=92)

[infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q]
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=95)


[jira] [Updated] (HIVE-19035) Vectorization: Disable exotic STRUCT field reference form

2018-03-23 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-19035:

Summary: Vectorization: Disable exotic STRUCT field reference form  (was: 
Vectorization: Disable exotic field reference form)

> Vectorization: Disable exotic STRUCT field reference form
> -
>
> Key: HIVE-19035
> URL: https://issues.apache.org/jira/browse/HIVE-19035
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-19035.01.patch
>
>
> We currently don't support exotic field references, e.g. getting a struct field 
> from an array of structs, which returns an array of the field type.  Attempting it 
> causes a ClassCastException in VectorizationContext that kills query planning.
> The Q file is input_testxpath3.q



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19019) Vectorization: When vectorized, orc_merge_incompat_schema.q throws HiveException "Not implemented yet" from VectorExpressionWriterMap

2018-03-23 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-19019:

Summary: Vectorization: When vectorized, orc_merge_incompat_schema.q throws 
HiveException "Not implemented yet" from VectorExpressionWriterMap  (was: 
Vectorization and Parquet: When vectorized, orc_merge_incompat_schema.q throws 
HiveException "Not implemented yet" from VectorExpressionWriterMap)

> Vectorization: When vectorized, orc_merge_incompat_schema.q throws 
> HiveException "Not implemented yet" from VectorExpressionWriterMap
> -
>
> Key: HIVE-19019
> URL: https://issues.apache.org/jira/browse/HIVE-19019
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-19019.01.patch, HIVE-19019.02.patch
>
>
> Adding "SET hive.vectorized.execution.enabled=true;" to 
> orc_merge_incompat_schema.q triggers this call stack:
> {noformat}
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Not implemented 
> yet
>   at 
> org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpressionWriterFactory$19.writeValue(VectorExpressionWriterFactory.java:1496)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFArgDesc.getDeferredJavaObject(VectorUDFArgDesc.java:123)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFAdaptor.setResult(VectorUDFAdaptor.java:199)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFAdaptor.evaluate(VectorUDFAdaptor.java:151)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:146)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.vectorForward(Operator.java:955) 
> ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:928) 
> ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:125)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.flushDeserializerBatch(VectorMapOperator.java:630)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.setupPartitionContextVars(VectorMapOperator.java:698)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.cleanUpInputFileChangedOp(VectorMapOperator.java:607)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1210)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:829)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:154) 
> ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54) 
> ~[hadoop-mapreduce-client-core-3.0.0-beta1.jar:?]
> {noformat}
> The complex types in VectorExpressionWriterFactory are not fully implemented.
> Also, null_cast.q, nullMap.q, and nested_column_pruning.q



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18780) Improve schema discovery For Druid Storage Handler

2018-03-23 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412390#comment-16412390
 ] 

Ashutosh Chauhan commented on HIVE-18780:
-

+1 pending tests

> Improve schema discovery For Druid Storage Handler
> --
>
> Key: HIVE-18780
> URL: https://issues.apache.org/jira/browse/HIVE-18780
> Project: Hive
>  Issue Type: Improvement
>  Components: Druid integration
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18780.11.patch, HIVE-18780.12.patch, 
> HIVE-18780.13.patch, HIVE-18780.2.patch, HIVE-18780.4.patch, 
> HIVE-18780.5.patch, HIVE-18780.6.patch, HIVE-18780.7.patch, 
> HIVE-18780.8.patch, HIVE-18780.patch, HIVE-18780.patch
>
>
> Currently, the Druid Storage adapter issues a Segment Metadata query every time 
> the query is of type Select or Scan. Worse, every input split 
> (map) will then do the same, since it uses the same SerDe; this is very 
> expensive and puts a lot of pressure on the Druid cluster. The way to fix this 
> is to derive the schema from the Calcite plan and ship it as part of the Hive 
> query context, instead of serializing the query itself.
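
For illustration, a minimal sketch of the proposed direction with hypothetical names (not the actual patch): compute the column name/type lists once from the plan and ship them with the query context, so neither the planner nor each split has to issue a segment metadata query.

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;

// Hypothetical sketch: serialize the schema derived from the plan into
// properties that every input split can read, instead of asking Druid.
public class DruidSchemaSketch {
  static Properties attachSchema(Map<String, String> columnTypesFromPlan) {
    Properties queryContext = new Properties();
    StringBuilder names = new StringBuilder();
    StringBuilder types = new StringBuilder();
    for (Map.Entry<String, String> e : columnTypesFromPlan.entrySet()) {
      if (names.length() > 0) { names.append(','); types.append(','); }
      names.append(e.getKey());
      types.append(e.getValue());
    }
    queryContext.setProperty("columns", names.toString());
    queryContext.setProperty("columns.types", types.toString());
    return queryContext;
  }

  public static void main(String[] args) {
    Map<String, String> schema = new LinkedHashMap<>();
    schema.put("__time", "timestamp");
    schema.put("page", "string");
    schema.put("added", "bigint");
    System.out.println(attachSchema(schema));
  }
}
{code}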



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19043) Vectorization: LazySimpleDeserializeRead fewer fields handling is broken for Complex Types

2018-03-23 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-19043:

Summary: Vectorization: LazySimpleDeserializeRead fewer fields handling is 
broken for Complex Types  (was: Vectorization: LazySimpleDeserializeRead fewer 
fields handling broken for Complex Types)

> Vectorization: LazySimpleDeserializeRead fewer fields handling is broken for 
> Complex Types
> --
>
> Key: HIVE-19043
> URL: https://issues.apache.org/jira/browse/HIVE-19043
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
>
> Issues were revealed by vectorizing create_struct_table.q



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19043) Vectorization: LazySimpleDeserializeRead fewer fields handling broken for Complex Types

2018-03-23 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline reassigned HIVE-19043:
---


> Vectorization: LazySimpleDeserializeRead fewer fields handling broken for 
> Complex Types
> ---
>
> Key: HIVE-19043
> URL: https://issues.apache.org/jira/browse/HIVE-19043
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
>
> Issues were revealed by vectorizing create_struct_table.q



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19024) Vectorization: Disable complex type constants for VectorUDFAdaptor

2018-03-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412383#comment-16412383
 ] 

Hive QA commented on HIVE-19024:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
13s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9788/dev-support/hive-personality.sh
 |
| git revision | master / f1d4fcf |
| Default Java | 1.8.0_111 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9788/yetus/whitespace-eol.txt 
|
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9788/yetus/patch-asflicense-problems.txt
 |
| modules | C: itests ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9788/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Vectorization: Disable complex type constants for VectorUDFAdaptor
> --
>
> Key: HIVE-19024
> URL: https://issues.apache.org/jira/browse/HIVE-19024
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-19024.01.patch
>
>
> Currently, complex type constants are not detected and cause execution 
> failures.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18780) Improve schema discovery For Druid Storage Handler

2018-03-23 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated HIVE-18780:
--
Attachment: HIVE-18780.13.patch

> Improve schema discovery For Druid Storage Handler
> --
>
> Key: HIVE-18780
> URL: https://issues.apache.org/jira/browse/HIVE-18780
> Project: Hive
>  Issue Type: Improvement
>  Components: Druid integration
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18780.11.patch, HIVE-18780.12.patch, 
> HIVE-18780.13.patch, HIVE-18780.2.patch, HIVE-18780.4.patch, 
> HIVE-18780.5.patch, HIVE-18780.6.patch, HIVE-18780.7.patch, 
> HIVE-18780.8.patch, HIVE-18780.patch, HIVE-18780.patch
>
>
> Currently, the Druid Storage adapter issues a Segment Metadata query every time 
> the query is of type Select or Scan. Worse, every input split 
> (map) will then do the same, since it uses the same SerDe; this is very 
> expensive and puts a lot of pressure on the Druid cluster. The way to fix this 
> is to derive the schema from the Calcite plan and ship it as part of the Hive 
> query context, instead of serializing the query itself.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19021) WM counters are not properly propagated from LLAP to AM

2018-03-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412368#comment-16412368
 ] 

Hive QA commented on HIVE-19021:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12915984/HIVE-19021.03.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 55 failed/errored test(s), 13337 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=92)

[infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q]
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=94)


[jira] [Commented] (HIVE-18885) DbNotificationListener has a deadlock between Java and DB locks (2.x line)

2018-03-23 Thread Alexander Kolbasov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412358#comment-16412358
 ] 

Alexander Kolbasov commented on HIVE-18885:
---

[~vihangk1] will do.

> DbNotificationListener has a deadlock between Java and DB locks (2.x line)
> --
>
> Key: HIVE-18885
> URL: https://issues.apache.org/jira/browse/HIVE-18885
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Metastore
>Affects Versions: 2.3.2
>Reporter: Alexander Kolbasov
>Assignee: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-18885.01.branch-2.patch, 
> HIVE-18885.02.branch-2.patch
>
>
> You can see the problem from looking at the code, but it actually created 
> severe problems for a real-life Hive user.
> When {{alter table}} has the {{cascade}} option it does the following:
> {code:java}
>  msdb.openTransaction()
>   ...
>   List<Partition> parts = msdb.getPartitions(dbname, name, -1);
>   for (Partition part : parts) {
>     List<FieldSchema> oldCols = part.getSd().getCols();
>     part.getSd().setCols(newt.getSd().getCols());
>     String oldPartName = Warehouse.makePartName(oldt.getPartitionKeys(), part.getValues());
>     updatePartColumnStatsForAlterColumns(msdb, part, oldPartName, part.getValues(), oldCols, part);
>     msdb.alterPartition(dbname, name, part.getValues(), part);
>   }
> {code}
> So it walks all partitions (and this may be a huge list) and does some 
> non-trivial operations in one single uber-transaction.
> When DbNotificationListener is enabled, it adds an event for each partition, 
> all while holding a row lock on the NOTIFICATION_SEQUENCE table. As a result, 
> while this is happening no other write DDL can proceed. This can sometimes 
> cause DB lock timeouts, which cause HMS-level operation retries, which make 
> things even worse.
> In one particular case this pretty much made HMS unusable.
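
To make the blocking pattern concrete, a toy sketch (illustrative only; a ReentrantLock stands in for the DB row lock on NOTIFICATION_SEQUENCE, and nothing here is Hive code):

{code:java}
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Toy model of the uber-transaction: the sequence lock is effectively held
// from the first event until the whole partition loop commits, so every
// other writer that needs the sequence is blocked for the duration.
public class UberTransactionSketch {
  private static final ReentrantLock NOTIFICATION_SEQUENCE = new ReentrantLock();

  static void alterTableCascade(List<String> partitions) {
    NOTIFICATION_SEQUENCE.lock(); // first event pins the "row lock"
    try {
      for (String part : partitions) {
        // non-trivial per-partition work plus one notification event each
        System.out.println("event for partition " + part);
      }
      // only the final commit releases the lock
    } finally {
      NOTIFICATION_SEQUENCE.unlock();
    }
  }

  public static void main(String[] args) {
    alterTableCascade(List.of("p=1", "p=2", "p=3"));
  }
}
{code}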



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18994) Handle client connections on failover

2018-03-23 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412355#comment-16412355
 ] 

Prasanth Jayachandran commented on HIVE-18994:
--

[~sershe] can you please take a look?

> Handle client connections on failover
> -
>
> Key: HIVE-18994
> URL: https://issues.apache.org/jira/browse/HIVE-18994
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-18994.1.patch, HIVE-18994.2.patch
>
>
> When leader failover happens (either automatically or manually), Tez sessions 
> are closed, but client connections are not. We need to close the client 
> connections explicitly so that the workload manager revokes all the guaranteed 
> slots and, upon reconnection, the client connects to the active HS2 instance (this 
> is to avoid clients reusing the same connection and submitting queries to the 
> passive HS2). In future, some timeout or other policies (maybe WM will run 
> everything speculatively) can be added.
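
A minimal sketch of the idea, with hypothetical types (the real patch works against HS2's session and connection machinery, not this toy interface):

{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: when leadership is lost, close every open client
// connection so WM can revoke guaranteed slots and clients reconnect to
// whichever instance is now active.
public class FailoverSketch {
  interface ClientSession { void close(); }

  static void onLeadershipLost(List<ClientSession> openSessions) {
    for (ClientSession s : new ArrayList<>(openSessions)) {
      s.close(); // forces the client to rediscover the active HS2
    }
    openSessions.clear();
  }

  public static void main(String[] args) {
    List<ClientSession> sessions = new ArrayList<>();
    sessions.add(() -> System.out.println("closing session 1"));
    sessions.add(() -> System.out.println("closing session 2"));
    onLeadershipLost(sessions);
  }
}
{code}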



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18994) Handle client connections on failover

2018-03-23 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-18994:
-
Attachment: HIVE-18994.2.patch

> Handle client connections on failover
> -
>
> Key: HIVE-18994
> URL: https://issues.apache.org/jira/browse/HIVE-18994
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-18994.1.patch, HIVE-18994.2.patch
>
>
> When leader failover happens (either automatically or manually), Tez sessions 
> are closed, but client connections are not. We need to close the client 
> connections explicitly so that the workload manager revokes all the guaranteed 
> slots and, upon reconnection, the client connects to the active HS2 instance (this 
> is to avoid clients reusing the same connection and submitting queries to the 
> passive HS2). In future, some timeout or other policies (maybe WM will run 
> everything speculatively) can be added.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18994) Handle client connections on failover

2018-03-23 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-18994:
-
Status: Patch Available  (was: Open)

> Handle client connections on failover
> -
>
> Key: HIVE-18994
> URL: https://issues.apache.org/jira/browse/HIVE-18994
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-18994.1.patch, HIVE-18994.2.patch
>
>
> When leader failover happens (either automatically or manually), Tez sessions 
> are closed, but client connections are not. We need to close the client 
> connections explicitly so that the workload manager revokes all the guaranteed 
> slots and, upon reconnection, the client connects to the active HS2 instance (this 
> is to avoid clients reusing the same connection and submitting queries to the 
> passive HS2). In future, some timeout or other policies (maybe WM will run 
> everything speculatively) can be added.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18994) Handle client connections on failover

2018-03-23 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412354#comment-16412354
 ] 

Prasanth Jayachandran commented on HIVE-18994:
--

Rebased patch. Attaching RB. 

> Handle client connections on failover
> -
>
> Key: HIVE-18994
> URL: https://issues.apache.org/jira/browse/HIVE-18994
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-18994.1.patch, HIVE-18994.2.patch
>
>
> When leader failover happens (either automatically or manually), Tez sessions 
> are closed, but client connections are not. We need to close the client 
> connections explicitly so that the workload manager revokes all the guaranteed 
> slots and, upon reconnection, the client connects to the active HS2 instance (this 
> is to avoid clients reusing the same connection and submitting queries to the 
> passive HS2). In future, some timeout or other policies (maybe WM will run 
> everything speculatively) can be added.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18982) Provide a CLI option to manually trigger failover

2018-03-23 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-18982:
-
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Test failures are already happening in master. This patch mostly affects HS2, 
and no test failures are related to it. Committed patch to master. Thanks for the 
reviews!

> Provide a CLI option to manually trigger failover
> -
>
> Key: HIVE-18982
> URL: https://issues.apache.org/jira/browse/HIVE-18982
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18982.1.patch, HIVE-18982.2.patch, 
> HIVE-18982.3.patch, HIVE-18982.4.patch, HIVE-18982.5.patch
>
>
> HIVE-18281 added active-passive HA. There might be an administrative need to 
> trigger a manual failover of the active HS2 server. Add a command-line tool to 
> view the list of all HS2 instances and trigger a manual failover (only under 
> force mode). The clients currently connected to the active HS2 will be closed. 
> In future, more options for handling existing client connections can be added 
> via configs/options (like wait until timeout, wait until current sessions are 
> closed, etc.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19021) WM counters are not properly propagated from LLAP to AM

2018-03-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412341#comment-16412341
 ] 

Hive QA commented on HIVE-19021:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
13s{color} | {color:red} llap-server: The patch generated 2 new + 75 unchanged 
- 0 fixed = 77 total (was 75) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
13s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9787/dev-support/hive-personality.sh
 |
| git revision | master / 3ea96ee |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9787/yetus/diff-checkstyle-llap-server.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9787/yetus/patch-asflicense-problems.txt
 |
| modules | C: llap-server U: llap-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9787/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> WM counters are not properly propagated from LLAP to AM
> ---
>
> Key: HIVE-19021
> URL: https://issues.apache.org/jira/browse/HIVE-19021
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19021.01.patch, HIVE-19021.02.patch, 
> HIVE-19021.03.patch, HIVE-19021.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19042) set MALLOC_ARENA_MAX for LLAP

2018-03-23 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19042:

Status: Patch Available  (was: Open)

> set MALLOC_ARENA_MAX for LLAP
> -
>
> Key: HIVE-19042
> URL: https://issues.apache.org/jira/browse/HIVE-19042
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19042.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19042) set MALLOC_ARENA_MAX for LLAP

2018-03-23 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412334#comment-16412334
 ] 

Sergey Shelukhin commented on HIVE-19042:
-

[~gopalv]  does this make sense?

> set MALLOC_ARENA_MAX for LLAP
> -
>
> Key: HIVE-19042
> URL: https://issues.apache.org/jira/browse/HIVE-19042
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19042.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19042) set MALLOC_ARENA_MAX for LLAP

2018-03-23 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19042:

Attachment: HIVE-19042.patch

> set MALLOC_ARENA_MAX for LLAP
> -
>
> Key: HIVE-19042
> URL: https://issues.apache.org/jira/browse/HIVE-19042
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19042.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19042) set MALLOC_ARENA_MAX for LLAP

2018-03-23 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HIVE-19042:
---


> set MALLOC_ARENA_MAX for LLAP
> -
>
> Key: HIVE-19042
> URL: https://issues.apache.org/jira/browse/HIVE-19042
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18885) DbNotificationListener has a deadlock between Java and DB locks (2.x line)

2018-03-23 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-18885:
---
Attachment: HIVE-18885.02.branch-2.patch

> DbNotificationListener has a deadlock between Java and DB locks (2.x line)
> --
>
> Key: HIVE-18885
> URL: https://issues.apache.org/jira/browse/HIVE-18885
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Metastore
>Affects Versions: 2.3.2
>Reporter: Alexander Kolbasov
>Assignee: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-18885.01.branch-2.patch, 
> HIVE-18885.02.branch-2.patch
>
>
> You can see the problem from looking at the code, but it actually created 
> severe problems for a real-life Hive user.
> When {{alter table}} has the {{cascade}} option it does the following:
> {code:java}
>  msdb.openTransaction()
>   ...
>   List<Partition> parts = msdb.getPartitions(dbname, name, -1);
>   for (Partition part : parts) {
>     List<FieldSchema> oldCols = part.getSd().getCols();
>     part.getSd().setCols(newt.getSd().getCols());
>     String oldPartName = Warehouse.makePartName(oldt.getPartitionKeys(), part.getValues());
>     updatePartColumnStatsForAlterColumns(msdb, part, oldPartName, part.getValues(), oldCols, part);
>     msdb.alterPartition(dbname, name, part.getValues(), part);
>   }
> {code}
> So it walks all partitions (and this may be a huge list) and does some 
> non-trivial operations in one single uber-transaction.
> When DbNotificationListener is enabled, it adds an event for each partition, 
> all while holding a row lock on the NOTIFICATION_SEQUENCE table. As a result, 
> while this is happening no other write DDL can proceed. This can sometimes 
> cause DB lock timeouts, which cause HMS-level operation retries, which make 
> things even worse.
> In one particular case this pretty much made HMS unusable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18991) Drop database cascade doesn't work with materialized views

2018-03-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412316#comment-16412316
 ] 

Hive QA commented on HIVE-18991:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12915739/HIVE-18991.01.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 89 failed/errored test(s), 13425 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=92)

[infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q]
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=94)


[jira] [Assigned] (HIVE-19041) Thrift deserialization of Partition objects should intern fields

2018-03-23 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar reassigned HIVE-19041:
--


> Thrift deserialization of Partition objects should intern fields
> 
>
> Key: HIVE-19041
> URL: https://issues.apache.org/jira/browse/HIVE-19041
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 2.3.2, 3.0.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Major
>
> When a client is creating a large number of partitions, the thrift objects are 
> deserialized into Partition objects. The read method of these objects does 
> not intern the inputformat, location, and outputformat fields, which causes a 
> large number of duplicate Strings in the HMS memory. We should intern these 
> fields during deserialization to reduce memory pressure.
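
A self-contained demonstration of why interning helps here (plain JDK behavior, independent of the Thrift-generated read methods):

{code:java}
// String.intern() returns one canonical copy per distinct value, so
// thousands of Partition objects holding the same inputformat/location
// strings can share a single instance instead of duplicating it.
public class InternSketch {
  public static void main(String[] args) {
    String a = new String("org.apache.hadoop.mapred.TextInputFormat");
    String b = new String("org.apache.hadoop.mapred.TextInputFormat");
    System.out.println(a == b);                   // false: two heap copies
    System.out.println(a.intern() == b.intern()); // true: one shared copy
  }
}
{code}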



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18999) Filter operator does not work for List

2018-03-23 Thread Steve Yeom (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Yeom reassigned HIVE-18999:
-

Assignee: Steve Yeom

> Filter operator does not work for List
> --
>
> Key: HIVE-18999
> URL: https://issues.apache.org/jira/browse/HIVE-18999
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 3.0.0
>Reporter: Steve Yeom
>Assignee: Steve Yeom
>Priority: Major
>
> {code:sql}
> create table table1(col0 int, col1 bigint, col2 string, col3 bigint, col4 
> bigint);
> insert into table1 values (1, 1, 'ccl',2014, 11);
> insert into table1 values (1, 1, 'ccl',2015, 11);
> insert into table1 values (1, 1, 'ccl',2014, 11);
> insert into table1 values (1, 1, 'ccl',2013, 11);
> -- INCORRECT
> SELECT COUNT(t1.col0) from table1 t1 where struct(col3, col4) in 
> (struct(2014,11));
> -- CORRECT
> SELECT COUNT(t1.col0) from table1 t1 where struct(col3, col4) in 
> (struct('2014','11'));
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18755) Modifications to the metastore for catalogs

2018-03-23 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412282#comment-16412282
 ] 

Thejas M Nair commented on HIVE-18755:
--

Along the lines of what Alexander said, I think it makes sense to create a 
GetCatalogReq object and a DropCatalogReq object for use in the get_catalog and 
drop_catalog methods respectively. You might have arguments that are specific 
to drop_catalog in future (e.g. "cascade=true").

Similarly, even for create_catalog, having a CreateCatalogReq object would help 
to keep it future-proof. Thinking along those lines, I think it's simpler and 
more consistent to use *Req and *Response objects as far as possible.
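
A minimal sketch of the request-object pattern being suggested (a hypothetical Java stand-in for the Thrift struct; field names are illustrative):

{code:java}
// Adding a field later (e.g. cascade) extends the request object without
// changing the drop_catalog method signature, keeping the API future-proof.
public class DropCatalogReq {
  private final String catalogName;
  private boolean cascade; // added later, no signature change needed

  public DropCatalogReq(String catalogName) { this.catalogName = catalogName; }
  public String getCatalogName() { return catalogName; }
  public boolean isCascade() { return cascade; }
  public void setCascade(boolean cascade) { this.cascade = cascade; }

  public static void main(String[] args) {
    DropCatalogReq req = new DropCatalogReq("hive");
    req.setCascade(true);
    System.out.println(req.getCatalogName() + " cascade=" + req.isCascade());
  }
}
{code}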





> Modifications to the metastore for catalogs
> ---
>
> Key: HIVE-18755
> URL: https://issues.apache.org/jira/browse/HIVE-18755
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18755.2.patch, HIVE-18755.nothrift, HIVE-18755.patch
>
>
> Step 1 of adding catalogs is to add support in the metastore.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18863) trunc() calls itself trunk() in an error message

2018-03-23 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-18863:
---
   Resolution: Fixed
Fix Version/s: 2.4.0
   3.0.0
   Status: Resolved  (was: Patch Available)

Patch merged in master and branch-2. Thanks for your contribution [~bharos92]

> trunc() calls itself trunk() in an error message
> 
>
> Key: HIVE-18863
> URL: https://issues.apache.org/jira/browse/HIVE-18863
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Reporter: Tim Armstrong
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0, 2.4.0
>
> Attachments: HIVE-18863.1.patch, HIVE-18863.2.patch
>
>
> {noformat}
> > select  trunc('millennium', cast('2001-02-16 20:38:40' as timestamp))
> FAILED: SemanticException Line 0:-1 Argument type mismatch ''2001-02-16 
> 20:38:40'': trunk() only takes STRING/CHAR/VARCHAR types as second argument, 
> got TIMESTAMP
> {noformat}
> I saw this on a derivative of Hive 1.1.0 (cdh5.15.0), but the string still 
> seems to be present on master:
> https://github.com/apache/hive/blob/6d890faf22fd1ede3658a5eed097476eab3c67e9/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFTrunc.java#L262



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18991) Drop database cascade doesn't work with materialized views

2018-03-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412268#comment-16412268
 ] 

Hive QA commented on HIVE-18991:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
25s{color} | {color:red} standalone-metastore: The patch generated 3 new + 1134 
unchanged - 8 fixed = 1137 total (was 1142) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
13s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9786/dev-support/hive-personality.sh
 |
| git revision | master / 51104e3 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9786/yetus/diff-checkstyle-standalone-metastore.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9786/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql standalone-metastore U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9786/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Drop database cascade doesn't work with materialized views
> --
>
> Key: HIVE-18991
> URL: https://issues.apache.org/jira/browse/HIVE-18991
> Project: Hive
>  Issue Type: Bug
>  Components: Materialized views, Metastore
>Affects Versions: 3.0.0
>Reporter: Alan Gates
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-18991.01.patch, HIVE-18991.patch
>
>
> Create a database, add a table and then a materialized view that depends on 
> the table.  Then drop the database with cascade set.  Sometimes this will 
> fail because when HiveMetaStore.drop_database_core goes to drop all of the 
> tables it may drop the base table before the materialized view, which will 
> cause an integrity constraint violation in the RDBMS.  To resolve this, that 
> method should be changed to fetch and drop materialized views before tables.
> cc [~jcamachorodriguez]
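
A toy sketch of the proposed ordering (illustrative names only; the real change lives in HiveMetaStore.drop_database_core):

{code:java}
import java.util.List;

// Drop materialized views first, then base tables, so the RDBMS never sees
// a base table disappear while a dependent materialized view still exists.
public class DropOrderSketch {
  static void dropDatabaseCascade(List<String> materializedViews, List<String> tables) {
    for (String mv : materializedViews) {
      System.out.println("dropping materialized view " + mv);
    }
    for (String t : tables) {
      System.out.println("dropping table " + t);
    }
  }

  public static void main(String[] args) {
    dropDatabaseCascade(List.of("mv_sales_daily"), List.of("sales"));
  }
}
{code}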



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18885) DbNotificationListener has a deadlock between Java and DB locks (2.x line)

2018-03-23 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-18885:
-
Summary: DbNotificationListener has a deadlock between Java and DB locks 
(2.x line)  (was: DbNotificationListener has a deadlock between Java and DB 
locks)

> DbNotificationListener has a deadlock between Java and DB locks (2.x line)
> --
>
> Key: HIVE-18885
> URL: https://issues.apache.org/jira/browse/HIVE-18885
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Metastore
>Affects Versions: 2.3.2
>Reporter: Alexander Kolbasov
>Assignee: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-18885.01.branch-2.patch
>
>
> You can see the problem from looking at the code, but it actually created 
> severe problems for real life Hive user.
> When {{alter table}} has {{cascade}} option it does the following:
> {code:java}
>  msdb.openTransaction()
>   ...
>   List parts = msdb.getPartitions(dbname, name, -1);
>   for (Partition part : parts) {
> List oldCols = part.getSd().getCols();
> part.getSd().setCols(newt.getSd().getCols());
> String oldPartName = 
> Warehouse.makePartName(oldt.getPartitionKeys(), part.getValues());
> updatePartColumnStatsForAlterColumns(msdb, part, oldPartName, 
> part.getValues(), oldCols, part);
> msdb.alterPartition(dbname, name, part.getValues(), part);
>   }
>  {code}
> So it walks all partitions (and this may be a huge list) and does some 
> non-trivial operations in one single uber-transaction.
> When DbNotificationListener is enabled, it adds an event for each partition, 
> all while holding a row lock on the NOTIFICATION_SEQUENCE table. As a result, 
> while this is happening no other write DDL can proceed. This can sometimes 
> cause DB lock timeouts, which cause HMS-level operation retries, which make 
> things even worse.
> In one particular case this pretty much made HMS unusable.
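
One direction for a fix, sketched under the assumption that the cascade does not 
have to be atomic: commit the per-partition updates in small batches, so the 
NOTIFICATION_SEQUENCE row lock is held briefly per batch instead of for the whole 
walk. The code mirrors the snippet above; BATCH is an illustrative value and the 
column-stats update is elided.

{code:java}
final int BATCH = 100;  // illustrative batch size
List<Partition> parts = msdb.getPartitions(dbname, name, -1);
for (int start = 0; start < parts.size(); start += BATCH) {
  msdb.openTransaction();
  // Each commit releases the NOTIFICATION_SEQUENCE row lock taken by
  // DbNotificationListener, so other write DDL can interleave.
  for (Partition part : parts.subList(start, Math.min(start + BATCH, parts.size()))) {
    part.getSd().setCols(newt.getSd().getCols());
    msdb.alterPartition(dbname, name, part.getValues(), part);
  }
  msdb.commitTransaction();
}
{code}

This trades the atomicity of the cascade for shorter lock hold times; whether 
that trade-off is acceptable is part of the design question here.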



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18885) DbNotificationListener has a deadlock between Java and DB locks

2018-03-23 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412263#comment-16412263
 ] 

Vihang Karajgaonkar commented on HIVE-18885:


[~akolb] Can you take a look at the patch?

> DbNotificationListener has a deadlock between Java and DB locks
> ---
>
> Key: HIVE-18885
> URL: https://issues.apache.org/jira/browse/HIVE-18885
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Metastore
>Affects Versions: 2.3.2
>Reporter: Alexander Kolbasov
>Assignee: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-18885.01.branch-2.patch
>
>
> You can see the problem from looking at the code, but it actually created 
> severe problems for a real-life Hive user.
> When {{alter table}} has the {{cascade}} option, it does the following:
> {code:java}
>  msdb.openTransaction()
>   ...
>   List<Partition> parts = msdb.getPartitions(dbname, name, -1);
>   for (Partition part : parts) {
> List<FieldSchema> oldCols = part.getSd().getCols();
> part.getSd().setCols(newt.getSd().getCols());
> String oldPartName = 
> Warehouse.makePartName(oldt.getPartitionKeys(), part.getValues());
> updatePartColumnStatsForAlterColumns(msdb, part, oldPartName, 
> part.getValues(), oldCols, part);
> msdb.alterPartition(dbname, name, part.getValues(), part);
>   }
>  {code}
> So it walks all partitions (and this may be a huge list) and does some 
> non-trivial operations in one single uber-transaction.
> When DbNotificationListener is enabled, it adds an event for each partition, 
> all while holding a row lock on the NOTIFICATION_SEQUENCE table. As a result, 
> while this is happening no other write DDL can proceed. This can sometimes 
> cause DB lock timeouts, which cause HMS-level operation retries, which make 
> things even worse.
> In one particular case this pretty much made HMS unusable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18685) Add catalogs to metastore

2018-03-23 Thread Alexander Kolbasov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412230#comment-16412230
 ] 

Alexander Kolbasov commented on HIVE-18685:
---

[~alangates] Since you changed the way catalogs are encoded in dbnames, can 
you update the design doc here to describe this schema?

> Add catalogs to metastore
> -
>
> Key: HIVE-18685
> URL: https://issues.apache.org/jira/browse/HIVE-18685
> Project: Hive
>  Issue Type: New Feature
>  Components: Metastore, Parser, Security, SQL
>Affects Versions: 3.0.0
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Major
> Attachments: HMS Catalog Design Doc.pdf
>
>
> SQL supports two levels of namespaces, called in the spec catalogs and 
> schemas (with schema being equivalent to Hive's database).  I propose to add 
> the upper level of catalog.  The attached design doc covers the use cases, 
> requirements, and brief discussion of how it will be implemented in a 
> backwards compatible way.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19027) Make materializations invalidation cache work with multiple active remote metastores

2018-03-23 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-19027:
---
Attachment: (was: HIVE-19027.patch)

> Make materializations invalidation cache work with multiple active remote 
> metastores
> 
>
> Key: HIVE-19027
> URL: https://issues.apache.org/jira/browse/HIVE-19027
> Project: Hive
>  Issue Type: Improvement
>  Components: Materialized views
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-19027.01.patch
>
>
> The main points:
>  - Only MVs stored in transactional tables can have a time window value of 0. 
> Those are the only MVs that can be guaranteed to not be outdated when a query 
> is executed; if we use custom storage handlers to store the materialized 
> view, we cannot make any promises.
>  - For MVs that +cannot be outdated+, we do not check the metastore. Instead, 
> comparison is based on valid write id lists.
>  - For MVs that +can be outdated+, we still rely on the invalidation cache.
>  ** The window for valid outdated MVs can be specified in intervals of 1 
> minute (less than that, it is difficult to have any guarantees about whether 
> the MV is actually outdated by less than a minute or not).
>  ** The async loading is done every interval / 2 (or probably better, we can 
> make it configurable).
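
A rough sketch of the decision logic in the points above; the types and helper 
calls are illustrative placeholders, not Hive's actual classes:

{code:java}
// Sketch: MVs on transactional tables are validated purely by write id
// lists; everything else falls back to the invalidation cache with a
// staleness window. All names below are illustrative.
boolean isUsable(MaterializedView mv, long windowMillis) {
  if (mv.isFullyTransactional()) {
    // "cannot be outdated" case: no metastore round trip; compare the
    // write id list captured at materialization time with the current one.
    return mv.getMaterializationWriteIds().equals(currentValidWriteIds(mv));
  }
  // "can be outdated" case: consult the invalidation cache, allowing the
  // configured window (multiples of one minute) of staleness.
  long invalidationTime = invalidationCache.getInvalidationTime(mv);
  return invalidationTime == 0
      || System.currentTimeMillis() - invalidationTime <= windowMillis;
}
{code}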



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19027) Make materializations invalidation cache work with multiple active remote metastores

2018-03-23 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-19027:
---
Attachment: HIVE-19027.01.patch

> Make materializations invalidation cache work with multiple active remote 
> metastores
> 
>
> Key: HIVE-19027
> URL: https://issues.apache.org/jira/browse/HIVE-19027
> Project: Hive
>  Issue Type: Improvement
>  Components: Materialized views
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-19027.01.patch
>
>
> The main points:
>  - Only MVs stored in transactional tables can have a time window value of 0. 
> Those are the only MVs that can be guaranteed to not be outdated when a query 
> is executed; if we use custom storage handlers to store the materialized 
> view, we cannot make any promises.
>  - For MVs that +cannot be outdated+, we do not check the metastore. Instead, 
> comparison is based on valid write id lists.
>  - For MVs that +can be outdated+, we still rely on the invalidation cache.
>  ** The window for valid outdated MVs can be specified in intervals of 1 
> minute (less than that, it is difficult to have any guarantees about whether 
> the MV is actually outdated by less than a minute or not).
>  ** The async loading is done every interval / 2 (or probably better, we can 
> make it configurable).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19027) Make materializations invalidation cache work with multiple active remote metastores

2018-03-23 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-19027:
---
Attachment: HIVE-19027.patch

> Make materializations invalidation cache work with multiple active remote 
> metastores
> 
>
> Key: HIVE-19027
> URL: https://issues.apache.org/jira/browse/HIVE-19027
> Project: Hive
>  Issue Type: Improvement
>  Components: Materialized views
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-19027.patch
>
>
> The main points:
>  - Only MVs stored in transactional tables can have a time window value of 0. 
> Those are the only MVs that can be guaranteed to not be outdated when a query 
> is executed; if we use custom storage handlers to store the materialized 
> view, we cannot make any promises.
>  - For MVs that +cannot be outdated+, we do not check the metastore. Instead, 
> comparison is based on valid write id lists.
>  - For MVs that +can be outdated+, we still rely on the invalidation cache.
>  ** The window for valid outdated MVs can be specified in intervals of 1 
> minute (less than that, it is difficult to have any guarantees about whether 
> the MV is actually outdated by less than a minute or not).
>  ** The async loading is done every interval / 2 (or probably better, we can 
> make it configurable).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HIVE-19027) Make materializations invalidation cache work with multiple active remote metastores

2018-03-23 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-19027 started by Jesus Camacho Rodriguez.
--
> Make materializations invalidation cache work with multiple active remote 
> metastores
> 
>
> Key: HIVE-19027
> URL: https://issues.apache.org/jira/browse/HIVE-19027
> Project: Hive
>  Issue Type: Improvement
>  Components: Materialized views
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>
> The main points:
>  - Only MVs stored in transactional tables can have a time window value of 0. 
> Those are the only MVs that can be guaranteed to not be outdated when a query 
> is executed; if we use custom storage handlers to store the materialized 
> view, we cannot make any promises.
>  - For MVs that +cannot be outdated+, we do not check the metastore. Instead, 
> comparison is based on valid write id lists.
>  - For MVs that +can be outdated+, we still rely on the invalidation cache.
>  ** The window for valid outdated MVs can be specified in intervals of 1 
> minute (less than that, it is difficult to have any guarantees about whether 
> the MV is actually outdated by less than a minute or not).
>  ** The async loading is done every interval / 2 (or probably better, we can 
> make it configurable).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19027) Make materializations invalidation cache work with multiple active remote metastores

2018-03-23 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-19027:
---
Status: Patch Available  (was: In Progress)

> Make materializations invalidation cache work with multiple active remote 
> metastores
> 
>
> Key: HIVE-19027
> URL: https://issues.apache.org/jira/browse/HIVE-19027
> Project: Hive
>  Issue Type: Improvement
>  Components: Materialized views
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>
> The main points:
>  - Only MVs stored in transactional tables can have a time window value of 0. 
> Those are the only MVs that can be guaranteed to not be outdated when a query 
> is executed; if we use custom storage handlers to store the materialized 
> view, we cannot make any promises.
>  - For MVs that +cannot be outdated+, we do not check the metastore. Instead, 
> comparison is based on valid write id lists.
>  - For MVs that +can be outdated+, we still rely on the invalidation cache.
>  ** The window for valid outdated MVs can be specified in intervals of 1 
> minute (less than that, it is difficult to have any guarantees about whether 
> the MV is actually outdated by less than a minute or not).
>  ** The async loading is done every interval / 2 (or probably better, we can 
> make it configurable).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-18971) add HS2 WM metrics for use in Grafana and such

2018-03-23 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412162#comment-16412162
 ] 

Sergey Shelukhin edited comment on HIVE-18971 at 3/23/18 9:58 PM:
--

Hmm... I'm actually not sure I can easily access those right now, as I'm running a 
custom-built HS2, so it's probably not aggregating anywhere. But it should be 
the same logic, right?
In codahale it looks like this:
{noformat}
 {
"name" : "metrics:name=WM_llap_numExecutors",
"modelerType" : "com.codahale.metrics.JmxReporter$JmxGauge",
"Value" : 72
 {noformat}

I'll test separately after committing and deploying on Ambari, in the process of 
making dashboards.


was (Author: sershe):
Hmm... I'm not sure actually I can easily access those right now as I'm running 
custom built HS2, so it's probably not aggregating anywhere.
In codahale it looks like this:
{noformat}
 {
"name" : "metrics:name=WM_llap_numExecutors",
"modelerType" : "com.codahale.metrics.JmxReporter$JmxGauge",
"Value" : 72
 {noformat}

I'll test separately after committing and deploying on Ambari, in process of 
making dashboards.

> add HS2 WM metrics for use in Grafana and such
> --
>
> Key: HIVE-18971
> URL: https://issues.apache.org/jira/browse/HIVE-18971
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18971.01.patch, HIVE-18971.patch
>
>
> HS2 should have metrics added per pool, tagged accordingly. Not clear if HS2 
> even sets up metrics right now...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18971) add HS2 WM metrics for use in Grafana and such

2018-03-23 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412162#comment-16412162
 ] 

Sergey Shelukhin commented on HIVE-18971:
-

Hmm... I'm actually not sure I can easily access those right now, as I'm running a 
custom-built HS2, so it's probably not aggregating anywhere.
In codahale it looks like this:
{noformat}
 {
"name" : "metrics:name=WM_llap_numExecutors",
"modelerType" : "com.codahale.metrics.JmxReporter$JmxGauge",
"Value" : 72
 {noformat}

I'll test separately after committing and deploying on Ambari, in the process of 
making dashboards.

> add HS2 WM metrics for use in Grafana and such
> --
>
> Key: HIVE-18971
> URL: https://issues.apache.org/jira/browse/HIVE-18971
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18971.01.patch, HIVE-18971.patch
>
>
> HS2 should have metrics added per pool, tagged accordingly. Not clear if HS2 
> even sets up metrics right now...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18928) HS2: Perflogger has a race condition

2018-03-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412160#comment-16412160
 ] 

Hive QA commented on HIVE-18928:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12915737/HIVE-18928.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 28 failed/errored test(s), 13418 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=92)

[infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q]
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=95)


[jira] [Updated] (HIVE-18533) Add option to use InProcessLauncher to submit spark jobs

2018-03-23 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-18533:

Attachment: HIVE-18533.5.patch

> Add option to use InProcessLauncher to submit spark jobs
> 
>
> Key: HIVE-18533
> URL: https://issues.apache.org/jira/browse/HIVE-18533
> Project: Hive
>  Issue Type: Improvement
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18533.1.patch, HIVE-18533.2.patch, 
> HIVE-18533.3.patch, HIVE-18533.4.patch, HIVE-18533.5.patch
>
>
> See discussion in HIVE-16484 for details.
> I think this will help with reducing the amount of time it takes to open a 
> HoS session + debuggability (no need to launch a separate process to run a Spark 
> app).
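
For context, a minimal sketch of what using Spark's {{InProcessLauncher}} 
(available since Spark 2.3) could look like; the resource and main-class values 
are placeholders:

{code:java}
import org.apache.spark.launcher.InProcessLauncher;
import org.apache.spark.launcher.SparkAppHandle;

// Sketch: start the Spark application inside the current JVM instead of
// forking a bin/spark-submit child process.
static SparkAppHandle launchInProcess() throws Exception {
  return new InProcessLauncher()
      .setMaster("yarn")
      .setAppResource("/path/to/app.jar")        // placeholder
      .setMainClass("com.example.RemoteDriver")  // placeholder
      .startApplication(new SparkAppHandle.Listener() {
        @Override public void stateChanged(SparkAppHandle handle) {
          System.out.println("Spark app state: " + handle.getState());
        }
        @Override public void infoChanged(SparkAppHandle handle) { }
      });
}
{code}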



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19014) utilize YARN-8028 (queue ACL check) in Hive Tez session pool

2018-03-23 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19014:

Attachment: HIVE-19014.01.patch

> utilize YARN-8028 (queue ACL check) in Hive Tez session pool
> 
>
> Key: HIVE-19014
> URL: https://issues.apache.org/jira/browse/HIVE-19014
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19014.01.patch, HIVE-19014.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18755) Modifications to the metastore for catalogs

2018-03-23 Thread Alexander Kolbasov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412133#comment-16412133
 ] 

Alexander Kolbasov commented on HIVE-18755:
---

Thanks for the clarification - it isn't immediately clear which structures are 
old and which are new. Any reason not to be consistent and make the catalog always 
optional?

Your reasoning for CatalogName makes sense, but what do you think about having a 
request structure instead? (A sketch of what that could look like follows after 
the questions below.)

Some more questions.

# Your current API assumes that catalogs must be created before they can be 
used. Is there any value in auto-creating them on first use, or is this too 
complicated?
# I don't get the argument about Location - what does it mean, 'place where data 
in the catalog will be stored'? Different data may be stored in different 
places. Even in the case of Hive, every Database has its own location, so what 
does it mean to have a catalog location? Is it some kind of default for 
Database locations when they are not specified? Why not just have Parameters 
instead (which may be way more useful); if someone wants to store a Location 
parameter, it is just another parameter. Or do you envision HMS actually 
interpreting this Location field in some form?
# create_catalog() has Catalog as the only argument. What if later you want to 
change this to take more arguments? Wouldn't it be better to have a 
CreateCatalogRequest?
# Why does get_catalogs return a list of strings rather than a list of catalogs? 
At the minimum the name is misleading. Also, would it make sense to add a 
Request structure there, in case you later want to add a namespace that all 
catalogs belong to?
# drop_catalog - should it get a request structure as an arg? What happens with 
all the things inside the catalog when the catalog is dropped? What are the 
general semantics?
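
For illustration, a hypothetical shape of such a request structure (this is not 
the actual metastore API):

{code:java}
// Hypothetical: wrapping the argument in a request object means
// create_catalog(CreateCatalogRequest) can grow new optional fields later
// without breaking the signature, unlike create_catalog(Catalog).
public class CreateCatalogRequest {
  private final Catalog catalog;  // the only required field today
  private boolean ifNotExists;    // example of a field added later

  public CreateCatalogRequest(Catalog catalog) { this.catalog = catalog; }
  public Catalog getCatalog() { return catalog; }
  public void setIfNotExists(boolean ifNotExists) { this.ifNotExists = ifNotExists; }
  public boolean isIfNotExists() { return ifNotExists; }
}
{code}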

> Modifications to the metastore for catalogs
> ---
>
> Key: HIVE-18755
> URL: https://issues.apache.org/jira/browse/HIVE-18755
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18755.2.patch, HIVE-18755.nothrift, HIVE-18755.patch
>
>
> Step 1 of adding catalogs is to add support in the metastore.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19014) utilize YARN-8028 (queue ACL check) in Hive Tez session pool

2018-03-23 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19014:

Attachment: (was: HIVE-19014.01.patch)

> utilize YARN-8028 (queue ACL check) in Hive Tez session pool
> 
>
> Key: HIVE-19014
> URL: https://issues.apache.org/jira/browse/HIVE-19014
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19014.01.patch, HIVE-19014.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18727) Update GenericUDFEnforceNotNullConstraint to throw an ERROR instead of Exception on failure

2018-03-23 Thread Kryvenko Igor (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412127#comment-16412127
 ] 

Kryvenko Igor commented on HIVE-18727:
--

[~vgarg] Done. As far as I can see, the job for this patch is in the queue.

> Update GenericUDFEnforceNotNullConstraint to throw an ERROR instead of 
> Exception on failure
> ---
>
> Key: HIVE-18727
> URL: https://issues.apache.org/jira/browse/HIVE-18727
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vineet Garg
>Assignee: Kryvenko Igor
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18727.02.patch, HIVE-18727.03.patch, 
> HIVE-18727.patch
>
>
> Throwing an error (rather than an exception) makes TezProcessor stop retrying 
> the task. Since this is a NOT NULL constraint violation, we don't want 
> TezProcessor to keep retrying on failure.
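
As a sketch of the intended behavior (the class name and message are 
illustrative): a {{java.lang.Error}} subclass propagating out of the UDF is 
treated as fatal, whereas an exception leads to a task retry.

{code:java}
// Illustrative Error subclass; an Error is not retried by the task framework.
public class DataConstraintViolationError extends Error {
  public DataConstraintViolationError(String message) { super(message); }
}

// Inside the UDF's evaluate(), sketched:
if (constraintValue == null) {
  throw new DataConstraintViolationError(
      "NOT NULL constraint violated: cannot write NULL to a NOT NULL column");
}
{code}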



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19014) utilize YARN-8028 (queue ACL check) in Hive Tez session pool

2018-03-23 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19014:

Attachment: HIVE-19014.01.patch

> utilize YARN-8028 (queue ACL check) in Hive Tez session pool
> 
>
> Key: HIVE-19014
> URL: https://issues.apache.org/jira/browse/HIVE-19014
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19014.01.patch, HIVE-19014.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19014) utilize YARN-8028 (queue ACL check) in Hive Tez session pool

2018-03-23 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19014:

Attachment: (was: HIVE-19014.01.patch)

> utilize YARN-8028 (queue ACL check) in Hive Tez session pool
> 
>
> Key: HIVE-19014
> URL: https://issues.apache.org/jira/browse/HIVE-19014
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19014.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19014) utilize YARN-8028 (queue ACL check) in Hive Tez session pool

2018-03-23 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19014:

Release Note: This feature will only work with YARN 3.2+; there's no 
compile-time dependency thanks to the REST API usage, so Hive may release this 
before YARN 3.2 is even available.

> utilize YARN-8028 (queue ACL check) in Hive Tez session pool
> 
>
> Key: HIVE-19014
> URL: https://issues.apache.org/jira/browse/HIVE-19014
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19014.01.patch, HIVE-19014.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19014) utilize YARN-8028 (queue ACL check) in Hive Tez session pool

2018-03-23 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412123#comment-16412123
 ] 

Sergey Shelukhin commented on HIVE-19014:
-

Small update to use the proper YARN config setting and also to use the Hadoop 
secure URL for the secure case.
It still doesn't support SSL, but it looks like the RM might not use SSL even 
when secure, so it may be fine. Need to test on a cluster.
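
A rough sketch of such a dependency-free REST probe; the endpoint path, query 
parameter, and response handling below are assumptions that need to be checked 
against the actual YARN-8028 API:

{code:java}
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: probe the RM web service for queue access. Only HTTP is involved,
// which is why no YARN compile-time dependency is needed.
static boolean canAccessQueue(String rmWebUrl, String queue, String user)
    throws IOException {
  URL url = new URL(rmWebUrl + "/ws/v1/cluster/queues/" + queue
      + "/access?user=" + user);  // hypothetical endpoint path
  HttpURLConnection conn = (HttpURLConnection) url.openConnection();
  try {
    // Simplification: a real check would parse the JSON response body.
    return conn.getResponseCode() == HttpURLConnection.HTTP_OK;
  } finally {
    conn.disconnect();
  }
}
{code}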

> utilize YARN-8028 (queue ACL check) in Hive Tez session pool
> 
>
> Key: HIVE-19014
> URL: https://issues.apache.org/jira/browse/HIVE-19014
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19014.01.patch, HIVE-19014.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19014) utilize YARN-8028 (queue ACL check) in Hive Tez session pool

2018-03-23 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19014:

Attachment: HIVE-19014.01.patch

> utilize YARN-8028 (queue ACL check) in Hive Tez session pool
> 
>
> Key: HIVE-19014
> URL: https://issues.apache.org/jira/browse/HIVE-19014
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19014.01.patch, HIVE-19014.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18727) Update GenericUDFEnforceNotNullConstraint to throw an ERROR instead of Exception on failure

2018-03-23 Thread Kryvenko Igor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kryvenko Igor updated HIVE-18727:
-
Attachment: HIVE-18727.03.patch

> Update GenericUDFEnforceNotNullConstraint to throw an ERROR instead of 
> Exception on failure
> ---
>
> Key: HIVE-18727
> URL: https://issues.apache.org/jira/browse/HIVE-18727
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vineet Garg
>Assignee: Kryvenko Igor
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18727.02.patch, HIVE-18727.03.patch, 
> HIVE-18727.patch
>
>
> Throwing an error (rather than an exception) makes TezProcessor stop retrying 
> the task. Since this is a NOT NULL constraint violation, we don't want 
> TezProcessor to keep retrying on failure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18937) LLAP: management API to dump cache on one node

2018-03-23 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412101#comment-16412101
 ] 

Prasanth Jayachandran commented on HIVE-18937:
--

Yeah, kind of (one node vs. all nodes). I will move the other one here.

> LLAP: management API to dump cache on one node
> --
>
> Key: HIVE-18937
> URL: https://issues.apache.org/jira/browse/HIVE-18937
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18937) LLAP: management API to dump cache on one node

2018-03-23 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412093#comment-16412093
 ] 

Sergey Shelukhin commented on HIVE-18937:
-

[~prasanth_j] dup of your JIRA?

> LLAP: management API to dump cache on one node
> --
>
> Key: HIVE-18937
> URL: https://issues.apache.org/jira/browse/HIVE-18937
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19021) WM counters are not properly propagated from LLAP to AM

2018-03-23 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19021:

Attachment: HIVE-19021.03.patch

> WM counters are not properly propagated from LLAP to AM
> ---
>
> Key: HIVE-19021
> URL: https://issues.apache.org/jira/browse/HIVE-19021
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19021.01.patch, HIVE-19021.02.patch, 
> HIVE-19021.03.patch, HIVE-19021.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19021) WM counters are not properly propagated from LLAP to AM

2018-03-23 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412091#comment-16412091
 ] 

Sergey Shelukhin commented on HIVE-19021:
-

again for QA

> WM counters are not properly propagated from LLAP to AM
> ---
>
> Key: HIVE-19021
> URL: https://issues.apache.org/jira/browse/HIVE-19021
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19021.01.patch, HIVE-19021.02.patch, 
> HIVE-19021.03.patch, HIVE-19021.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18928) HS2: Perflogger has a race condition

2018-03-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412050#comment-16412050
 ] 

Hive QA commented on HIVE-18928:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
13s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9785/dev-support/hive-personality.sh
 |
| git revision | master / 51104e3 |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9785/yetus/patch-asflicense-problems.txt
 |
| modules | C: common U: common |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9785/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> HS2: Perflogger has a race condition
> 
>
> Key: HIVE-18928
> URL: https://issues.apache.org/jira/browse/HIVE-18928
> Project: Hive
>  Issue Type: Bug
>Reporter: Gopal V
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-18928.1.patch
>
>
> {code}
> Caused by: java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextNode(HashMap.java:1437) 
> ~[?:1.8.0_112]
> at java.util.HashMap$EntryIterator.next(HashMap.java:1471) 
> ~[?:1.8.0_112]
> at java.util.HashMap$EntryIterator.next(HashMap.java:1469) 
> ~[?:1.8.0_112]
> at java.util.AbstractCollection.toArray(AbstractCollection.java:196) 
> ~[?:1.8.0_112]
> at com.google.common.collect.Iterables.toArray(Iterables.java:316) 
> ~[guava-19.0.jar:?]
> at 
> com.google.common.collect.ImmutableMap.copyOf(ImmutableMap.java:342) 
> ~[guava-19.0.jar:?]
> at 
> com.google.common.collect.ImmutableMap.copyOf(ImmutableMap.java:327) 
> ~[guava-19.0.jar:?]
> at 
> org.apache.hadoop.hive.ql.log.PerfLogger.getEndTimes(PerfLogger.java:218) 
> ~[hive-common-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1561) 
> ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1498) 
> ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:198)
>  ~[hive-service-3.0.0.3.0.0.2-132.jar:3.0.0.3.0.0.2-132]
> {code}
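
One common fix for this kind of race, sketched under the assumption that the 
timing map can simply be made concurrent (the field and method are illustrative 
of PerfLogger's shape, not the actual patch):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import com.google.common.collect.ImmutableMap;

// Sketch: ConcurrentHashMap's iterators are weakly consistent, so
// ImmutableMap.copyOf() no longer throws ConcurrentModificationException
// when another thread records a timing during the copy.
private final Map<String, Long> endTimes = new ConcurrentHashMap<>();

public ImmutableMap<String, Long> getEndTimes() {
  return ImmutableMap.copyOf(endTimes);
}
{code}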



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18840) CachedStore: Prioritize loading of recently accessed tables during prewarm

2018-03-23 Thread Daniel Dai (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412040#comment-16412040
 ] 

Daniel Dai commented on HIVE-18840:
---

I'd like to create a separate class for tblNamesBeingPrewarmed with synchronized 
methods. Synchronizing on the entire CachedStore does not sound proper, even 
though there are no other synchronized methods currently.
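
For illustration, a minimal sketch of such a holder class (the names are 
illustrative):

{code:java}
import java.util.LinkedHashSet;
import java.util.Set;

// Sketch: confine synchronization to this small object instead of
// synchronizing on the whole CachedStore.
class PrewarmTableNames {
  private final Set<String> tblNames = new LinkedHashSet<>();

  synchronized void recordAccess(String dbAndTableName) {
    // re-insert so the most recently accessed tables sit at the end
    tblNames.remove(dbAndTableName);
    tblNames.add(dbAndTableName);
  }

  synchronized Set<String> snapshot() {
    return new LinkedHashSet<>(tblNames);
  }
}
{code}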

> CachedStore: Prioritize loading of recently accessed tables during prewarm
> --
>
> Key: HIVE-18840
> URL: https://issues.apache.org/jira/browse/HIVE-18840
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Affects Versions: 3.0.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-18840.1.patch
>
>
> On clusters with large metadata, prewarming the cache can take several hours. 
> Now that CachedStore does not block on prewarm anymore (after HIVE-18264), we 
> should prioritize loading of recently accessed tables during prewarm.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18971) add HS2 WM metrics for use in Grafana and such

2018-03-23 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412038#comment-16412038
 ] 

Prasanth Jayachandran commented on HIVE-18971:
--

Could you paste the metrics output with the custom pool tags? I am 
trying to understand what this would look like. I am assuming there will be a 
metrics tag like "wm_pool_name" under which the metrics about that specific pool 
are published; is that correct? Similar to how session-level metrics are published 
(or how LLAP metrics are published with their workerIdentity tagged).
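
For reference, a sketch of plain codahale registration with the pool name encoded 
in the metric name, as in the WM_llap_numExecutors gauge pasted earlier in this 
thread; a genuinely tag-based scheme would need a reporter that supports tags. 
The executor-count getter is a placeholder.

{code:java}
import com.codahale.metrics.Gauge;
import com.codahale.metrics.MetricRegistry;

// Sketch: one gauge per pool, with the pool name baked into the metric name.
MetricRegistry registry = new MetricRegistry();
String pool = "llap";  // placeholder pool name
registry.register(MetricRegistry.name("WM_" + pool, "numExecutors"),
    (Gauge<Integer>) () -> getExecutorCountForPool(pool));  // hypothetical getter
{code}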

> add HS2 WM metrics for use in Grafana and such
> --
>
> Key: HIVE-18971
> URL: https://issues.apache.org/jira/browse/HIVE-18971
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18971.01.patch, HIVE-18971.patch
>
>
> HS2 should have metrics added per pool, tagged accordingly. Not clear if HS2 
> even sets up metrics right now...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18727) Update GenericUDFEnforceNotNullConstraint to throw an ERROR instead of Exception on failure

2018-03-23 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412031#comment-16412031
 ] 

Vineet Garg commented on HIVE-18727:


[~vbeshka] The patch is fine; I am waiting for ptests to run. I had manually started 
a job to run the ptests, but I'm not sure what's going on. Can you try re-uploading 
the patch to see if the ptests job starts?

> Update GenericUDFEnforceNotNullConstraint to throw an ERROR instead of 
> Exception on failure
> ---
>
> Key: HIVE-18727
> URL: https://issues.apache.org/jira/browse/HIVE-18727
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vineet Garg
>Assignee: Kryvenko Igor
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18727.02.patch, HIVE-18727.patch
>
>
> Throwing an error (rather than an exception) makes TezProcessor stop retrying 
> the task. Since this is a NOT NULL constraint violation, we don't want 
> TezProcessor to keep retrying on failure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18910) Migrate to Murmur hash for shuffle and bucketing

2018-03-23 Thread Deepak Jaiswal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-18910:
--
Attachment: HIVE-18910.12.patch

> Migrate to Murmur hash for shuffle and bucketing
> 
>
> Key: HIVE-18910
> URL: https://issues.apache.org/jira/browse/HIVE-18910
> Project: Hive
>  Issue Type: Task
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-18910.1.patch, HIVE-18910.10.patch, 
> HIVE-18910.11.patch, HIVE-18910.12.patch, HIVE-18910.2.patch, 
> HIVE-18910.3.patch, HIVE-18910.4.patch, HIVE-18910.5.patch, 
> HIVE-18910.6.patch, HIVE-18910.7.patch, HIVE-18910.8.patch, HIVE-18910.9.patch
>
>
> Hive uses the Java hash, which does not give as good a distribution or 
> efficiency as murmur when bucketing a table.
> Migrate to murmur hash, but still keep backward compatibility for existing 
> users so that they don't have to reload their existing tables.
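
For illustration, the two bucketing functions side by side; Guava's murmur3 is 
used here only to keep the sketch self-contained (Hive would use its own Murmur3 
utility), and the key type is simplified to String:

{code:java}
import java.nio.charset.StandardCharsets;
import com.google.common.hash.Hashing;

// Old scheme: Java's hashCode(), masked to keep the bucket id non-negative.
static int javaHashBucket(String key, int numBuckets) {
  return (key.hashCode() & Integer.MAX_VALUE) % numBuckets;
}

// New scheme: murmur3, which distributes keys more uniformly.
static int murmurBucket(String key, int numBuckets) {
  int h = Hashing.murmur3_32().hashString(key, StandardCharsets.UTF_8).asInt();
  return (h & Integer.MAX_VALUE) % numBuckets;
}
{code}

Backward compatibility then amounts to recording which hash function a table was 
bucketed with, so that existing tables keep resolving buckets with the old 
function.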



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17843) UINT32 Parquet columns are handled as signed INT32-s, silently reading incorrect data

2018-03-23 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-17843:
---
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

> UINT32 Parquet columns are handled as signed INT32-s, silently reading 
> incorrect data
> -
>
> Key: HIVE-17843
> URL: https://issues.apache.org/jira/browse/HIVE-17843
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Ivanfi
>Assignee: Janaki Lahorani
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-17843.1.patch, HIVE-17843.1.patch, 
> HIVE-17843.2.patch, HIVE-17843.3.patch, HIVE-17843.4.patch, 
> data_including_invalid_values.parquet, data_with_valid_values.parquet, 
> test_uint.parquet
>
>
> An unsigned 32 bit Parquet column, such as
> {noformat}
> optional int32 uint_32_col (UINT_32)
> {noformat}
> is read by Hive as if it were signed, leading to incorrect results.
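
For illustration, the widening a reader has to perform for such a column (a 
sketch, not the patch itself):

{code:java}
// A Parquet UINT_32 value arrives in Java as a signed int; surfacing it
// correctly (e.g. as a Hive BIGINT) requires masking off the sign extension.
int raw = -1;                             // on-disk bit pattern 0xFFFFFFFF
long unsigned = raw & 0xFFFFFFFFL;        // 4294967295, the intended value
long same = Integer.toUnsignedLong(raw);  // equivalent since Java 8
{code}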



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17843) UINT32 Parquet columns are handled as signed INT32-s, silently reading incorrect data

2018-03-23 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412005#comment-16412005
 ] 

Vihang Karajgaonkar commented on HIVE-17843:


Patch merged to the master branch. Thanks for your contribution, [~janulatha]

> UINT32 Parquet columns are handled as signed INT32-s, silently reading 
> incorrect data
> -
>
> Key: HIVE-17843
> URL: https://issues.apache.org/jira/browse/HIVE-17843
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Ivanfi
>Assignee: Janaki Lahorani
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-17843.1.patch, HIVE-17843.1.patch, 
> HIVE-17843.2.patch, HIVE-17843.3.patch, HIVE-17843.4.patch, 
> data_including_invalid_values.parquet, data_with_valid_values.parquet, 
> test_uint.parquet
>
>
> An unsigned 32 bit Parquet column, such as
> {noformat}
> optional int32 uint_32_col (UINT_32)
> {noformat}
> is read by Hive as if it were signed, leading to incorrect results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18971) add HS2 WM metrics for use in Grafana and such

2018-03-23 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411988#comment-16411988
 ] 

Sergey Shelukhin commented on HIVE-18971:
-

Added codahale, tested on a cluster - works fine.

> add HS2 WM metrics for use in Grafana and such
> --
>
> Key: HIVE-18971
> URL: https://issues.apache.org/jira/browse/HIVE-18971
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18971.01.patch, HIVE-18971.patch
>
>
> HS2 should have metrics added per pool, tagged accordingly. Not clear if HS2 
> even sets up metrics right now...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18533) Add option to use InProcessLauncher to submit spark jobs

2018-03-23 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411987#comment-16411987
 ] 

Sahil Takiar commented on HIVE-18533:
-

Thanks, filed SPARK-23785 to fix the issue.

> Add option to use InProcessLauncher to submit spark jobs
> 
>
> Key: HIVE-18533
> URL: https://issues.apache.org/jira/browse/HIVE-18533
> Project: Hive
>  Issue Type: Improvement
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18533.1.patch, HIVE-18533.2.patch, 
> HIVE-18533.3.patch, HIVE-18533.4.patch
>
>
> See discussion in HIVE-16484 for details.
> I think this will help with reducing the amount of time it takes to open a 
> HoS session + debuggability (no need to launch a separate process to run a Spark 
> app).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18971) add HS2 WM metrics for use in Grafana and such

2018-03-23 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18971:

Attachment: HIVE-18971.01.patch

> add HS2 WM metrics for use in Grafana and such
> --
>
> Key: HIVE-18971
> URL: https://issues.apache.org/jira/browse/HIVE-18971
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18971.01.patch, HIVE-18971.patch
>
>
> HS2 should have metrics added per pool, tagged accordingly. Not clear if HS2 
> even sets up metrics right now...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-18525) Add explain plan to Hive on Spark Web UI

2018-03-23 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411965#comment-16411965
 ] 

Sahil Takiar edited comment on HIVE-18525 at 3/23/18 7:58 PM:
--

Thanks for the feedback [~xuefuz]!
{quote} As a related question, do we show the plan at the job level? That is, 
show the whole query plan for a spark job. That could be useful too. {quote} 
That would be useful too. I haven't found a way to do that in the Spark Web UI 
yet. This might be possible if we implement HIVE-18515, but that would require 
quite a bit of work.

Attached an updated patch + RB link; new patch includes a unit test, PerLogger 
integration, and some other cleanup.


was (Author: stakiar):
Thanks for the feedback [~xuefuz]!

 {quote} s a related question, do we show the plan at the job level? That is, 
show the whole query plan for a spark job. That could be useful too. {quote} 
That would be useful too. I haven't found a way to do that in the Spark Web UI 
yet. This might be possible if we implement HIVE-18515, but that would require 
quite a bit of work.

Attached an updated patch + RB link; new patch includes a unit test, PerLogger 
integration, and some other cleanup.

> Add explain plan to Hive on Spark Web UI
> 
>
> Key: HIVE-18525
> URL: https://issues.apache.org/jira/browse/HIVE-18525
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18525.1.patch, HIVE-18525.2.patch, 
> HIVE-18525.3.patch, Job-Page-Collapsed.png, Job-Page-Expanded.png, 
> Map-Explain-Plan.png, Reduce-Explain-Plan.png
>
>
> More of an investigation JIRA. The Spark UI has a "long description" of each 
> stage in the Spark DAG. Typically one stage in the Spark DAG corresponds to 
> either a {{MapWork}} or {{ReduceWork}} object. It would be useful if the long 
> description contained the explain plan of the corresponding work object.
> I'm not sure how much additional overhead this would introduce. If not the 
> full explain plan, then maybe a modified one that just lists out the 
> operator tree along with each operator name.
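
One possible mechanism, sketched here as an assumption rather than as the 
patch's actual approach: Spark renders a stage's expandable details from the 
long-form call site, which can be supplied through the Spark-internal 
{{callSite.long}} local property before the job for a work object is submitted.

{code:java}
import org.apache.spark.api.java.JavaSparkContext;

// Sketch: attach the rendered explain plan of a MapWork/ReduceWork as the
// long-form call site so it shows up in the stage's detail view.
static void submitWithPlan(JavaSparkContext jsc, String explainPlan) {
  jsc.setLocalProperty("callSite.long", explainPlan);
  try {
    // ... trigger the RDD actions for this work object ...
  } finally {
    jsc.setLocalProperty("callSite.long", null);  // null clears the property
  }
}
{code}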



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18525) Add explain plan to Hive on Spark Web UI

2018-03-23 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411965#comment-16411965
 ] 

Sahil Takiar commented on HIVE-18525:
-

Thanks for the feedback [~xuefuz]!

 {quote} As a related question, do we show the plan at the job level? That is, 
show the whole query plan for a spark job. That could be useful too. {quote} 
That would be useful too. I haven't found a way to do that in the Spark Web UI 
yet. This might be possible if we implement HIVE-18515, but that would require 
quite a bit of work.

Attached an updated patch + RB link; new patch includes a unit test, PerLogger 
integration, and some other cleanup.

> Add explain plan to Hive on Spark Web UI
> 
>
> Key: HIVE-18525
> URL: https://issues.apache.org/jira/browse/HIVE-18525
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18525.1.patch, HIVE-18525.2.patch, 
> HIVE-18525.3.patch, Job-Page-Collapsed.png, Job-Page-Expanded.png, 
> Map-Explain-Plan.png, Reduce-Explain-Plan.png
>
>
> More of an investigation JIRA. The Spark UI has a "long description" of each 
> stage in the Spark DAG. Typically one stage in the Spark DAG corresponds to 
> either a {{MapWork}} or {{ReduceWork}} object. It would be useful if the long 
> description contained the explain plan of the corresponding work object.
> I'm not sure how much additional overhead this would introduce. If not the 
> full explain plan, then maybe a modified one that just lists out the 
> operator tree along with each operator name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18525) Add explain plan to Hive on Spark Web UI

2018-03-23 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-18525:

Attachment: HIVE-18525.3.patch

> Add explain plan to Hive on Spark Web UI
> 
>
> Key: HIVE-18525
> URL: https://issues.apache.org/jira/browse/HIVE-18525
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18525.1.patch, HIVE-18525.2.patch, 
> HIVE-18525.3.patch, Job-Page-Collapsed.png, Job-Page-Expanded.png, 
> Map-Explain-Plan.png, Reduce-Explain-Plan.png
>
>
> More of an investigation JIRA. The Spark UI has a "long description" of each 
> stage in the Spark DAG. Typically one stage in the Spark DAG corresponds to 
> either a {{MapWork}} or {{ReduceWork}} object. It would be useful if the long 
> description contained the explain plan of the corresponding work object.
> I'm not sure how much additional overhead this would introduce. If not the 
> full explain plan, then maybe a modified one that just lists out the 
> operator tree along with each operator name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18909) Metrics for results cache

2018-03-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411871#comment-16411871
 ] 

Hive QA commented on HIVE-18909:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12915799/HIVE-18909.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 30 failed/errored test(s), 13416 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=92)

[infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q]
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=95)


[jira] [Updated] (HIVE-17843) UINT32 Parquet columns are handled as signed INT32-s, silently reading incorrect data

2018-03-23 Thread Janaki Lahorani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janaki Lahorani updated HIVE-17843:
---
Attachment: data_including_invalid_values.parquet

> UINT32 Parquet columns are handled as signed INT32-s, silently reading 
> incorrect data
> -
>
> Key: HIVE-17843
> URL: https://issues.apache.org/jira/browse/HIVE-17843
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Ivanfi
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-17843.1.patch, HIVE-17843.1.patch, 
> HIVE-17843.2.patch, HIVE-17843.3.patch, HIVE-17843.4.patch, 
> data_including_invalid_values.parquet, data_with_valid_values.parquet, 
> test_uint.parquet
>
>
> An unsigned 32 bit Parquet column, such as
> {noformat}
> optional int32 uint_32_col (UINT_32)
> {noformat}
> is read by Hive as if it were signed, leading to incorrect results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17843) UINT32 Parquet columns are handled as signed INT32-s, silently reading incorrect data

2018-03-23 Thread Janaki Lahorani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janaki Lahorani updated HIVE-17843:
---
Attachment: data_with_valid_values.parquet

> UINT32 Parquet columns are handled as signed INT32-s, silently reading 
> incorrect data
> -
>
> Key: HIVE-17843
> URL: https://issues.apache.org/jira/browse/HIVE-17843
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Ivanfi
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-17843.1.patch, HIVE-17843.1.patch, 
> HIVE-17843.2.patch, HIVE-17843.3.patch, HIVE-17843.4.patch, 
> data_including_invalid_values.parquet, data_with_valid_values.parquet, 
> test_uint.parquet
>
>
> An unsigned 32 bit Parquet column, such as
> {noformat}
> optional int32 uint_32_col (UINT_32)
> {noformat}
> is read by Hive as if it were signed, leading to incorrect results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17843) UINT32 Parquet columns are handled as signed INT32-s, silently reading incorrect data

2018-03-23 Thread Janaki Lahorani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janaki Lahorani updated HIVE-17843:
---
Attachment: test_uint.parquet

> UINT32 Parquet columns are handled as signed INT32-s, silently reading 
> incorrect data
> -
>
> Key: HIVE-17843
> URL: https://issues.apache.org/jira/browse/HIVE-17843
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Ivanfi
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-17843.1.patch, HIVE-17843.1.patch, 
> HIVE-17843.2.patch, HIVE-17843.3.patch, HIVE-17843.4.patch, 
> data_including_invalid_values.parquet, data_with_valid_values.parquet, 
> test_uint.parquet
>
>
> An unsigned 32 bit Parquet column, such as
> {noformat}
> optional int32 uint_32_col (UINT_32)
> {noformat}
> is read by Hive as if it were signed, leading to incorrect results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19038) LLAP: Service loader throws "Provider not found" exception if hive-llap-server is in class path while loading tokens

2018-03-23 Thread Arun Mahadevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411858#comment-16411858
 ] 

Arun Mahadevan commented on HIVE-19038:
---

[~gopalv] raised - https://github.com/apache/hive/pull/327

> LLAP: Service loader throws "Provider not found" exception if 
> hive-llap-server is in class path while loading tokens
> 
>
> Key: HIVE-19038
> URL: https://issues.apache.org/jira/browse/HIVE-19038
> Project: Hive
>  Issue Type: Bug
>Reporter: Arun Mahadevan
>Assignee: Arun Mahadevan
>Priority: Major
>  Labels: pull-request-available
>
> While testing Storm in secure mode, the hive-llap-server jar file was 
> included in the class path and resulted in the below exception while trying 
> to renew credentials when invoking 
> "org.apache.hadoop.security.token.Token.getRenewer"
>  
>  
> {noformat}
> java.util.ServiceConfigurationError: 
> org.apache.hadoop.security.token.TokenRenewer: Provider 
> org.apache.hadoop.hive.llap.security.LlapTokenIdentifier.Renewer not found at 
> java.util.ServiceLoader.fail(ServiceLoader.java:239) ~[?:1.8.0_161] at 
> java.util.ServiceLoader.access$300(ServiceLoader.java:185) ~[?:1.8.0_161] at 
> java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:372) 
> ~[?:1.8.0_161] at 
> java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404) 
> ~[?:1.8.0_161] at 
> java.util.ServiceLoader$1.next(ServiceLoader.java:480) ~[?:1.8.0_161] at 
> org.apache.hadoop.security.token.Token.getRenewer(Token.java:463) 
> ~[hadoop-common-3.0.0.3.0.0.0-1064.jar:?] at 
> org.apache.hadoop.security.token.Token.renew(Token.java:490) 
> ~[hadoop-common-3.0.0.3.0.0.0-1064.jar:?] at 
> org.apache.storm.hdfs.security.AutoHDFS.doRenew(AutoHDFS.java:159) 
> ~[storm-autocreds-1.2.1.3.0.0.0-1064.jar:1.2.1.3.0.0.0-1064] at 
> org.apache.storm.common.AbstractAutoCreds.renew(AbstractAutoCreds.java:104) 
> ~[storm-autocreds-1.2.1.3.0.0.0-1064.jar:1.2.1.3.0.0.0-1064] at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_161] at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_161] at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_161] at java.lang.reflect.Method.invoke(Method.java:498) 
> ~[?:1.8.0_161] at 
> clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:93) 
> ~[clojure-1.7.0.jar:?] at 
> clojure.lang.Reflector.invokeInstanceMethod(Reflector.java:28) 
> ~[clojure-1.7.0.jar:?] at 
> org.apache.storm.daemon.nimbus$renew_credentials$fn__9121$fn__9126.invoke(nimbus.clj:1450)
>  ~[storm-core-1.2.1.3.0.0.0-1064.jar:1.2.1.3.0.0.0-1064] at 
> org.apache.storm.daemon.nimbus$renew_credentials$fn__9121.invoke(nimbus.clj:1449)
>  ~[storm-core-1.2.1.3.0.0.0-1064.jar:1.2.1.3.0.0.0-1064] at 
> org.apache.storm.daemon.nimbus$renew_credentials.invoke(nimbus.clj:1439) 
> ~[storm-core-1.2.1.3.0.0.0-1064.jar:1.2.1.3.0.0.0-1064] at 
> org.apache.storm.daemon.nimbus$fn__9547$exec_fn__3301__auto9548$fn__9567.invoke(nimbus.clj:2521)
>  ~[storm-core-1.2.1.3.0.0.0-1064.jar:1.2.1.3.0.0.0-1064] at 
> org.apache.storm.timer$schedule_recurring$this__1656.invoke(timer.clj:105) 
> ~[storm-core-1.2.1.3.0.0.0-1064.jar:1.2.1.3.0.0.0-1064] at 
> org.apache.storm.timer$mk_timer$fn__1639$fn__1640.invoke(timer.clj:50) 
> ~[storm-core-1.2.1.3.0.0.0-1064.jar:1.2.1.3.0.0.0-1064] at 
> org.apache.storm.timer$mk_timer$fn__1639.invoke(timer.clj:42) 
> ~[storm-core-1.2.1.3.0.0.0-1064.jar:1.2.1.3.0.0.0-1064] at 
> clojure.lang.AFn.run(AFn.java:22) ~[clojure-1.7.0.jar:?] at 
> java.lang.Thread.run(Thread.java:748) [?:1.8.0_161] 2018-03-22 22:08:59.088 
> o.a.s.util timer [ERROR] Halting process: ("Error when processing an event") 
> java.lang.RuntimeException: ("Error when processing an event") at 
> org.apache.storm.util$exit_process_BANG_.doInvoke(util.clj:341) 
> ~[storm-core-1.2.1.3.0.0.0-1064.jar:1.2.1.3.0.0.0-1064] at 
> clojure.lang.RestFn.invoke(RestFn.java:423) ~[clojure-1.7.0.jar:?] at 
> org.apache.storm.daemon.nimbus$nimbus_data$fn__8334.invoke(nimbus.clj:221) 
> ~[storm-core-1.2.1.3.0.0.0-1064.jar:1.2.1.3.0.0.0-1064] at 
> org.apache.storm.timer$mk_timer$fn__1639$fn__1640.invoke(timer.clj:71) 
> ~[storm-core-1.2.1.3.0.0.0-1064.jar:1.2.1.3.0.0.0-1064] at 
> org.apache.storm.timer$mk_timer$fn__1639.invoke(timer.clj:42) 
> ~[storm-core-1.2.1.3.0.0.0-1064.jar:1.2.1.3.0.0.0-1064] at 
> clojure.lang.AFn.run(AFn.java:22) ~[clojure-1.7.0.jar:?] at 
> java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]{noformat}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19038) LLAP: Service loader throws "Provider not found" exception if hive-llap-server is in class path while loading tokens

2018-03-23 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411859#comment-16411859
 ] 

Gopal V commented on HIVE-19038:


LGTM - +1 tests pending

> LLAP: Service loader throws "Provider not found" exception if 
> hive-llap-server is in class path while loading tokens
> 
>
> Key: HIVE-19038
> URL: https://issues.apache.org/jira/browse/HIVE-19038
> Project: Hive
>  Issue Type: Bug
>Reporter: Arun Mahadevan
>Assignee: Arun Mahadevan
>Priority: Major
>  Labels: pull-request-available
>
> While testing storm in secure mode, the hive-llap-server jar file was 
> included in the class path and resulted in the below exception while trying 
> to renew credentials when invoking 
> "org.apache.hadoop.security.token.Token.getRenewer"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19038) LLAP: Service loader throws "Provider not found" exception if hive-llap-server is in class path while loading tokens

2018-03-23 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411857#comment-16411857
 ] 

Gopal V commented on HIVE-19038:


[~arunmahadevan]: do you have a patch? This looks like a ".Renewer" -> 
"$Renewer" fix; if you can confirm that, I can review.

> LLAP: Service loader throws "Provider not found" exception if 
> hive-llap-server is in class path while loading tokens
> 
>
> Key: HIVE-19038
> URL: https://issues.apache.org/jira/browse/HIVE-19038
> Project: Hive
>  Issue Type: Bug
>Reporter: Arun Mahadevan
>Assignee: Arun Mahadevan
>Priority: Major
>  Labels: pull-request-available
>
> While testing storm in secure mode, the hive-llap-server jar file was 
> included in the class path and resulted in the below exception while trying 
> to renew credentials when invoking 
> "org.apache.hadoop.security.token.Token.getRenewer"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19038) LLAP: Service loader throws "Provider not found" exception if hive-llap-server is in class path while loading tokens

2018-03-23 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-19038:
--
Labels: pull-request-available  (was: )

> LLAP: Service loader throws "Provider not found" exception if 
> hive-llap-server is in class path while loading tokens
> 
>
> Key: HIVE-19038
> URL: https://issues.apache.org/jira/browse/HIVE-19038
> Project: Hive
>  Issue Type: Bug
>Reporter: Arun Mahadevan
>Assignee: Arun Mahadevan
>Priority: Major
>  Labels: pull-request-available
>
> While testing storm in secure mode, the hive-llap-server jar file was 
> included in the class path and resulted in the below exception while trying 
> to renew credentials when invoking 
> "org.apache.hadoop.security.token.Token.getRenewer"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19038) LLAP: Service loader throws "Provider not found" exception if hive-llap-server is in class path while loading tokens

2018-03-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411856#comment-16411856
 ] 

ASF GitHub Bot commented on HIVE-19038:
---

GitHub user arunmahadevan opened a pull request:

https://github.com/apache/hive/pull/327

HIVE-19038: Fixed inner class format for TokenRenewer in META-INF/services



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/arunmahadevan/hive HIVE-19038

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hive/pull/327.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #327


commit 306841bc21f65cdaf7845615095f37eb861ccde9
Author: Arun Mahadevan 
Date:   2018-03-23T18:28:34Z

HIVE-19038: Fixed inner class format for TokenRenewer in META-INF/services




> LLAP: Service loader throws "Provider not found" exception if 
> hive-llap-server is in class path while loading tokens
> 
>
> Key: HIVE-19038
> URL: https://issues.apache.org/jira/browse/HIVE-19038
> Project: Hive
>  Issue Type: Bug
>Reporter: Arun Mahadevan
>Assignee: Arun Mahadevan
>Priority: Major
>  Labels: pull-request-available
>
> While testing storm in secure mode, the hive-llap-server jar file was 
> included in the class path and resulted in the below exception while trying 
> to renew credentials when invoking 
> "org.apache.hadoop.security.token.Token.getRenewer"

[jira] [Commented] (HIVE-17573) LLAP: JDK9 support fixes

2018-03-23 Thread PRAFUL DASH (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411847#comment-16411847
 ] 

PRAFUL DASH commented on HIVE-17573:


It's very important, guys. Please get this done; as you know, JDK 10 will be 
available soon, so this needs to be fixed and made compatible ASAP.

 

Thanks,

PRAFUL

> LLAP: JDK9 support fixes
> 
>
> Key: HIVE-17573
> URL: https://issues.apache.org/jira/browse/HIVE-17573
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 3.0.0
>Reporter: Gopal V
>Assignee: Gopal V
>Priority: Major
>
> The perf diff between JDK8 and JDK9 seems to be significant.  
> TPC-H Q6 on JDK8 takes 32s on a single node + 1 TB scale warehouse. 
> TPC-H Q6 on JDK9 takes 19s on the same host + same data.
> The performance difference seems to come from better JIT and better NUMA 
> handling.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19038) LLAP: Service loader throws "Provider not found" exception if hive-llap-server is in class path while loading tokens

2018-03-23 Thread Arun Mahadevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411845#comment-16411845
 ] 

Arun Mahadevan commented on HIVE-19038:
---

The inner class should be specified in the correct format (its binary name, 
using a $ separator) in

META-INF/services/org.apache.hadoop.security.token.TokenRenewer
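For reference, ServiceLoader provider-configuration files must list nested
classes by binary name, with a $ between outer and nested class. A sketch of
the broken and fixed entries (the actual patch contents may differ):

{noformat}
# META-INF/services/org.apache.hadoop.security.token.TokenRenewer
# broken: dotted nested-class name, which the class loader cannot resolve
org.apache.hadoop.hive.llap.security.LlapTokenIdentifier.Renewer
# fixed: binary name of the nested class
org.apache.hadoop.hive.llap.security.LlapTokenIdentifier$Renewer
{noformat}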

> LLAP: Service loader throws "Provider not found" exception if 
> hive-llap-server is in class path while loading tokens
> 
>
> Key: HIVE-19038
> URL: https://issues.apache.org/jira/browse/HIVE-19038
> Project: Hive
>  Issue Type: Bug
>Reporter: Arun Mahadevan
>Assignee: Arun Mahadevan
>Priority: Major
>
> While testing storm in secure mode, the hive-llap-server jar file was 
> included in the class path and resulted in the below exception while trying 
> to renew credentials when invoking 
> "org.apache.hadoop.security.token.Token.getRenewer"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19038) LLAP: Service loader throws "Provider not found" exception if hive-llap-server is in class path while loading tokens

2018-03-23 Thread Arun Mahadevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Mahadevan updated HIVE-19038:
--
Description: 
While testing storm in secure mode, the hive-llap-server jar file was included 
in the class path and resulted in the below exception while trying to renew 
credentials when invoking "org.apache.hadoop.security.token.Token.getRenewer"

 

 

  was:
While testing storm in secure mode, the hive-llap-server jar file was included 
in the class path and resulted in the below exception while trying to renew 
credentials when invoking "org.apache.hadoop.security.token.Token.getRenewer"

 

 

[jira] [Assigned] (HIVE-19038) LLAP: Service loader throws "Provider not found" exception if hive-llap-server is in class path while loading tokens

2018-03-23 Thread Arun Mahadevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Mahadevan reassigned HIVE-19038:
-


> LLAP: Service loader throws "Provider not found" exception if 
> hive-llap-server is in class path while loading tokens
> 
>
> Key: HIVE-19038
> URL: https://issues.apache.org/jira/browse/HIVE-19038
> Project: Hive
>  Issue Type: Bug
>Reporter: Arun Mahadevan
>Assignee: Arun Mahadevan
>Priority: Major
>
> While testing storm in secure mode, the hive-llap-server jar file was 
> included in the class path and resulted in the below exception while trying 
> to renew credentials when invoking 
> "org.apache.hadoop.security.token.Token.getRenewer"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18953) Implement CHECK constraint

2018-03-23 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-18953:
---
Attachment: HIVE-18953.7.patch

> Implement CHECK constraint
> --
>
> Key: HIVE-18953
> URL: https://issues.apache.org/jira/browse/HIVE-18953
> Project: Hive
>  Issue Type: New Feature
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-18953.1.patch, HIVE-18953.2.patch, 
> HIVE-18953.3.patch, HIVE-18953.4.patch, HIVE-18953.5.patch, 
> HIVE-18953.6.patch, HIVE-18953.7.patch
>
>
> Implement column level CHECK constraint



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18953) Implement CHECK constraint

2018-03-23 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-18953:
---
Status: Patch Available  (was: Open)

Uploading rebased patch to kick ptests again

> Implement CHECK constraint
> --
>
> Key: HIVE-18953
> URL: https://issues.apache.org/jira/browse/HIVE-18953
> Project: Hive
>  Issue Type: New Feature
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-18953.1.patch, HIVE-18953.2.patch, 
> HIVE-18953.3.patch, HIVE-18953.4.patch, HIVE-18953.5.patch, 
> HIVE-18953.6.patch, HIVE-18953.7.patch
>
>
> Implement column level CHECK constraint



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18830) RemoteSparkJobMonitor failures are logged twice

2018-03-23 Thread Bharathkrishna Guruvayoor Murali (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharathkrishna Guruvayoor Murali updated HIVE-18830:

Attachment: HIVE-18830.1.patch

> RemoteSparkJobMonitor failures are logged twice
> ---
>
> Key: HIVE-18830
> URL: https://issues.apache.org/jira/browse/HIVE-18830
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-18830.1.patch
>
>
> If there is an exception in {{RemoteSparkJobMonitor}} while monitoring the 
> remote Spark job, the error is logged twice:
> {code}
> LOG.error(msg, e);
> console.printError(msg, "\n" + 
> org.apache.hadoop.util.StringUtils.stringifyException(e));
> {code}
> {{console#printError}} writes the stringified exception to the logs as well.
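A minimal sketch of one possible fix, assuming {{console#printError}} already
forwards both the message and the stringified exception to the log (the
surrounding method is omitted):

{code}
// Report the failure once: console.printError also writes to the log,
// so the separate LOG.error(msg, e) call can be dropped.
console.printError(msg,
    "\n" + org.apache.hadoop.util.StringUtils.stringifyException(e));
{code}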



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18953) Implement CHECK constraint

2018-03-23 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-18953:
---
Status: Open  (was: Patch Available)

> Implement CHECK constraint
> --
>
> Key: HIVE-18953
> URL: https://issues.apache.org/jira/browse/HIVE-18953
> Project: Hive
>  Issue Type: New Feature
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-18953.1.patch, HIVE-18953.2.patch, 
> HIVE-18953.3.patch, HIVE-18953.4.patch, HIVE-18953.5.patch, HIVE-18953.6.patch
>
>
> Implement column level CHECK constraint



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18830) RemoteSparkJobMonitor failures are logged twice

2018-03-23 Thread Bharathkrishna Guruvayoor Murali (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharathkrishna Guruvayoor Murali updated HIVE-18830:

Attachment: (was: HIVE-18830.1.patch)

> RemoteSparkJobMonitor failures are logged twice
> ---
>
> Key: HIVE-18830
> URL: https://issues.apache.org/jira/browse/HIVE-18830
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
>
> If there is an exception in {{RemoteSparkJobMonitor}} while monitoring the 
> remote Spark job, the error is logged twice:
> {code}
> LOG.error(msg, e);
> console.printError(msg, "\n" + 
> org.apache.hadoop.util.StringUtils.stringifyException(e));
> {code}
> {{console#printError}} writes the stringified exception to the logs as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18825) Define ValidTxnList before starting query optimization

2018-03-23 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411796#comment-16411796
 ] 

Jesus Camacho Rodriguez commented on HIVE-18825:


Reuploading patch.

> Define ValidTxnList before starting query optimization
> --
>
> Key: HIVE-18825
> URL: https://issues.apache.org/jira/browse/HIVE-18825
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-18825.01.patch, HIVE-18825.02.patch, 
> HIVE-18825.03.patch, HIVE-18825.04.patch, HIVE-18825.05.patch, 
> HIVE-18825.06.patch, HIVE-18825.07.patch, HIVE-18825.08.patch, 
> HIVE-18825.patch
>
>
> Consider a set of tables used by a materialized view where inserts happened 
> after the materialization was created. To compute incremental view 
> maintenance, we need to be able to filter only new rows from those base 
> tables. That can be done by inserting a filter operator with a condition such as 
> {{ROW\_\_ID.transactionId < highwatermark and ROW\_\_ID.transactionId NOT 
> IN()}} on top of the MV's query definition and triggering the 
> rewriting (which should in turn produce a partial rewriting). However, to do 
> that, we need to have a value for {{ValidTxnList}} during query compilation 
> so we know the snapshot that we are querying.
> This patch aims to generate {{ValidTxnList}} before query optimization. There 
> should not be any visible changes for the end user.
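For context, a sketch of how such a predicate could be assembled from a
snapshot. The accessors follow the {{ValidTxnList}} interface, but the string
building here is illustrative only, not the patch's implementation:

{code}
// Illustrative only: derive the incremental-maintenance filter described
// above from a transaction snapshot.
static String buildFilter(org.apache.hadoop.hive.common.ValidTxnList txns) {
  StringBuilder cond = new StringBuilder(
      "ROW__ID.transactionId < " + txns.getHighWatermark());
  long[] open = txns.getInvalidTransactions();  // open/aborted txns below HWM
  if (open != null && open.length > 0) {
    cond.append(" AND ROW__ID.transactionId NOT IN (");
    for (int i = 0; i < open.length; i++) {
      cond.append(i == 0 ? "" : ", ").append(open[i]);
    }
    cond.append(")");
  }
  return cond.toString();
}
{code}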



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18825) Define ValidTxnList before starting query optimization

2018-03-23 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-18825:
---
Attachment: HIVE-18825.08.patch

> Define ValidTxnList before starting query optimization
> --
>
> Key: HIVE-18825
> URL: https://issues.apache.org/jira/browse/HIVE-18825
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-18825.01.patch, HIVE-18825.02.patch, 
> HIVE-18825.03.patch, HIVE-18825.04.patch, HIVE-18825.05.patch, 
> HIVE-18825.06.patch, HIVE-18825.07.patch, HIVE-18825.08.patch, 
> HIVE-18825.patch
>
>
> Consider a set of tables used by a materialized view where inserts happened 
> after the materialization was created. To compute incremental view 
> maintenance, we need to be able to filter only new rows from those base 
> tables. That can be done by inserting a filter operator with a condition such as 
> {{ROW\_\_ID.transactionId < highwatermark and ROW\_\_ID.transactionId NOT 
> IN()}} on top of the MV's query definition and triggering the 
> rewriting (which should in turn produce a partial rewriting). However, to do 
> that, we need to have a value for {{ValidTxnList}} during query compilation 
> so we know the snapshot that we are querying.
> This patch aims to generate {{ValidTxnList}} before query optimization. There 
> should not be any visible changes for the end user.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18909) Metrics for results cache

2018-03-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411774#comment-16411774
 ] 

Hive QA commented on HIVE-18909:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
43s{color} | {color:red} ql: The patch generated 3 new + 445 unchanged - 0 
fixed = 448 total (was 445) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9784/dev-support/hive-personality.sh
 |
| git revision | master / 325c37a |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9784/yetus/diff-checkstyle-ql.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9784/yetus/whitespace-tabs.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9784/yetus/patch-asflicense-problems.txt
 |
| modules | C: common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9784/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Metrics for results cache
> -
>
> Key: HIVE-18909
> URL: https://issues.apache.org/jira/browse/HIVE-18909
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
>  Labels: Metrics
> Attachments: HIVE-18909.1.patch, HIVE-18909.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19007) Support REPL LOAD from primary using replica connection configurations received through WITH clause.

2018-03-23 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411750#comment-16411750
 ] 

Sankar Hariappan commented on HIVE-19007:
-

Test failures are due to a ptest build issue. Anyway, attaching the same patch 
to trigger ptest again to confirm.

> Support REPL LOAD from primary using replica connection configurations 
> received through WITH clause.
> 
>
> Key: HIVE-19007
> URL: https://issues.apache.org/jira/browse/HIVE-19007
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0
>
> Attachments: HIVE-19007.01.patch
>
>
> Need to support running the REPL LOAD command from the primary for different 
> use cases such as cloud replication (for efficient use of cloud resources) or 
> workload management.
> To achieve this, the WITH clause of REPL LOAD lets the user pass Hive configs 
> such as hive.metastore.warehouse.dir, hive.metastore.uris, 
> hive.repl.replica.functions.root.dir etc., which can be used to establish a 
> connection with the replica warehouse.
> The configs received from the WITH clause of REPL LOAD are not set properly on 
> the tasks created (due to changes from HIVE-18716). It is also required to 
> re-fetch the Hive db object if the configs are changed.
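For example, a REPL LOAD run issued from the primary could look like the
following; the database name, staging directory, host, and paths are all
placeholders:

{noformat}
REPL LOAD replica_db FROM '/staging/dump/dir'
WITH ('hive.metastore.uris'='thrift://replica-host:9083',
      'hive.metastore.warehouse.dir'='/replica/warehouse',
      'hive.repl.replica.functions.root.dir'='/replica/functions');
{noformat}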



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19007) Support REPL LOAD from primary using replica connection configurations received through WITH clause.

2018-03-23 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19007:

Status: Patch Available  (was: Open)

> Support REPL LOAD from primary using replica connection configurations 
> received through WITH clause.
> 
>
> Key: HIVE-19007
> URL: https://issues.apache.org/jira/browse/HIVE-19007
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0
>
> Attachments: HIVE-19007.01.patch
>
>
> Need to support running the REPL LOAD command from the primary for different 
> use cases such as cloud replication (for efficient use of cloud resources) or 
> workload management.
> To achieve this, the WITH clause of REPL LOAD lets the user pass Hive configs 
> such as hive.metastore.warehouse.dir, hive.metastore.uris, 
> hive.repl.replica.functions.root.dir etc., which can be used to establish a 
> connection with the replica warehouse.
> The configs received from the WITH clause of REPL LOAD are not set properly on 
> the tasks created (due to changes from HIVE-18716). It is also required to 
> re-fetch the Hive db object if the configs are changed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19007) Support REPL LOAD from primary using replica connection configurations received through WITH clause.

2018-03-23 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19007:

Attachment: HIVE-19007.01.patch

> Support REPL LOAD from primary using replica connection configurations 
> received through WITH clause.
> 
>
> Key: HIVE-19007
> URL: https://issues.apache.org/jira/browse/HIVE-19007
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0
>
> Attachments: HIVE-19007.01.patch
>
>
> Need to support running the REPL LOAD command from the primary for different 
> use cases such as cloud replication (for efficient use of cloud resources) or 
> workload management.
> To achieve this, the WITH clause of REPL LOAD lets the user pass Hive configs 
> such as hive.metastore.warehouse.dir, hive.metastore.uris, 
> hive.repl.replica.functions.root.dir etc., which can be used to establish a 
> connection with the replica warehouse.
> The configs received from the WITH clause of REPL LOAD are not set properly on 
> the tasks created (due to changes from HIVE-18716). It is also required to 
> re-fetch the Hive db object if the configs are changed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19007) Support REPL LOAD from primary using replica connection configurations received through WITH clause.

2018-03-23 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19007:

Attachment: (was: HIVE-19007.01.patch)

> Support REPL LOAD from primary using replica connection configurations 
> received through WITH clause.
> 
>
> Key: HIVE-19007
> URL: https://issues.apache.org/jira/browse/HIVE-19007
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0
>
>
> Need to support running the REPL LOAD command from the primary for different 
> use cases such as cloud replication (for efficient use of cloud resources) or 
> workload management.
> To achieve this, the WITH clause of REPL LOAD lets the user pass Hive configs 
> such as hive.metastore.warehouse.dir, hive.metastore.uris, 
> hive.repl.replica.functions.root.dir etc., which can be used to establish a 
> connection with the replica warehouse.
> The configs received from the WITH clause of REPL LOAD are not set properly on 
> the tasks created (due to changes from HIVE-18716). It is also required to 
> re-fetch the Hive db object if the configs are changed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19007) Support REPL LOAD from primary using replica connection configurations received through WITH clause.

2018-03-23 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19007:

Status: Open  (was: Patch Available)

> Support REPL LOAD from primary using replica connection configurations 
> received through WITH clause.
> 
>
> Key: HIVE-19007
> URL: https://issues.apache.org/jira/browse/HIVE-19007
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0
>
>
> Need to support running the REPL LOAD command from the primary for different 
> use cases such as cloud replication (for efficient use of cloud resources) or 
> workload management.
> To achieve this, the WITH clause of REPL LOAD lets the user pass Hive configs 
> such as hive.metastore.warehouse.dir, hive.metastore.uris, 
> hive.repl.replica.functions.root.dir etc., which can be used to establish a 
> connection with the replica warehouse.
> The configs received from the WITH clause of REPL LOAD are not set properly on 
> the tasks created (due to changes from HIVE-18716). It is also required to 
> re-fetch the Hive db object if the configs are changed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19017) Add util function to determine if 2 ValidWriteIdLists are at the same committed ID

2018-03-23 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-19017:
--
Attachment: HIVE-19017.2.patch

> Add util function to determine if 2 ValidWriteIdLists are at the same 
> committed ID
> --
>
> Key: HIVE-19017
> URL: https://issues.apache.org/jira/browse/HIVE-19017
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-19017.1.patch, HIVE-19017.2.patch
>
>
> May be useful for the materialized view/results cache work, since this could 
> be used to determine whether there have been any changes to a table between 
> when the materialization was generated and when a query tries to use it.
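
A rough sketch of such a utility, assuming ValidWriteIdList exposes 
getHighWatermark() and getInvalidWriteIds() (the method and class names here 
are an assumption; the actual patch may differ): two snapshots are at the 
same committed point when both values agree.

{code:java}
import java.util.Arrays;

import org.apache.hadoop.hive.common.ValidWriteIdList;

public class WriteIdListUtils {
  /**
   * Sketch only: returns true when the two snapshots describe the same
   * committed state, i.e. the same high watermark and the same set of
   * open/aborted (invalid) write IDs.
   */
  public static boolean isAtSameCommittedId(ValidWriteIdList a, ValidWriteIdList b) {
    if (a == null || b == null) {
      return a == b;
    }
    return a.getHighWatermark() == b.getHighWatermark()
        && Arrays.equals(a.getInvalidWriteIds(), b.getInvalidWriteIds());
  }
}
{code}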



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18727) Update GenericUDFEnforceNotNullConstraint to throw an ERROR instead of Exception on failure

2018-03-23 Thread Kryvenko Igor (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411710#comment-16411710
 ] 

Kryvenko Igor commented on HIVE-18727:
--

[~vgarg] Is the patch ok?

> Update GenericUDFEnforceNotNullConstraint to throw an ERROR instead of 
> Exception on failure
> ---
>
> Key: HIVE-18727
> URL: https://issues.apache.org/jira/browse/HIVE-18727
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vineet Garg
>Assignee: Kryvenko Igor
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18727.02.patch, HIVE-18727.patch
>
>
> Throwing an error (rather than an exception) makes TezProcessor stop 
> retrying the task. Since this is a NOT NULL constraint violation, we don't 
> want TezProcessor to keep retrying on failure.
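
For illustration, the change amounts to throwing a java.lang.Error subclass 
from the UDF instead of a HiveException; the class name below is made up and 
not necessarily what the patch uses:

{code:java}
// Illustrative only: an Error is not caught and retried by the task
// runtime the way an exception is, so the job fails fast.
public class ConstraintViolationError extends Error {
  public ConstraintViolationError(String message) {
    super(message);
  }
}

// Inside the UDF's evaluate(), on a NOT NULL violation, one would then
// throw new ConstraintViolationError("NOT NULL constraint violated!")
// instead of throwing a HiveException.
{code}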



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18991) Drop database cascade doesn't work with materialized views

2018-03-23 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411709#comment-16411709
 ] 

Alan Gates commented on HIVE-18991:
---

HiveMetaStore.drop_database_core fetches the tables in batches.  This means 
your check inside the loop for materialized views won't work.  You'll have to 
fetch all the materialized views separately first and drop them.  Then fetch 
all the other tables in batches and drop them.
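
A self-contained sketch of the suggested ordering (the MetaStoreClient 
interface and its method names below are hypothetical stand-ins for the real 
metastore calls):

{code:java}
import java.util.ArrayList;
import java.util.List;

public class DropDatabaseSketch {

  // Hypothetical stand-in for the real metastore API.
  interface MetaStoreClient {
    List<String> getMaterializedViews(String dbName);
    List<String> getAllTables(String dbName);
    void dropTable(String dbName, String tableName);
  }

  static void dropDatabaseCascade(MetaStoreClient ms, String dbName) {
    // 1. Fetch ALL materialized views up front and drop them first, so no
    //    base table disappears before a view that depends on it.
    List<String> views = ms.getMaterializedViews(dbName);
    for (String mv : views) {
      ms.dropTable(dbName, mv);
    }
    // 2. Then fetch the remaining tables and drop them in batches.
    List<String> tables = new ArrayList<>(ms.getAllTables(dbName));
    tables.removeAll(views);
    final int batchSize = 100; // illustrative batch size
    for (int i = 0; i < tables.size(); i += batchSize) {
      for (String t : tables.subList(i, Math.min(i + batchSize, tables.size()))) {
        ms.dropTable(dbName, t);
      }
    }
  }
}
{code}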

> Drop database cascade doesn't work with materialized views
> --
>
> Key: HIVE-18991
> URL: https://issues.apache.org/jira/browse/HIVE-18991
> Project: Hive
>  Issue Type: Bug
>  Components: Materialized views, Metastore
>Affects Versions: 3.0.0
>Reporter: Alan Gates
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-18991.01.patch, HIVE-18991.patch
>
>
> Create a database, add a table and then a materialized view that depends on 
> the table.  Then drop the database with cascade set.  Sometimes this will 
> fail because when HiveMetaStore.drop_database_core goes to drop all of the 
> tables it may drop the base table before the materialized view, which will 
> cause an integrity constraint violation in the RDBMS.  To resolve this, that 
> method should be changed to fetch and drop materialized views before tables.
> cc [~jcamachorodriguez]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18967) Standalone metastore SQL upgrade scripts do not properly set schema version

2018-03-23 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-18967:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Patch committed.  Thanks Thejas for the review.

> Standalone metastore SQL upgrade scripts do not properly set schema version
> ---
>
> Key: HIVE-18967
> URL: https://issues.apache.org/jira/browse/HIVE-18967
> Project: Hive
>  Issue Type: Bug
>  Components: Standalone Metastore
>Affects Versions: 3.0.0
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18967.patch
>
>
> The new combined upgrade scripts for Hive 2.3 to 3.0 transition do not 
> properly set the schema version after they have completed the upgrade.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18755) Modifications to the metastore for catalogs

2018-03-23 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16411634#comment-16411634
 ] 

Alan Gates commented on HIVE-18755:
---

Responses to [~akolb]'s comments:

I don't think I broke wire compatibility anywhere.  I made catalog required 
only in newer structs that have never been in a release (like 
DefaultConstraintRequest) and optional in anything that has been in a release.

I can add comments to the thrift code.

CatalogName is an attempt to avoid the problem we have with Databases and 
Tables and such where we are putting the string values directly in the method 
calls, making it hard to add arguments to the method later.  It makes 
getCatalog and dropCatalog future proof.

Catalog location is the same thing as dblocation, the place where data in the 
catalog will be stored.  It's not optional because we have to put the data 
somewhere.  I could have inferred it from the catalog name, but it doesn't seem 
unreasonable to require an administrator to designate a location for a catalog. 
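
As a sketch of that pattern (plain Java stand-ins, not the actual 
thrift-generated code), wrapping the name in a request struct means new 
fields can be added later without touching any method signatures:

{code:java}
public class CatalogApiSketch {

  // Minimal stand-in for the metastore Catalog object.
  static class Catalog {}

  // Request wrapper: future fields (options, capabilities, ...) can be
  // added here without changing getCatalog/dropCatalog signatures.
  static class CatalogName {
    private final String name;
    CatalogName(String name) { this.name = name; }
    String getName() { return name; }
  }

  interface MetaStoreApi {
    Catalog getCatalog(CatalogName name); // rather than getCatalog(String)
    void dropCatalog(CatalogName name);
  }
}
{code}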
 

> Modifications to the metastore for catalogs
> ---
>
> Key: HIVE-18755
> URL: https://issues.apache.org/jira/browse/HIVE-18755
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18755.2.patch, HIVE-18755.nothrift, HIVE-18755.patch
>
>
> Step 1 of adding catalogs is to add support in the metastore.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

