[jira] [Updated] (HIVE-19435) Data loss when incremental replication with drop partitioned table followed by create and insert-into non-partitioned table using same name.

2018-05-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19435:

Summary: Data loss when incremental replication with drop partitioned table 
followed by create and insert-into non-partitioned table using same name.  
(was: Data loss when incremental replication with Drop partitioned table 
followed by create and insert-into non-partitioned table using same name.)

> Data loss when incremental replication with drop partitioned table followed 
> by create and insert-into non-partitioned table using same name.
> 
>
> Key: HIVE-19435
> URL: https://issues.apache.org/jira/browse/HIVE-19435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, replication
> Fix For: 3.0.0, 3.1.0
>
>
> If the incremental dump has a drop of a partitioned table followed by a 
> create/insert on a non-partitioned table with the same name, the data is not 
> replicated. Explained below.
> Let's say we have a partitioned table T1 which was already replicated to 
> target.
> DROP_TABLE(T1)->CREATE_TABLE(T1) (Non-partitioned) -> INSERT(T1)(10) 
> After REPL LOAD, T1 doesn't have any data.
>  
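> For reference, a minimal sketch of the same sequence over JDBC (the endpoint 
> and column are illustrative, not from this report; assumes the Hive JDBC 
> driver is on the classpath):
> {code:java}
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.Statement;
> 
> public class ReplDropRecreateRepro {
>   public static void main(String[] args) throws Exception {
>     // Illustrative HiveServer2 endpoint on the replication source.
>     String url = "jdbc:hive2://source-hs2:10000/default";
>     try (Connection conn = DriverManager.getConnection(url);
>          Statement stmt = conn.createStatement()) {
>       // T1 is partitioned and already replicated to the target.
>       stmt.execute("DROP TABLE T1");
>       // Recreate with the same name, but non-partitioned.
>       stmt.execute("CREATE TABLE T1 (a INT)");
>       stmt.execute("INSERT INTO T1 VALUES (10)");
>       // Per the report, after the next incremental REPL DUMP + REPL LOAD
>       // cycle, T1 on the target has no data.
>     }
>   }
> }
> {code}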



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18533) Add option to use InProcessLauncher to submit spark jobs

2018-05-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465530#comment-16465530
 ] 

Hive QA commented on HIVE-18533:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12922206/HIVE-18831.93.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 35 failed/errored test(s), 14321 tests 
executed
*Failed tests:*
{noformat}
TestCustomAuthentication - did not produce a TEST-*.xml file (likely timed out) 
(batchId=236)
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed 
out) (batchId=217)
TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez2]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_dynpart_hashjoin_1]
 (batchId=174)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_vector_dynpart_hashjoin_1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=105)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] 
(batchId=96)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 
(batchId=228)
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232)
org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate
 (batchId=231)
org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys
 (batchId=231)
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=235)
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgressParallel 
(batchId=235)
org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239)
org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveAndKill 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryElapsedTime
 (batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryExecutionTime
 (batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerVertexRawInputSplitsNoKill
 (batchId=241)
org.apache.hive.spark.client.rpc.TestRpc.testServerPort (batchId=304)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10740/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10740/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10740/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 35 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12922206 - PreCommit-HIVE-Build

> Add option to use InProcessLauncher to submit spark jobs
> 
>
> Key: HIVE-18533
> URL: https://issues.apache.org/jira/browse/HIVE-18533
> Project: Hive
>  Issue Type: Improvement
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18533.1.patch, HIVE-18533.2.patc

[jira] [Assigned] (HIVE-19436) NullPointerException while getting block info

2018-05-07 Thread Amruth S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amruth S reassigned HIVE-19436:
---


> NullPointerException while getting block info
> -
>
> Key: HIVE-19436
> URL: https://issues.apache.org/jira/browse/HIVE-19436
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.3.2
>Reporter: Amruth S
>Assignee: Amruth S
>Priority: Major
>
> From Hive 2.3.2 onwards, there are cases where the block info object turns 
> out to be null 
> (src/java/org/apache/hadoop/hive/ql/exec/persistence/PTFRowContainer.java).
> It comes up in this code path:
>  
> {code:java}
> if ( blockInfos.size() > 0 ) {
>  InputSplit[] inputSplits = getInputSplits();
>  FileSplit fS = null;
>  BlockInfo bI = blockInfos.get(0);
> ...
> {code}
>  
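> As an illustration only (the actual fix belongs to this issue's patch and may 
> differ), a defensive helper that fails with context instead of letting a null 
> first element surface later as an opaque NullPointerException:
> {code:java}
> import java.util.List;
> 
> public final class Guards {
>   // Hypothetical helper: return the first element if present and non-null,
>   // otherwise fail immediately with a descriptive message.
>   public static <T> T firstOrFail(List<T> items, String what) {
>     if (items.isEmpty() || items.get(0) == null) {
>       throw new IllegalStateException("Missing " + what + " for first split");
>     }
>     return items.get(0);
>   }
> }
> {code}
> A call site would then read: BlockInfo bI = firstOrFail(blockInfos, "block info");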



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19437) HiveServer2 Drops connection to Metastore when hiverserver2 webui is enabled

2018-05-07 Thread rr (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rr updated HIVE-19437:
--
Description: 
 

When SSL is enabled for the HiveServer2 web UI on port 10002, HiveServer2 is 
unable to start up. It keeps connecting to the metastore, then drops the 
connection and retries again. The HiveServer2 pid will be available, but it is 
not actually up, as it drops the metastore connection.
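
For reference, web UI SSL is normally enabled through hive-site.xml properties 
like the following (keystore path and password here are placeholders):

{code:xml}
<property>
  <name>hive.server2.webui.port</name>
  <value>10002</value>
</property>
<property>
  <name>hive.server2.webui.use.ssl</name>
  <value>true</value>
</property>
<property>
  <name>hive.server2.webui.keystore.path</name>
  <value>/path/to/keystore.jks</value>
</property>
<property>
  <name>hive.server2.webui.keystore.password</name>
  <value>changeit</value>
</property>
{code}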

The logs show the following:

2018-05-07T04:45:52,980 INFO [main] sqlstd.SQLStdHiveAccessController: Created 
SQLStdHiveAccessController for session context : HiveAuthzSessionContext 
[sessionString=9f65e1ba-8810-47ee-a370-238606f02479, clientType=HIVESERVER2]
 2018-05-07T04:45:52,980 WARN [main] session.SessionState: 
METASTORE_FILTER_HOOK will be ignored, since 
hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.

2018-05-07T04:45:52,981 INFO [main] hive.metastore: Mestastore configuration 
hive.metastore.filter.hook changed from 
org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl to 
org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook

2018-05-07T04:45:52,981 INFO [main] hive.metastore: Closed a connection to 
metastore, current connections: 0

2018-05-07T04:45:52,982 INFO [main] hive.metastore: Trying to connect to 
metastore with URI thrift://localhost:9083

2018-05-07T04:45:52,982 INFO [main] hive.metastore: Opened a connection to 
metastore, current connections: 1

2018-05-07T04:45:52,985 INFO [main] hive.metastore: Connected to metastore.

2018-05-07T04:45:52,986 INFO [main] service.CompositeService: Operation log 
root directory is created: /var/hive/hs2log/tmp

2018-05-07T04:45:52,986 INFO [main] service.CompositeService: HiveServer2: 
Background operation thread pool size: 100

2018-05-07T04:45:52,986 INFO [main] service.CompositeService: HiveServer2: 
Background operation thread wait queue size: 100

2018-05-07T04:45:52,986 INFO [main] service.CompositeService: HiveServer2: 
Background operation thread keepalive time: 10 seconds

2018-05-07T04:45:52,988 INFO [main] hive.metastore: Closed a connection to 
metastore, current connections: 0

  was:
 

When SSL is enabled for the HiveServer2 web UI on port 10002, HiveServer2 is 
unable to start up. It keeps connecting to the Hive metastore, then drops the 
connection and retries again. The HiveServer2 pid will be available, but it is 
not actually up, as it drops the metastore connection.

The logs show the following:

 

2018-05-07T04:45:52,980 INFO [main] sqlstd.SQLStdHiveAccessController: Created 
SQLStdHiveAccessController for session context : HiveAuthzSessionContext 
[sessionString=9f65e1ba-8810-47ee-a370-238606f02479, clientType=HIVESERVER2]
2018-05-07T04:45:52,980 WARN [main] session.SessionState: METASTORE_FILTER_HOOK 
will be ignored, since hive.security.authorization.manager is set to instance 
of HiveAuthorizerFactory.

2018-05-07T04:45:52,981 INFO [main] hive.metastore: Mestastore configuration 
hive.metastore.filter.hook changed from 
org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl to 
org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook

2018-05-07T04:45:52,981 INFO [main] hive.metastore: Closed a connection to 
metastore, current connections: 0

2018-05-07T04:45:52,982 INFO [main] hive.metastore: Trying to connect to 
metastore with URI thrift://localhost:9083

2018-05-07T04:45:52,982 INFO [main] hive.metastore: Opened a connection to 
metastore, current connections: 1

2018-05-07T04:45:52,985 INFO [main] hive.metastore: Connected to metastore.

2018-05-07T04:45:52,986 INFO [main] service.CompositeService: Operation log 
root directory is created: /var/hive/hs2log/tmp

2018-05-07T04:45:52,986 INFO [main] service.CompositeService: HiveServer2: 
Background operation thread pool size: 100

2018-05-07T04:45:52,986 INFO [main] service.CompositeService: HiveServer2: 
Background operation thread wait queue size: 100

2018-05-07T04:45:52,986 INFO [main] service.CompositeService: HiveServer2: 
Background operation thread keepalive time: 10 seconds

2018-05-07T04:45:52,988 INFO [main] hive.metastore: Closed a connection to 
metastore, current connections: 0


> HiveServer2 Drops connection to Metastore when hiverserver2 webui is enabled
> 
>
> Key: HIVE-19437
> URL: https://issues.apache.org/jira/browse/HIVE-19437
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, SQL, Web UI
>Affects Versions: 2.1.1
>Reporter: rr
>Priority: Major
>
>  
> When SSL is enabled for the HiveServer2 web UI on port 10002, HiveServer2 is 
> unable to start up. It keeps connecting to the metastore, then drops the 
> connection and retries again. The HiveServer2 pid will be available, but it is 
> not actually up, as it drops the metastore 

[jira] [Updated] (HIVE-19436) NullPointerException while getting block info

2018-05-07 Thread Amruth S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amruth S updated HIVE-19436:

Attachment: HIVE-19436.patch

> NullPointerException while getting block info
> -
>
> Key: HIVE-19436
> URL: https://issues.apache.org/jira/browse/HIVE-19436
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.3.2
>Reporter: Amruth S
>Assignee: Amruth S
>Priority: Major
> Attachments: HIVE-19436.patch
>
>
> From Hive 2.3.2 onwards, there are cases where the block info object turns 
> out to be null 
> (src/java/org/apache/hadoop/hive/ql/exec/persistence/PTFRowContainer.java).
> It comes up in this code path:
>  
> {code:java}
> if ( blockInfos.size() > 0 ) {
>  InputSplit[] inputSplits = getInputSplits();
>  FileSplit fS = null;
>  BlockInfo bI = blockInfos.get(0);
> ...
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19436) NullPointerException while getting block info

2018-05-07 Thread Amruth S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amruth S updated HIVE-19436:

Status: Patch Available  (was: Open)

> NullPointerException while getting block info
> -
>
> Key: HIVE-19436
> URL: https://issues.apache.org/jira/browse/HIVE-19436
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.3.2
>Reporter: Amruth S
>Assignee: Amruth S
>Priority: Major
> Attachments: HIVE-19436.patch
>
>
> From Hive 2.3.2 onwards, there are cases where the block info object turns 
> out to be null 
> (src/java/org/apache/hadoop/hive/ql/exec/persistence/PTFRowContainer.java).
> It comes up in this code path:
>  
> {code:java}
> if ( blockInfos.size() > 0 ) {
>  InputSplit[] inputSplits = getInputSplits();
>  FileSplit fS = null;
>  BlockInfo bI = blockInfos.get(0);
> ...
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19225) Class cast exception while running certain queries with UDAF like rank on internal struct columns

2018-05-07 Thread Amruth S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amruth S updated HIVE-19225:

Attachment: HIVE-19225.patch

> Class cast exception while running certain queries with UDAF like rank on 
> internal struct columns
> -
>
> Key: HIVE-19225
> URL: https://issues.apache.org/jira/browse/HIVE-19225
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.3.2
>Reporter: Amruth S
>Priority: Major
> Attachments: HIVE-19225.patch
>
>
> Certain queries with the rank function cause a class cast exception.
> {noformat}
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct cannot be cast to 
> org.apache.hadoop.hive.serde2.io.TimestampWritable
>   at 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableTimestampObjectInspector.getPrimitiveJavaObject(WritableTimestampObjectInspector.java:39)
>   at 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableTimestampObjectInspector.getPrimitiveJavaObject(WritableTimestampObjectInspector.java:25)
>   at 
> org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:412)
>   at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDAFRank.copyToStandardObject(GenericUDAFRank.java:219)
>   at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDAFRank$GenericUDAFAbstractRankEvaluator.iterate(GenericUDAFRank.java:153)
>   at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator.aggregate(GenericUDAFEvaluator.java:192)
>   at 
> org.apache.hadoop.hive.ql.udf.ptf.WindowingTableFunction.processRow(WindowingTableFunction.java:407)
>   at 
> org.apache.hadoop.hive.ql.exec.PTFOperator$PTFInvocation.processRow(PTFOperator.java:325)
>   at 
> org.apache.hadoop.hive.ql.exec.PTFOperator.process(PTFOperator.java:139)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
>   at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
>   at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:236)
>   ... 7 more
> 2018-03-29 09:28:43,432 INFO [main] org.apache.hadoop.mapred.Task: Runnning 
> cleanup for the task
> {noformat}
> The following change fixes this.
> The evaluator seems to skip the case where the primary object emitted is a 
> struct. I modified the code to find the field inside the struct:
> {code:java}
> diff --git 
> a/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/StandardStructObjectInspector.java
>  
> b/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/StandardStructObjectInspector.java
> index 36a500790a..e7731e99d7 100644
> --- 
> a/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/StandardStructObjectInspector.java
> +++ 
> b/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/StandardStructObjectInspector.java
> @@ -22,6 +22,7 @@
> import java.util.Arrays;
> import java.util.List;
> +import org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct;
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
> @@ -171,6 +172,10 @@ public Object getStructFieldData(Object data, 
> StructField fieldRef) {
> // so we have to do differently.
> boolean isArray = data.getClass().isArray();
> if (!isArray && !(data instanceof List)) {
> + if (data instanceof LazyBinaryStruct
> + && fieldRef.getFieldObjectInspector().getCategory() == Category.PRIMITIVE) {
> + return ((LazyBinaryStruct) data).getField(((MyField) fieldRef).fieldID);
> + }
> if (!warned) {
> LOG.warn("Invalid type for struct " + data.getClass());
> LOG.warn("ignoring similar errors.");
> {code}
> Let me know your thoughts
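> For readability, the guard inserted by the diff above, rendered as plain code 
> (comments added here, not part of the patch):
> {code:java}
> if (data instanceof LazyBinaryStruct
>     && fieldRef.getFieldObjectInspector().getCategory() == Category.PRIMITIVE) {
>   // The row arrives as a lazy binary struct: extract the requested primitive
>   // field instead of passing the whole struct on as the field value.
>   return ((LazyBinaryStruct) data).getField(((MyField) fieldRef).fieldID);
> }
> {code}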



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19225) Class cast exception while running certain queries with UDAF like rank on internal struct columns

2018-05-07 Thread Amruth S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amruth S updated HIVE-19225:

Assignee: Amruth S
  Status: Patch Available  (was: Open)

> Class cast exception while running certain queries with UDAF like rank on 
> internal struct columns
> -
>
> Key: HIVE-19225
> URL: https://issues.apache.org/jira/browse/HIVE-19225
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.3.2
>Reporter: Amruth S
>Assignee: Amruth S
>Priority: Major
> Attachments: HIVE-19225.patch
>
>
> Certain queries with the rank function cause a class cast exception.
> {noformat}
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct cannot be cast to 
> org.apache.hadoop.hive.serde2.io.TimestampWritable
>   at 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableTimestampObjectInspector.getPrimitiveJavaObject(WritableTimestampObjectInspector.java:39)
>   at 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableTimestampObjectInspector.getPrimitiveJavaObject(WritableTimestampObjectInspector.java:25)
>   at 
> org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:412)
>   at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDAFRank.copyToStandardObject(GenericUDAFRank.java:219)
>   at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDAFRank$GenericUDAFAbstractRankEvaluator.iterate(GenericUDAFRank.java:153)
>   at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator.aggregate(GenericUDAFEvaluator.java:192)
>   at 
> org.apache.hadoop.hive.ql.udf.ptf.WindowingTableFunction.processRow(WindowingTableFunction.java:407)
>   at 
> org.apache.hadoop.hive.ql.exec.PTFOperator$PTFInvocation.processRow(PTFOperator.java:325)
>   at 
> org.apache.hadoop.hive.ql.exec.PTFOperator.process(PTFOperator.java:139)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
>   at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
>   at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:236)
>   ... 7 more
> 2018-03-29 09:28:43,432 INFO [main] org.apache.hadoop.mapred.Task: Runnning 
> cleanup for the task
> {noformat}
> The following change fixes this.
> The evaluator seems to skip the case where the primary object emitted is a 
> struct. I modified the code to find the field inside the struct:
> {code:java}
> diff --git 
> a/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/StandardStructObjectInspector.java
>  
> b/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/StandardStructObjectInspector.java
> index 36a500790a..e7731e99d7 100644
> --- 
> a/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/StandardStructObjectInspector.java
> +++ 
> b/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/StandardStructObjectInspector.java
> @@ -22,6 +22,7 @@
> import java.util.Arrays;
> import java.util.List;
> +import org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct;
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
> @@ -171,6 +172,10 @@ public Object getStructFieldData(Object data, 
> StructField fieldRef) {
> // so we have to do differently.
> boolean isArray = data.getClass().isArray();
> if (!isArray && !(data instanceof List)) {
> + if (data instanceof LazyBinaryStruct
> + && fieldRef.getFieldObjectInspector().getCategory() == Category.PRIMITIVE) {
> + return ((LazyBinaryStruct) data).getField(((MyField) fieldRef).fieldID);
> + }
> if (!warned) {
> LOG.warn("Invalid type for struct " + data.getClass());
> LOG.warn("ignoring similar errors.");
> {code}
> Let me know your thoughts



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18593) NPE on vectorization group by

2018-05-07 Thread Amruth S (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465583#comment-16465583
 ] 

Amruth S commented on HIVE-18593:
-

This might be fixed by HIVE-18622. Will check and report back.

> NPE on vectorization group by
> -
>
> Key: HIVE-18593
> URL: https://issues.apache.org/jira/browse/HIVE-18593
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2
>Affects Versions: 2.3.2
>Reporter: Amruth S
>Priority: Major
>
> Vectorization of some queries seems to be failing with null pointer 
> exceptions. This happens only with the 2.3.2 release and not the older ones.
> In this case (in BytesColumnVector.java), vector[0] is null, isRepeating is 
> true, length[0] is 0, and start[0] is 0:
> {code:java}
> public void copySelected(
> boolean selectedInUse, int[] sel, int size, BytesColumnVector output) {
>   // Output has nulls if and only if input has nulls.
>   output.noNulls = noNulls;
>   output.isRepeating = false;
>   // Handle repeating case
>   if (isRepeating) {
> output.setVal(0, vector[0], start[0], length[0]);
> output.isNull[0] = isNull[0];
> output.isRepeating = true;
> return;
>   }
> {code}
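> As an illustration, a null-aware version of the repeating branch that avoids 
> dereferencing a null vector[0] (a sketch only; as noted in the comment above, 
> this may already be addressed by HIVE-18622):
> {code:java}
> // Handle the repeating case without touching vector[0] when the single
> // repeated value is null (isRepeating == true, isNull[0] == true).
> if (isRepeating) {
>   if (noNulls || !isNull[0]) {
>     output.setVal(0, vector[0], start[0], length[0]);
>     output.isNull[0] = false;
>   } else {
>     output.isNull[0] = true;
>     output.noNulls = false;
>   }
>   output.isRepeating = true;
>   return;
> }
> {code}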
> Exception trace below
> {code:java}
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing row
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:883)
> at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:86)
> ... 17 more
> Caused by: java.lang.NullPointerException
> at java.lang.System.arraycopy(Native Method)
> at 
> org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector.setVal(BytesColumnVector.java:173)
> at 
> org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector.copySelected(BytesColumnVector.java:321)
> at 
> org.apache.hadoop.hive.ql.exec.vector.expressions.IfExprStringGroupColumnStringGroupColumn.evaluate(IfExprStringGroupColumnStringGroupColumn.java:85)
> at 
> org.apache.hadoop.hive.ql.exec.vector.expressions.aggregates.gen.VectorUDAFMaxString.aggregateInputSelection(VectorUDAFMaxString.java:135)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator$ProcessingModeBase.processAggregators(VectorGroupByOperator.java:218)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator$ProcessingModeHashAggregate.doProcessBatch(VectorGroupByOperator.java:408)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator$ProcessingModeBase.processBatch(VectorGroupByOperator.java:179)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator.process(VectorGroupByOperator.java:1021)
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:137)
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.process(VectorFilterOperator.java:123)
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
> at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:130)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:783)
> ... 18 more
> {code}
>  
> *Table details* 
> {code:java}
> CREATE TABLE `test_table`(
>  `a` string,
>  `b` string,
>  `c` string)
>  ROW FORMAT SERDE
>  'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
> STORED AS INPUTFORMAT
>  'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
>  OUTPUTFORMAT
>  'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
>  TBLPROPERTIES (
>  'orc.compress'='SNAPPY',
>  'orc.compress.size'='262144',
>  'orc.create.index'='true',
>  'orc.row.index.stride'='1',
>  'orc.stripe.size'='268435456',
>  'transient_lastDdlTime'='1517556432');{code}
> *Query*
> {code:java}
> SELECT
>  NVL(Max(CASE WHEN b IN ('some_literal') THEN b ELSE c END),'') AS scol
> FROM test_table
> GROUP BY a
> LIMIT 1000;{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18497) JDBC connection parameter to control socket read and connect timeout

2018-05-07 Thread Amruth S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amruth S updated HIVE-18497:

Attachment: HIVE-18497.patch

> JDBC connection parameter to control socket read and connect timeout
> 
>
> Key: HIVE-18497
> URL: https://issues.apache.org/jira/browse/HIVE-18497
> Project: Hive
>  Issue Type: Improvement
>  Components: Clients, JDBC
>Reporter: Amruth S
>Priority: Minor
> Attachments: HIVE-18497.patch
>
>
> Hive server failures can leave the JDBC client stuck in socketRead.
> Users should be able to configure a socket read timeout to fail fast in case 
> of server failures.
> *Proposed solution*
> Add a JDBC connection parameter, 
> *hive.client.read.socket.timeoutmillis*, 
> to control the socket read timeout and connect timeout in both TCP and HTTP 
> mode.
> Let me know your thoughts.
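> If adopted, usage would presumably look like any other connection-level 
> setting in the JDBC URL (a sketch; the parameter below is this proposal, not 
> an existing released option, and its exact URL placement would depend on the 
> implementation):
> {code:java}
> import java.sql.Connection;
> import java.sql.DriverManager;
> 
> public class TimeoutUrlSketch {
>   public static void main(String[] args) throws Exception {
>     // Hypothetical: the proposed parameter appended as a session setting.
>     String url = "jdbc:hive2://hs2-host:10000/default"
>         + ";hive.client.read.socket.timeoutmillis=30000";
>     try (Connection conn = DriverManager.getConnection(url)) {
>       System.out.println("connected: " + !conn.isClosed());
>     }
>   }
> }
> {code}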



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18497) JDBC connection parameter to control socket read and connect timeout

2018-05-07 Thread Amruth S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amruth S updated HIVE-18497:

Assignee: Amruth S
  Status: Patch Available  (was: Open)

> JDBC connection parameter to control socket read and connect timeout
> 
>
> Key: HIVE-18497
> URL: https://issues.apache.org/jira/browse/HIVE-18497
> Project: Hive
>  Issue Type: Improvement
>  Components: Clients, JDBC
>Reporter: Amruth S
>Assignee: Amruth S
>Priority: Minor
> Attachments: HIVE-18497.patch
>
>
> Hive server failures are making the JDBC client get stuck in socketRead.
> Users should be able to configure socket read timeout to fail fast in case of 
> server failures.
> *Proposed solution*
> Add a Jdbc connection param 
> *hive.client.read.socket.timeoutmillis*
> This can control the socket read timeout, connect timeout in TCP as well as 
> HTTP mode.
> Let me know your thoughts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19225) Class cast exception while running certain queries with UDAF like rank on internal struct columns

2018-05-07 Thread Zoltan Haindrich (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465609#comment-16465609
 ] 

Zoltan Haindrich commented on HIVE-19225:
-

[~amrk7] could you share a query which triggers this?
I ask because I think the same problem might be triggered in other cases as 
well: I'm currently thinking about Map/Union...

> Class cast exception while running certain queries with UDAF like rank on 
> internal struct columns
> -
>
> Key: HIVE-19225
> URL: https://issues.apache.org/jira/browse/HIVE-19225
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.3.2
>Reporter: Amruth S
>Assignee: Amruth S
>Priority: Major
> Attachments: HIVE-19225.patch
>
>
> Certain queries with the rank function cause a class cast exception.
> {noformat}
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct cannot be cast to 
> org.apache.hadoop.hive.serde2.io.TimestampWritable
>   at 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableTimestampObjectInspector.getPrimitiveJavaObject(WritableTimestampObjectInspector.java:39)
>   at 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableTimestampObjectInspector.getPrimitiveJavaObject(WritableTimestampObjectInspector.java:25)
>   at 
> org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:412)
>   at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDAFRank.copyToStandardObject(GenericUDAFRank.java:219)
>   at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDAFRank$GenericUDAFAbstractRankEvaluator.iterate(GenericUDAFRank.java:153)
>   at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator.aggregate(GenericUDAFEvaluator.java:192)
>   at 
> org.apache.hadoop.hive.ql.udf.ptf.WindowingTableFunction.processRow(WindowingTableFunction.java:407)
>   at 
> org.apache.hadoop.hive.ql.exec.PTFOperator$PTFInvocation.processRow(PTFOperator.java:325)
>   at 
> org.apache.hadoop.hive.ql.exec.PTFOperator.process(PTFOperator.java:139)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
>   at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
>   at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:236)
>   ... 7 more
> 2018-03-29 09:28:43,432 INFO [main] org.apache.hadoop.mapred.Task: Runnning 
> cleanup for the task
> {noformat}
> The following change fixes this.
> The evaluator seems to skip the case where the primary object emitted is a 
> struct. I modified the code to find the field inside the struct:
> {code:java}
> diff --git 
> a/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/StandardStructObjectInspector.java
>  
> b/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/StandardStructObjectInspector.java
> index 36a500790a..e7731e99d7 100644
> --- 
> a/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/StandardStructObjectInspector.java
> +++ 
> b/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/StandardStructObjectInspector.java
> @@ -22,6 +22,7 @@
> import java.util.Arrays;
> import java.util.List;
> +import org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct;
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
> @@ -171,6 +172,10 @@ public Object getStructFieldData(Object data, 
> StructField fieldRef) {
> // so we have to do differently.
> boolean isArray = data.getClass().isArray();
> if (!isArray && !(data instanceof List)) {
> + if (data instanceof LazyBinaryStruct
> + && fieldRef.getFieldObjectInspector().getCategory() == Category.PRIMITIVE) {
> + return ((LazyBinaryStruct) data).getField(((MyField) fieldRef).fieldID);
> + }
> if (!warned) {
> LOG.warn("Invalid type for struct " + data.getClass());
> LOG.warn("ignoring similar errors.");
> {code}
> Let me know your thoughts



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19161) Add authorizations to information schema

2018-05-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465616#comment-16465616
 ] 

Hive QA commented on HIVE-19161:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
39s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
34s{color} | {color:blue} common in master has 62 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
29s{color} | {color:blue} accumulo-handler in master has 22 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
37s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
22s{color} | {color:blue} jdbc-handler in master has 11 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
19s{color} | {color:blue} ql in master has 2318 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
40s{color} | {color:blue} service in master has 49 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m  
8s{color} | {color:blue} standalone-metastore in master has 214 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
31s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
17s{color} | {color:red} accumulo-handler in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
20s{color} | {color:red} hcatalog-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
31s{color} | {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
13s{color} | {color:red} jdbc-handler in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
58s{color} | {color:red} ql in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
21s{color} | {color:red} service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
19s{color} | {color:red} accumulo-handler in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
32s{color} | {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
13s{color} | {color:red} jdbc-handler in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
19s{color} | {color:red} service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 19s{color} 
| {color:red} accumulo-handler in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 32s{color} 
| {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 13s{color} 
| {color:red} jdbc-handler in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 19s{color} 
| {color:red} service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} The patch common passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} accumulo-handler: The patch generated 0 new + 5 
unchanged - 15 fixed = 5 total (was 20) {color} |
| {color:green}+1{color} | {color:g

[jira] [Commented] (HIVE-18533) Add option to use InProcessLauncher to submit spark jobs

2018-05-07 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465624#comment-16465624
 ] 

Rui Li commented on HIVE-18533:
---

Hi [~stakiar], for SparkLauncherSparkClient, how about we use a thread to wait 
on the countdown latch and return a FutureTask like we did in 
SparkSubmitSparkClient? I think it makes the two clients more consistent and 
it's easier than implementing a custom Future. For example, when 
{{Future::cancel}} is called, threads waiting on {{Future::get}} should 
immediately be unblocked, and {{Future::isCancelled}} should return true. We 
don't have to worry about breaking these contracts if we use FutureTask. What 
do you think?
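
For reference, a minimal sketch of the suggested pattern: run the latch wait 
inside a FutureTask on its own thread, so the standard Future contract (cancel 
unblocks get, isCancelled flips to true) comes for free. Names here are 
illustrative, not from the patch.

{code:java}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

public class LatchFutureSketch {
  public static Future<Void> monitor(CountDownLatch done) {
    // FutureTask already implements cancellation semantics: cancel(true)
    // interrupts the waiting thread and unblocks any pending get() calls.
    FutureTask<Void> task = new FutureTask<>(() -> {
      done.await(); // released when the launched app reaches a terminal state
      return null;
    });
    Thread waiter = new Thread(task, "spark-launch-monitor");
    waiter.setDaemon(true);
    waiter.start();
    return task;
  }
}
{code}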

> Add option to use InProcessLauncher to submit spark jobs
> 
>
> Key: HIVE-18533
> URL: https://issues.apache.org/jira/browse/HIVE-18533
> Project: Hive
>  Issue Type: Improvement
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18533.1.patch, HIVE-18533.2.patch, 
> HIVE-18533.3.patch, HIVE-18533.4.patch, HIVE-18533.5.patch, 
> HIVE-18533.6.patch, HIVE-18533.7.patch, HIVE-18533.8.patch, 
> HIVE-18533.9.patch, HIVE-18533.91.patch, HIVE-18831.93.patch
>
>
> See discussion in HIVE-16484 for details.
> I think this will help with reducing the amount of time it takes to open a 
> HoS session + debuggability (no need launch a separate process to run a Spark 
> app).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19435) Incremental replication cause data loss if table with same name is dropped followed by create and insert-into with different partition type.

2018-05-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19435:

Summary: Incremental replication cause data loss if table with same name is 
dropped followed by create and insert-into with different partition type.  
(was: Data loss when incremental replication with drop partitioned table 
followed by create and insert-into non-partitioned table using same name.)

> Incremental replication cause data loss if table with same name is dropped 
> followed by create and insert-into with different partition type.
> 
>
> Key: HIVE-19435
> URL: https://issues.apache.org/jira/browse/HIVE-19435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, replication
> Fix For: 3.0.0, 3.1.0
>
>
> If the incremental dump has a drop of a partitioned table followed by a 
> create/insert on a non-partitioned table with the same name, the data is not 
> replicated. Explained below.
> Let's say we have a partitioned table T1 which was already replicated to 
> target.
> DROP_TABLE(T1)->CREATE_TABLE(T1) (Non-partitioned) -> INSERT(T1)(10) 
> After REPL LOAD, T1 doesn't have any data.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19435) Incremental replication cause data loss if table with same name is dropped followed by create and insert-into with different partition type.

2018-05-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19435:

Description: 
If the incremental dump has a drop of a partitioned table followed by a 
create/insert on a non-partitioned table with the same name, the data is not 
replicated. Explained below.

Let's say we have a partitioned table T1 which was already replicated to target.

DROP_TABLE(T1)->CREATE_TABLE(T1) (Non-partitioned) -> INSERT(T1)(10) 

After REPL LOAD, T1 doesn't have any data.

The same is valid for the non-partitioned to partitioned case as well.

 

  was:
If the incremental dump has a drop of a partitioned table followed by a 
create/insert on a non-partitioned table with the same name, the data is not 
replicated. Explained below.

Let's say we have a partitioned table T1 which was already replicated to target.

DROP_TABLE(T1)->CREATE_TABLE(T1) (Non-partitioned) -> INSERT(T1)(10) 

After REPL LOAD, T1 doesn't have any data.

 


> Incremental replication cause data loss if table with same name is dropped 
> followed by create and insert-into with different partition type.
> 
>
> Key: HIVE-19435
> URL: https://issues.apache.org/jira/browse/HIVE-19435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, replication
> Fix For: 3.0.0, 3.1.0
>
>
> If the incremental dump has a drop of a partitioned table followed by a 
> create/insert on a non-partitioned table with the same name, the data is not 
> replicated. Explained below.
> Let's say we have a partitioned table T1 which was already replicated to 
> target.
> DROP_TABLE(T1)->CREATE_TABLE(T1) (Non-partitioned) -> INSERT(T1)(10) 
> After REPL LOAD, T1 doesn't have any data.
> The same is valid for the non-partitioned to partitioned case as well.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19435) Incremental replication cause data loss if a table is dropped followed by create and insert-into with different partition type.

2018-05-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19435:

Summary: Incremental replication cause data loss if a table is dropped 
followed by create and insert-into with different partition type.  (was: 
Incremental replication cause data loss if table with same name is dropped 
followed by create and insert-into with different partition type.)

> Incremental replication cause data loss if a table is dropped followed by 
> create and insert-into with different partition type.
> ---
>
> Key: HIVE-19435
> URL: https://issues.apache.org/jira/browse/HIVE-19435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, replication
> Fix For: 3.0.0, 3.1.0
>
>
> If the incremental dump has a drop of a partitioned table followed by a 
> create/insert on a non-partitioned table with the same name, the data is not 
> replicated. Explained below.
> Let's say we have a partitioned table T1 which was already replicated to 
> target.
> DROP_TABLE(T1)->CREATE_TABLE(T1) (Non-partitioned) -> INSERT(T1)(10) 
> After REPL LOAD, T1 doesn't have any data.
> The same is valid for the non-partitioned to partitioned case as well.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HIVE-19435) Incremental replication cause data loss if a table is dropped followed by create and insert-into with different partition type.

2018-05-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-19435 started by Sankar Hariappan.
---
> Incremental replication cause data loss if a table is dropped followed by 
> create and insert-into with different partition type.
> ---
>
> Key: HIVE-19435
> URL: https://issues.apache.org/jira/browse/HIVE-19435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, replication
> Fix For: 3.0.0, 3.1.0
>
>
> If the incremental dump has a drop of a partitioned table followed by a 
> create/insert on a non-partitioned table with the same name, the data is not 
> replicated. Explained below.
> Let's say we have a partitioned table T1 which was already replicated to 
> target.
> DROP_TABLE(T1)->CREATE_TABLE(T1) (Non-partitioned) -> INSERT(T1)(10) 
> After REPL LOAD, T1 doesn't have any data.
> The same is valid for the non-partitioned to partitioned case as well.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19161) Add authorizations to information schema

2018-05-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465634#comment-16465634
 ] 

Hive QA commented on HIVE-19161:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12922196/HIVE-19161.15.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 34 failed/errored test(s), 14323 tests 
executed
*Failed tests:*
{noformat}
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed 
out) (batchId=217)
TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez2]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[default_constraint]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_vector_dynpart_hashjoin_1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat]
 (batchId=183)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=105)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] 
(batchId=96)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 
(batchId=228)
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232)
org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate
 (batchId=231)
org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys
 (batchId=231)
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=235)
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgressParallel 
(batchId=235)
org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryElapsedTime
 (batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryExecutionTime
 (batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerVertexRawInputSplitsNoKill
 (batchId=241)
org.apache.hive.spark.client.rpc.TestRpc.testServerPort (batchId=304)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10741/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10741/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10741/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 34 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12922196 - PreCommit-HIVE-Build

> Add authorizations to information schema
> 
>
> Key: HIVE-19161
> URL: https://issues.apache.org/jira/browse/HIVE-19161
> Project: Hive
>  Issue Type: Improvement
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-19161.1.patch, HIVE-19161.10.patch, 
> HIVE-19161.11.patch, HIVE-19161.12.patch, HIVE-19161.13.patch, 
> HIVE-19161.14.patch, HIVE-19161.15.patch, HIVE-19161.2.patch, 
> HIVE-

[jira] [Work stopped] (HIVE-19435) Incremental replication cause data loss if a table is dropped followed by create and insert-into with different partition type.

2018-05-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-19435 stopped by Sankar Hariappan.
---
> Incremental replication cause data loss if a table is dropped followed by 
> create and insert-into with different partition type.
> ---
>
> Key: HIVE-19435
> URL: https://issues.apache.org/jira/browse/HIVE-19435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19435.01.patch
>
>
> If the incremental dump has a drop of a partitioned table followed by a 
> create/insert on a non-partitioned table with the same name, the data is not 
> replicated. Explained below.
> Let's say we have a partitioned table T1 which was already replicated to 
> target.
> DROP_TABLE(T1)->CREATE_TABLE(T1) (Non-partitioned) -> INSERT(T1)(10) 
> After REPL LOAD, T1 doesn't have any data.
> The same is valid for the non-partitioned to partitioned case as well.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19435) Incremental replication cause data loss if a table is dropped followed by create and insert-into with different partition type.

2018-05-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19435:

Attachment: HIVE-19435.01.patch

> Incremental replication cause data loss if a table is dropped followed by 
> create and insert-into with different partition type.
> ---
>
> Key: HIVE-19435
> URL: https://issues.apache.org/jira/browse/HIVE-19435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19435.01.patch
>
>
> If the incremental dump has a drop of a partitioned table followed by a 
> create/insert on a non-partitioned table with the same name, the data is not 
> replicated. Explained below.
> Let's say we have a partitioned table T1 which was already replicated to 
> target.
> DROP_TABLE(T1)->CREATE_TABLE(T1) (Non-partitioned) -> INSERT(T1)(10) 
> After REPL LOAD, T1 doesn't have any data.
> The same is valid for the non-partitioned to partitioned case as well.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19435) Incremental replication cause data loss if a table is dropped followed by create and insert-into with different partition type.

2018-05-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465635#comment-16465635
 ] 

ASF GitHub Bot commented on HIVE-19435:
---

GitHub user sankarh opened a pull request:

https://github.com/apache/hive/pull/343

HIVE-19435: Incremental replication cause data loss if a table is dropped 
followed by create and insert-into with different partition type.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sankarh/hive HIVE-19435

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hive/pull/343.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #343


commit 5d1d9dccbd8298ee8b593283315a6ad4f0f33c6e
Author: Sankar Hariappan 
Date:   2018-05-07T08:41:05Z

HIVE-19435: Incremental replication cause data loss if a table is dropped 
followed by create and insert-into with different partition type.




> Incremental replication cause data loss if a table is dropped followed by 
> create and insert-into with different partition type.
> ---
>
> Key: HIVE-19435
> URL: https://issues.apache.org/jira/browse/HIVE-19435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19435.01.patch
>
>
> If the incremental dump has a drop of a partitioned table followed by a 
> create/insert on a non-partitioned table with the same name, the data is not 
> replicated. Explained below.
> Let's say we have a partitioned table T1 which was already replicated to 
> target.
> DROP_TABLE(T1)->CREATE_TABLE(T1) (Non-partitioned) -> INSERT(T1)(10) 
> After REPL LOAD, T1 doesn't have any data.
> The same is valid for the non-partitioned to partitioned case as well.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19435) Incremental replication cause data loss if a table is dropped followed by create and insert-into with different partition type.

2018-05-07 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-19435:
--
Labels: DR pull-request-available replication  (was: DR replication)

> Incremental replication cause data loss if a table is dropped followed by 
> create and insert-into with different partition type.
> ---
>
> Key: HIVE-19435
> URL: https://issues.apache.org/jira/browse/HIVE-19435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19435.01.patch
>
>
> If the incremental dump has a drop of a partitioned table followed by a 
> create/insert on a non-partitioned table with the same name, the data is not 
> replicated. Explained below.
> Let's say we have a partitioned table T1 which was already replicated to 
> target.
> DROP_TABLE(T1)->CREATE_TABLE(T1) (Non-partitioned) -> INSERT(T1)(10) 
> After REPL LOAD, T1 doesn't have any data.
> The same is valid for the non-partitioned to partitioned case as well.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19435) Incremental replication cause data loss if a table is dropped followed by create and insert-into with different partition type.

2018-05-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19435:

Attachment: (was: HIVE-19435.01.patch)

> Incremental replication cause data loss if a table is dropped followed by 
> create and insert-into with different partition type.
> ---
>
> Key: HIVE-19435
> URL: https://issues.apache.org/jira/browse/HIVE-19435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0, 3.1.0
>
>
> If the incremental dump has a drop of a partitioned table followed by a 
> create/insert on a non-partitioned table with the same name, the data is not 
> replicated. Explained below.
> Let's say we have a partitioned table T1 which was already replicated to 
> target.
> DROP_TABLE(T1)->CREATE_TABLE(T1) (Non-partitioned) -> INSERT(T1)(10) 
> After REPL LOAD, T1 doesn't have any data.
> The same is valid for the non-partitioned to partitioned case as well.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HIVE-19435) Incremental replication cause data loss if a table is dropped followed by create and insert-into with different partition type.

2018-05-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-19435 started by Sankar Hariappan.
---
> Incremental replication cause data loss if a table is dropped followed by 
> create and insert-into with different partition type.
> ---
>
> Key: HIVE-19435
> URL: https://issues.apache.org/jira/browse/HIVE-19435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0, 3.1.0
>
>
> If the incremental dump has a drop of a partitioned table followed by a 
> create/insert on a non-partitioned table with the same name, the data is not 
> replicated. Explained below.
> Let's say we have a partitioned table T1 which was already replicated to 
> target.
> DROP_TABLE(T1)->CREATE_TABLE(T1) (Non-partitioned) -> INSERT(T1)(10) 
> After REPL LOAD, T1 doesn't have any data.
> The same is valid for the non-partitioned to partitioned case as well.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19403) Demote 'Pattern' Logging

2018-05-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465658#comment-16465658
 ] 

Hive QA commented on HIVE-19403:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
22s{color} | {color:blue} ql in master has 2318 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10742/dev-support/hive-personality.sh
 |
| git revision | master / 88d224f |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10742/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Demote 'Pattern' Logging
> 
>
> Key: HIVE-19403
> URL: https://issues.apache.org/jira/browse/HIVE-19403
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0, 2.4.0
>Reporter: BELUGA BEHR
>Assignee: gonglinglei
>Priority: Trivial
>  Labels: noob
> Attachments: HIVE-19403.1.patch
>
>
> In the {{DDLTask}} class, there is some logging that is not helpful to a 
> cluster admin and should be demoted to _debug_ level logging.  In fact, in 
> one place in the code, it already is.
> {code}
> LOG.info("pattern: {}", showDatabasesDesc.getPattern());
> LOG.debug("pattern: {}", pattern);
> LOG.info("pattern: {}", showFuncs.getPattern());
> LOG.info("pattern: {}", showTblStatus.getPattern());
> {code}
> Here is an example... as an admin, I can already see what the pattern is, I 
> do not need this extra logging.  It provides no additional context.
> {code:java|title=Example}
> 2018-05-03 03:08:26,354 INFO  org.apache.hadoop.hive.ql.Driver: 
> [HiveServer2-Background-Pool: Thread-101980]: Executing 
> command(queryId=hive_20180503030808_e53c26ef-2280-4eca-929b-668503105e2e): 
> SHOW TABLE EXTENDED FROM my_db LIKE '*'
> 2018-05-03 03:08:26,355 INFO  hive.ql.exec.DDLTask: 
> [HiveServer2-Background-Pool: Thread-101980]: pattern: *
> {code}
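
For contrast, a minimal sketch of what the demoted logging would look like, 
assuming an SLF4J logger as in the snippet above; the class and method here are 
hypothetical stand-ins, not the actual {{DDLTask}} code:
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical stand-in for the DDLTask call sites: only the level changes.
public class ShowTableStatusExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(ShowTableStatusExample.class);

  void showTableStatus(String pattern) {
    // Before: LOG.info("pattern: {}", pattern);
    // The pattern is already visible in the logged query text, so emit it
    // only when debug logging is enabled.
    LOG.debug("pattern: {}", pattern);
  }
}
{code}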



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19435) Incremental replication cause data loss if a table is dropped followed by create and insert-into with different partition type.

2018-05-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19435:

Description: 
If the incremental dump has a drop of a partitioned table followed by a 
create/insert on a non-partitioned table with the same name, the data is not 
replicated. Explained below.

Let's say we have a partitioned table T1 which was already replicated to target.

DROP_TABLE(T1)->CREATE_TABLE(T1) (Non-partitioned) -> INSERT(T1)(10) 

After REPL LOAD, T1 doesn't have any data.

The same is valid for the non-partitioned to partitioned and partition spec 
mismatch cases as well.

 

  was:
If the incremental dump has a drop of a partitioned table followed by a 
create/insert on a non-partitioned table with the same name, the data is not 
replicated. Explained below.

Let's say we have a partitioned table T1 which was already replicated to target.

DROP_TABLE(T1)->CREATE_TABLE(T1) (Non-partitioned) -> INSERT(T1)(10) 

After REPL LOAD, T1 doesn't have any data.

The same is valid for the non-partitioned to partitioned case as well.

 


> Incremental replication cause data loss if a table is dropped followed by 
> create and insert-into with different partition type.
> ---
>
> Key: HIVE-19435
> URL: https://issues.apache.org/jira/browse/HIVE-19435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0, 3.1.0
>
>
> If the incremental dump has a drop of a partitioned table followed by a 
> create/insert on a non-partitioned table with the same name, the data is not 
> replicated. Explained below.
> Let's say we have a partitioned table T1 which was already replicated to 
> target.
> DROP_TABLE(T1)->CREATE_TABLE(T1) (Non-partitioned) -> INSERT(T1)(10) 
> After REPL LOAD, T1 doesn't have any data.
> The same is valid for the non-partitioned to partitioned and partition spec 
> mismatch cases as well.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19435) Incremental replication cause data loss if a table is dropped followed by create and insert-into with different partition type.

2018-05-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19435:

Attachment: HIVE-19435.01.patch

> Incremental replication cause data loss if a table is dropped followed by 
> create and insert-into with different partition type.
> ---
>
> Key: HIVE-19435
> URL: https://issues.apache.org/jira/browse/HIVE-19435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19435.01.patch
>
>
> If the incremental dump has a drop of a partitioned table followed by a 
> create/insert on a non-partitioned table with the same name, the data is not 
> replicated. Explained below.
> Let's say we have a partitioned table T1 which was already replicated to 
> target.
> DROP_TABLE(T1)->CREATE_TABLE(T1) (Non-partitioned) -> INSERT(T1)(10) 
> After REPL LOAD, T1 doesn't have any data.
> The same is valid for the non-partitioned to partitioned and partition spec 
> mismatch cases as well.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19435) Incremental replication cause data loss if a table is dropped followed by create and insert-into with different partition type.

2018-05-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19435:

Status: Patch Available  (was: In Progress)

Added 01.patch with
 * Check if the existing table is valid when generating tasks for incremental 
replication events.
 * If not valid, then create partitions and load the table as if it were a new 
table.
 * If valid, then verify the last repl ID to decide whether to overwrite or 
ignore the event (see the sketch below).

Request [~thejas], [~maheshk114] to please review the same.
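
For readers following the approach, a minimal self-contained sketch of the 
described decision, with hypothetical names standing in for the actual patch 
code:
{code:java}
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Sketch only: names are illustrative, not the patch's API. Before applying an
// incremental event to an existing table, check that the target table still
// matches the event's table shape; if it doesn't (e.g. dropped and recreated
// with a different partition type), load the data as if the table were brand
// new instead of silently skipping it.
public class ReplLoadSketch {

  static class TableShape {
    final boolean partitioned;
    final List<String> partCols;   // empty when not partitioned
    final long lastReplId;         // last replication ID applied on target
    TableShape(boolean partitioned, List<String> partCols, long lastReplId) {
      this.partitioned = partitioned;
      this.partCols = partCols;
      this.lastReplId = lastReplId;
    }
  }

  enum Action { LOAD_AS_NEW_TABLE, OVERWRITE, IGNORE }

  static Action decide(TableShape event, TableShape existing, long eventReplId) {
    if (existing == null
        || existing.partitioned != event.partitioned
        || !existing.partCols.equals(event.partCols)) {
      return Action.LOAD_AS_NEW_TABLE;   // target table is no longer "valid"
    }
    return eventReplId > existing.lastReplId ? Action.OVERWRITE : Action.IGNORE;
  }

  public static void main(String[] args) {
    // T1 was partitioned on target, but the insert event is for a
    // non-partitioned T1: the event data must be loaded as a new table.
    TableShape event = new TableShape(false, Collections.<String>emptyList(), 0L);
    TableShape target = new TableShape(true, Arrays.asList("ds"), 5L);
    System.out.println(decide(event, target, 10L));  // LOAD_AS_NEW_TABLE
  }
}
{code}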

> Incremental replication cause data loss if a table is dropped followed by 
> create and insert-into with different partition type.
> ---
>
> Key: HIVE-19435
> URL: https://issues.apache.org/jira/browse/HIVE-19435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19435.01.patch
>
>
> If the incremental dump has a drop of a partitioned table followed by a 
> create/insert on a non-partitioned table with the same name, the data is not 
> replicated. Explained below.
> Let's say we have a partitioned table T1 which was already replicated to 
> target.
> DROP_TABLE(T1)->CREATE_TABLE(T1) (Non-partitioned) -> INSERT(T1)(10) 
> After REPL LOAD, T1 doesn't have any data.
> The same is valid for the non-partitioned to partitioned and partition spec 
> mismatch cases as well.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18956) AvroSerDe Race Condition

2018-05-07 Thread gonglinglei (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465693#comment-16465693
 ] 

gonglinglei commented on HIVE-18956:



{code:java}
  @Override
  public void initialize(Configuration configuration, Properties properties) 
throws SerDeException {
...
if(!badSchema) {
  this.avroSerializer = new AvroSerializer();
  this.avroDeserializer = new AvroDeserializer();
}
  }
{code}

It's already fixed in 
[HIVE-18410|https://issues.apache.org/jira/browse/HIVE-18410], since both 
{{AvroSerializer}} and {{AvroDeserializer}} are now instantiated in {{initialize}}.

> AvroSerDe Race Condition
> 
>
> Key: HIVE-18956
> URL: https://issues.apache.org/jira/browse/HIVE-18956
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Affects Versions: 3.0.0, 2.3.2
>Reporter: BELUGA BEHR
>Priority: Trivial
>
> {code}
>   @Override
>   public Writable serialize(Object o, ObjectInspector objectInspector) throws 
> SerDeException {
> if(badSchema) {
>   throw new BadSchemaException();
> }
> return getSerializer().serialize(o, objectInspector, columnNames, 
> columnTypes, schema);
>   }
>   @Override
>   public Object deserialize(Writable writable) throws SerDeException {
> if(badSchema) {
>   throw new BadSchemaException();
> }
> return getDeserializer().deserialize(columnNames, columnTypes, writable, 
> schema);
>   }
> ...
>   private AvroDeserializer getDeserializer() {
> if(avroDeserializer == null) {
>   avroDeserializer = new AvroDeserializer();
> }
> return avroDeserializer;
>   }
>   private AvroSerializer getSerializer() {
> if(avroSerializer == null) {
>   avroSerializer = new AvroSerializer();
> }
> return avroSerializer;
>   }
> {code}
> The {{getDeserializer}} and {{getSerializer}} methods are not thread safe, so 
> neither are the {{deserialize}} and {{serialize}} methods.  It probably didn't 
> matter with MapReduce, but now that we have Spark/Tez, it may be an issue.
> You can imagine a scenario where three threads all enter {{getSerializer}}, 
> all see that {{avroSerializer}} is _null_, and create three instances; they 
> would then race to assign their new object to the 
> {{avroSerializer}} variable.
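
To make the hazard and the HIVE-18410-style remedy concrete, a minimal 
standalone sketch (illustrative names, not Hive's actual classes): the 
unsynchronized lazy getter can construct several instances under contention, 
while constructing the field once in {{initialize}} sidesteps the race 
entirely.
{code:java}
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch, not Hive code: an unsynchronized lazy getter may build
// several instances when called concurrently; constructing the field once,
// up front (as HIVE-18410 does in initialize()), avoids the race.
public class LazyInitRace {
  static final AtomicInteger constructed = new AtomicInteger();

  static class Serializer {
    Serializer() { constructed.incrementAndGet(); }
  }

  private Serializer serializer;       // unsafe lazy init (the bug)

  private Serializer getSerializer() {
    if (serializer == null) {
      serializer = new Serializer();   // several threads can reach this line
    }
    return serializer;
  }

  public static void main(String[] args) throws InterruptedException {
    LazyInitRace r = new LazyInitRace();
    Thread[] threads = new Thread[8];
    for (int i = 0; i < threads.length; i++) {
      threads[i] = new Thread(r::getSerializer);
      threads[i].start();
    }
    for (Thread t : threads) {
      t.join();
    }
    // May print a value greater than 1, demonstrating duplicate construction.
    System.out.println("instances constructed: " + constructed.get());
  }
}
{code}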



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19435) Incremental replication cause data loss if a table is dropped followed by create and insert-into with different partition type.

2018-05-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19435:

Attachment: (was: HIVE-19435.01.patch)

> Incremental replication cause data loss if a table is dropped followed by 
> create and insert-into with different partition type.
> ---
>
> Key: HIVE-19435
> URL: https://issues.apache.org/jira/browse/HIVE-19435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0, 3.1.0
>
>
> If the incremental dump has a drop of a partitioned table followed by a 
> create/insert on a non-partitioned table with the same name, the data is not 
> replicated. Explained below.
> Let's say we have a partitioned table T1 which was already replicated to 
> target.
> DROP_TABLE(T1)->CREATE_TABLE(T1) (Non-partitioned) -> INSERT(T1)(10) 
> After REPL LOAD, T1 doesn't have any data.
> The same is valid for the non-partitioned to partitioned and partition spec 
> mismatch cases as well.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19435) Incremental replication cause data loss if a table is dropped followed by create and insert-into with different partition type.

2018-05-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19435:

Status: Open  (was: Patch Available)

> Incremental replication cause data loss if a table is dropped followed by 
> create and insert-into with different partition type.
> ---
>
> Key: HIVE-19435
> URL: https://issues.apache.org/jira/browse/HIVE-19435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0, 3.1.0
>
>
> If the incremental dump has a drop of a partitioned table followed by a 
> create/insert on a non-partitioned table with the same name, the data is not 
> replicated. Explained below.
> Let's say we have a partitioned table T1 which was already replicated to 
> target.
> DROP_TABLE(T1)->CREATE_TABLE(T1) (Non-partitioned) -> INSERT(T1)(10) 
> After REPL LOAD, T1 doesn't have any data.
> The same is valid for the non-partitioned to partitioned and partition spec 
> mismatch cases as well.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-18956) AvroSerDe Race Condition

2018-05-07 Thread gonglinglei (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465693#comment-16465693
 ] 

gonglinglei edited comment on HIVE-18956 at 5/7/18 10:02 AM:
-

{code:java}
  @Override
  public void initialize(Configuration configuration, Properties properties) 
throws SerDeException {
...
if(!badSchema) {
  this.avroSerializer = new AvroSerializer();
  this.avroDeserializer = new AvroDeserializer();
}
  }
{code}

It's already fixed in HIVE-18410, since both {{AvroSerializer}} and 
{{AvroDeserializer}} are now instantiated in {{initialize}}.


was (Author: gonglinglei):

{code:java}
  @Override
  public void initialize(Configuration configuration, Properties properties) 
throws SerDeException {
...
if(!badSchema) {
  this.avroSerializer = new AvroSerializer();
  this.avroDeserializer = new AvroDeserializer();
}
  }
{code}

It's already fixed in 
[HIVE-18410|https://issues.apache.org/jira/browse/HIVE-18410], since both 
{{AvroSerializer}} and {{AvroDeserializer}} are now instantiated in {{initialize}}.

> AvroSerDe Race Condition
> 
>
> Key: HIVE-18956
> URL: https://issues.apache.org/jira/browse/HIVE-18956
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Affects Versions: 3.0.0, 2.3.2
>Reporter: BELUGA BEHR
>Priority: Trivial
>
> {code}
>   @Override
>   public Writable serialize(Object o, ObjectInspector objectInspector) throws 
> SerDeException {
> if(badSchema) {
>   throw new BadSchemaException();
> }
> return getSerializer().serialize(o, objectInspector, columnNames, 
> columnTypes, schema);
>   }
>   @Override
>   public Object deserialize(Writable writable) throws SerDeException {
> if(badSchema) {
>   throw new BadSchemaException();
> }
> return getDeserializer().deserialize(columnNames, columnTypes, writable, 
> schema);
>   }
> ...
>   private AvroDeserializer getDeserializer() {
> if(avroDeserializer == null) {
>   avroDeserializer = new AvroDeserializer();
> }
> return avroDeserializer;
>   }
>   private AvroSerializer getSerializer() {
> if(avroSerializer == null) {
>   avroSerializer = new AvroSerializer();
> }
> return avroSerializer;
>   }
> {code}
> The {{getDeserializer}} and {{getSerializer}} methods are not thread safe, so 
> neither are the {{deserialize}} and {{serialize}} methods.  It probably didn't 
> matter with MapReduce, but now that we have Spark/Tez, it may be an issue.
> You can imagine a scenario where three threads all enter {{getSerializer}}, 
> all see that {{avroSerializer}} is _null_, and create three instances; they 
> would then race to assign their new object to the 
> {{avroSerializer}} variable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19435) Incremental replication cause data loss if a table is dropped followed by create and insert-into with different partition type.

2018-05-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19435:

Attachment: HIVE-19435.01.patch

> Incremental replication cause data loss if a table is dropped followed by 
> create and insert-into with different partition type.
> ---
>
> Key: HIVE-19435
> URL: https://issues.apache.org/jira/browse/HIVE-19435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19435.01.patch
>
>
> If the incremental dump has a drop of a partitioned table followed by a 
> create/insert on a non-partitioned table with the same name, the data is not 
> replicated. Explained below.
> Let's say we have a partitioned table T1 which was already replicated to 
> target.
> DROP_TABLE(T1)->CREATE_TABLE(T1) (Non-partitioned) -> INSERT(T1)(10) 
> After REPL LOAD, T1 doesn't have any data.
> The same is valid for the non-partitioned to partitioned and partition spec 
> mismatch cases as well.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19403) Demote 'Pattern' Logging

2018-05-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465731#comment-16465731
 ] 

Hive QA commented on HIVE-19403:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12922210/HIVE-19403.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 35 failed/errored test(s), 14322 tests 
executed
*Failed tests:*
{noformat}
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed 
out) (batchId=217)
TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby8] (batchId=77)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez2]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_4]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_vector_dynpart_hashjoin_1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=105)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] 
(batchId=96)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 
(batchId=228)
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232)
org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate
 (batchId=231)
org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys
 (batchId=231)
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=235)
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgressParallel 
(batchId=235)
org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryElapsedTime
 (batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryExecutionTime
 (batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerVertexRawInputSplitsNoKill
 (batchId=241)
org.apache.hive.spark.client.rpc.TestRpc.testServerPort (batchId=304)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10742/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10742/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10742/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 35 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12922210 - PreCommit-HIVE-Build

> Demote 'Pattern' Logging
> 
>
> Key: HIVE-19403
> URL: https://issues.apache.org/jira/browse/HIVE-19403
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0, 2.4.0
>Reporter: BELUGA BEHR
>Assignee: gonglinglei
>Priority: Trivial
>  Labels: noob
> Attachments: HIVE-19403.1.patch
>
>
> In the {{DDLTask}} class, there is some logging that is not helpful to a 
> cluster admin and should be demoted to _debug_ level logging.

[jira] [Updated] (HIVE-19435) Incremental replication cause data loss if a table is dropped followed by create and insert-into with different partition type.

2018-05-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19435:

Status: Patch Available  (was: Open)

> Incremental replication cause data loss if a table is dropped followed by 
> create and insert-into with different partition type.
> ---
>
> Key: HIVE-19435
> URL: https://issues.apache.org/jira/browse/HIVE-19435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19435.01.patch
>
>
> If the incremental dump has a drop of a partitioned table followed by a 
> create/insert on a non-partitioned table with the same name, the data is not 
> replicated. Explained below.
> Let's say we have a partitioned table T1 which was already replicated to 
> target.
> DROP_TABLE(T1)->CREATE_TABLE(T1) (Non-partitioned) -> INSERT(T1)(10) 
> After REPL LOAD, T1 doesn't have any data.
> The same is valid for the non-partitioned to partitioned and partition spec 
> mismatch cases as well.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19438) Test failure: org.apache.hadoop.hive.llap.daemon.services.impl.TestLlapWebServices.testContextRootUrlRewrite

2018-05-07 Thread Pravin Dsilva (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pravin Dsilva updated HIVE-19438:
-
Description: 
*Error Message*

{code:java}
expected:<200> but was:<500>
{code}


*Stacktrace*

{code:java}
java.lang.AssertionError: expected:<200> but was:<500> at 
org.junit.Assert.fail(Assert.java:88) at 
org.junit.Assert.failNotEquals(Assert.java:743) at 
org.junit.Assert.assertEquals(Assert.java:118) at 
org.junit.Assert.assertEquals(Assert.java:555) at 
org.junit.Assert.assertEquals(Assert.java:542) at 
org.apache.hadoop.hive.llap.daemon.services.impl.TestLlapWebServices.getURLResponseAsString(TestLlapWebServices.java:59)
{code}


  was:
* Error Message*

{code:java}
expected:<200> but was:<500>
{code}


*Stacktrace*

{code:java}
java.lang.AssertionError: expected:<200> but was:<500> at 
org.junit.Assert.fail(Assert.java:88) at 
org.junit.Assert.failNotEquals(Assert.java:743) at 
org.junit.Assert.assertEquals(Assert.java:118) at 
org.junit.Assert.assertEquals(Assert.java:555) at 
org.junit.Assert.assertEquals(Assert.java:542) at 
org.apache.hadoop.hive.llap.daemon.services.impl.TestLlapWebServices.getURLResponseAsString(TestLlapWebServices.java:59)
{code}



> Test failure: 
> org.apache.hadoop.hive.llap.daemon.services.impl.TestLlapWebServices.testContextRootUrlRewrite
> 
>
> Key: HIVE-19438
> URL: https://issues.apache.org/jira/browse/HIVE-19438
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Tests
>Reporter: Pravin Dsilva
>Priority: Major
>
> *Error Message*
> {code:java}
> expected:<200> but was:<500>
> {code}
> *Stacktrace*
> {code:java}
> java.lang.AssertionError: expected:<200> but was:<500> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:743) at 
> org.junit.Assert.assertEquals(Assert.java:118) at 
> org.junit.Assert.assertEquals(Assert.java:555) at 
> org.junit.Assert.assertEquals(Assert.java:542) at 
> org.apache.hadoop.hive.llap.daemon.services.impl.TestLlapWebServices.getURLResponseAsString(TestLlapWebServices.java:59)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19438) Test failure: org.apache.hadoop.hive.llap.daemon.services.impl.TestLlapWebServices.testContextRootUrlRewrite

2018-05-07 Thread Pravin Dsilva (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pravin Dsilva updated HIVE-19438:
-
Affects Version/s: 3.1.0

> Test failure: 
> org.apache.hadoop.hive.llap.daemon.services.impl.TestLlapWebServices.testContextRootUrlRewrite
> 
>
> Key: HIVE-19438
> URL: https://issues.apache.org/jira/browse/HIVE-19438
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Tests
>Affects Versions: 3.1.0
>Reporter: Pravin Dsilva
>Priority: Major
>
> *Error Message*
> {code:java}
> expected:<200> but was:<500>
> {code}
> *Stacktrace*
> {code:java}
> java.lang.AssertionError: expected:<200> but was:<500> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:743) at 
> org.junit.Assert.assertEquals(Assert.java:118) at 
> org.junit.Assert.assertEquals(Assert.java:555) at 
> org.junit.Assert.assertEquals(Assert.java:542) at 
> org.apache.hadoop.hive.llap.daemon.services.impl.TestLlapWebServices.getURLResponseAsString(TestLlapWebServices.java:59)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19438) Test failure: org.apache.hadoop.hive.llap.daemon.services.impl.TestLlapWebServices.testContextRootUrlRewrite

2018-05-07 Thread Pravin Dsilva (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pravin Dsilva updated HIVE-19438:
-
Component/s: Test

> Test failure: 
> org.apache.hadoop.hive.llap.daemon.services.impl.TestLlapWebServices.testContextRootUrlRewrite
> 
>
> Key: HIVE-19438
> URL: https://issues.apache.org/jira/browse/HIVE-19438
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Test, Tests
>Affects Versions: 3.1.0
>Reporter: Pravin Dsilva
>Priority: Major
>
> *Error Message*
> {code:java}
> expected:<200> but was:<500>
> {code}
> *Stacktrace*
> {code:java}
> java.lang.AssertionError: expected:<200> but was:<500> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:743) at 
> org.junit.Assert.assertEquals(Assert.java:118) at 
> org.junit.Assert.assertEquals(Assert.java:555) at 
> org.junit.Assert.assertEquals(Assert.java:542) at 
> org.apache.hadoop.hive.llap.daemon.services.impl.TestLlapWebServices.getURLResponseAsString(TestLlapWebServices.java:59)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19438) Test failure: org.apache.hadoop.hive.llap.daemon.services.impl.TestLlapWebServices.testContextRootUrlRewrite

2018-05-07 Thread Pravin Dsilva (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465745#comment-16465745
 ] 

Pravin Dsilva commented on HIVE-19438:
--

This test fails only when the Hive project's tests are run as a whole. The 
Surefire output gives the following:
{code:java}
2018-05-07T03:11:12,138 INFO [main] http.HttpServer: Started HttpServer[llap] 
on port 51570
2018-05-07T03:11:22,352 WARN [llap-web-16] servlet.ServletHandler: Error for 
/index.html
java.lang.NoSuchMethodError: 
javax.servlet.http.HttpServletRequest.isAsyncSupported()Z
 at org.eclipse.jetty.servlet.DefaultServlet.sendData(DefaultServlet.java:937) 
~[jetty-servlet-9.3.8.v20160314.jar:9.3.8.v20160314]
 at org.eclipse.jetty.servlet.DefaultServlet.doGet(DefaultServlet.java:527) 
~[jetty-servlet-9.3.8.v20160314.jar:9.3.8.v20160314]
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:689) 
~[servlet-api-2.4.jar:?]
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:802) 
~[servlet-api-2.4.jar:?]{code}
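
As a side note, one generic way to confirm which jar supplied the offending 
class on the failing classpath (a plain JVM diagnostic, not part of the Hive 
test suite):
{code:java}
import java.security.CodeSource;

// Generic diagnostic for NoSuchMethodError-style jar conflicts: print where a
// class was loaded from. Run it with the same classpath as the failing test.
public class WhichJar {
  public static void main(String[] args) throws ClassNotFoundException {
    Class<?> c = Class.forName("javax.servlet.http.HttpServletRequest");
    CodeSource src = c.getProtectionDomain().getCodeSource();
    // If this prints servlet-api-2.4.jar, the Servlet 2.4 API is shadowing the
    // Servlet 3.x API that Jetty 9.3's DefaultServlet (isAsyncSupported) needs.
    System.out.println(src != null ? src.getLocation() : "bootstrap classpath");
  }
}
{code}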

> Test failure: 
> org.apache.hadoop.hive.llap.daemon.services.impl.TestLlapWebServices.testContextRootUrlRewrite
> 
>
> Key: HIVE-19438
> URL: https://issues.apache.org/jira/browse/HIVE-19438
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Test, Tests
>Affects Versions: 3.1.0
>Reporter: Pravin Dsilva
>Priority: Major
>
> *Error Message*
> {code:java}
> expected:<200> but was:<500>
> {code}
> *Stacktrace*
> {code:java}
> java.lang.AssertionError: expected:<200> but was:<500> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:743) at 
> org.junit.Assert.assertEquals(Assert.java:118) at 
> org.junit.Assert.assertEquals(Assert.java:555) at 
> org.junit.Assert.assertEquals(Assert.java:542) at 
> org.apache.hadoop.hive.llap.daemon.services.impl.TestLlapWebServices.getURLResponseAsString(TestLlapWebServices.java:59)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19433) HiveJoinPushTransitivePredicatesRule hangs

2018-05-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465764#comment-16465764
 ] 

Hive QA commented on HIVE-19433:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
52s{color} | {color:blue} ql in master has 2318 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
38s{color} | {color:red} ql: The patch generated 1 new + 8 unchanged - 0 fixed 
= 9 total (was 8) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10743/dev-support/hive-personality.sh
 |
| git revision | master / 88d224f |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10743/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10743/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> HiveJoinPushTransitivePredicatesRule hangs
> --
>
> Key: HIVE-19433
> URL: https://issues.apache.org/jira/browse/HIVE-19433
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-19433.1.patch
>
>
> *Reproducer*
> {code:sql}
> CREATE TABLE `table1`(
>`idp_warehouse_id` bigint,
>`idp_audit_id` bigint,
>`idp_effective_date` date,
>`idp_end_date` date,
>`idp_delete_date` date,
>`pruid` varchar(32),
>`prid` bigint,
>`prtimesheetid` bigint,
>`prassignmentid` bigint,
>`prchargecodeid` bigint,
>`prtypecodeid` bigint,
>`prsequence` bigint,
>`prmodby` varchar(96),
>`prmodtime` timestamp,
>`prrmexported` bigint,
>`prrmckdel` bigint,
>`slice_status` int,
>`role_id` bigint,
>`user_lov1` varchar(30),
>`user_lov2` varchar(30),
>`incident_id` bigint,
>`incident_investment_id` bigint,
>`odf_ss_actuals` bigint,
>`practsum` decimal(38,20));
> CREATE TABLE `table2`(
>`idp_warehouse_id` bigint,
>`idp_audit_id` bigint,
>`idp_effective_date` date,
>`idp_end_date` date,
>`idp_delete_date` date,
>`pruid` varchar(32),
>`prid` bigint,
>`prtimesheetid` bigint,
>  

[jira] [Assigned] (HIVE-7214) Support predicate pushdown for complex data types in ORCFile

2018-05-07 Thread Ashish Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Sharma reassigned HIVE-7214:
---

Assignee: Ashish Sharma

> Support predicate pushdown for complex data types in ORCFile
> 
>
> Key: HIVE-7214
> URL: https://issues.apache.org/jira/browse/HIVE-7214
> Project: Hive
>  Issue Type: Improvement
>  Components: File Formats, ORC
>Reporter: Rohini Palaniswamy
>Assignee: Ashish Sharma
>Priority: Major
>  Labels: ORC
>
> Currently ORCFile does not support predicate pushdown for complex datatypes 
> like map, array, and struct, while Parquet does. This came up during the 
> discussion of PIG-3760. Our users have a lot of map and struct (tuple in Pig) 
> columns, and most of the filter conditions are on them. It would be great to 
> have support added for them in ORC.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19433) HiveJoinPushTransitivePredicatesRule hangs

2018-05-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465810#comment-16465810
 ] 

Hive QA commented on HIVE-19433:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12922201/HIVE-19433.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 34 failed/errored test(s), 14322 tests 
executed
*Failed tests:*
{noformat}
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed 
out) (batchId=217)
TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez2]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_dynpart_hashjoin_1]
 (batchId=174)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_vector_dynpart_hashjoin_1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=105)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] 
(batchId=96)
org.apache.hadoop.hive.metastore.client.TestRuntimeStats.testCleanup[Remote] 
(batchId=209)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 
(batchId=228)
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232)
org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate
 (batchId=231)
org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys
 (batchId=231)
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgressParallel 
(batchId=235)
org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryElapsedTime
 (batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryExecutionTime
 (batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerVertexRawInputSplitsNoKill
 (batchId=241)
org.apache.hive.spark.client.rpc.TestRpc.testServerPort (batchId=304)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10743/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10743/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10743/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 34 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12922201 - PreCommit-HIVE-Build

> HiveJoinPushTransitivePredicatesRule hangs
> --
>
> Key: HIVE-19433
> URL: https://issues.apache.org/jira/browse/HIVE-19433
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-19433.1.patch
>
>
> *Reproducer*
> {code:sql}
> CREATE TABLE `table1`(
>`idp_warehouse_id` bigint,
>`idp_audit_id` 

[jira] [Updated] (HIVE-19225) Class cast exception while running certain queries with UDAF like rank on internal struct columns

2018-05-07 Thread Amruth S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amruth S updated HIVE-19225:

Description: 
Certain queries with the rank function cause a class cast exception.
{noformat}
Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct cannot be cast to 
org.apache.hadoop.hive.serde2.io.TimestampWritable
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableTimestampObjectInspector.getPrimitiveJavaObject(WritableTimestampObjectInspector.java:39)
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableTimestampObjectInspector.getPrimitiveJavaObject(WritableTimestampObjectInspector.java:25)
at 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:412)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFRank.copyToStandardObject(GenericUDAFRank.java:219)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFRank$GenericUDAFAbstractRankEvaluator.iterate(GenericUDAFRank.java:153)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator.aggregate(GenericUDAFEvaluator.java:192)
at 
org.apache.hadoop.hive.ql.udf.ptf.WindowingTableFunction.processRow(WindowingTableFunction.java:407)
at 
org.apache.hadoop.hive.ql.exec.PTFOperator$PTFInvocation.processRow(PTFOperator.java:325)
at 
org.apache.hadoop.hive.ql.exec.PTFOperator.process(PTFOperator.java:139)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
at 
org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
at 
org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:236)
... 7 more

2018-03-29 09:28:43,432 INFO [main] org.apache.hadoop.mapred.Task: Runnning 
cleanup for the task
{noformat}
The following change fixes this.

The evaluator seems to skip the case where the primary object emitted is a 
struct. I modified the code to find the field inside the struct:
{code:java}
diff --git 
a/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/StandardStructObjectInspector.java
 
b/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/StandardStructObjectInspector.java
index 36a500790a..e7731e99d7 100644
--- 
a/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/StandardStructObjectInspector.java
+++ 
b/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/StandardStructObjectInspector.java
@@ -22,6 +22,7 @@
import java.util.Arrays;
import java.util.List;

+import org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@@ -171,6 +172,10 @@ public Object getStructFieldData(Object data, StructField 
fieldRef) {
// so we have to do differently.
boolean isArray = data.getClass().isArray();
if (!isArray && !(data instanceof List)) {
+ if (data instanceof LazyBinaryStruct
+ && fieldRef.getFieldObjectInspector().getCategory() == Category.PRIMITIVE) {
+ return ((LazyBinaryStruct) data).getField(((MyField) fieldRef).fieldID);
+ }
if (!warned) {
LOG.warn("Invalid type for struct " + data.getClass());
LOG.warn("ignoring similar errors.");
{code}
Let me know your thoughts.

BTW, this is the setup to reproduce it.

Launch Hive in debug mode:
{code:java}
hive --hiveconf hive.root.logger=DEBUG,console;{code}
Run the sample SQL below:
{code:java}
SET mapreduce.framework.name=local; 

CREATE TABLE `test_class_cast` as select 
named_struct('a','a','b','b','c','c','d','d','e',true,'f','f','g',timestamp(1),'h','h'),
 'i'; 

select `_c0`.c, `_c0`.g, `_c0`.a, rank() over (partition by `_c0`.c order by 
`_c0`.g desc) as rown,`_c0`.f,`_c0`.e from default.test_class_cast where 
`_c0`.f like '%f%' or `_c0`.f like '%f%' {code}
 

  was:
Certain queries with the rank function cause a class cast exception.
{noformat}
Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct cannot be cast to 
org.apache.hadoop.hive.serde2.io.TimestampWritable
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableTimestampObjectInspector.getPrimitiveJavaObject(WritableTimestampObjectInspector.java:39)
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableTimestampObjectInspector.getPrimitiveJavaObject(WritableTimestampObjectInspector.java:25)
at 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:412)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFRank.copyToStandardObject(GenericUDAFRank.java:219)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFRank$GenericUDAFAbstractRankEvaluator.iterate(GenericUDAFRank.java:153)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator.aggregate(GenericUDAFEvaluator.java:192)
at 
org.apache.hadoop.hive.ql

[jira] [Updated] (HIVE-19225) Class cast exception while running certain queries with UDAF like rank on internal struct columns

2018-05-07 Thread Amruth S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amruth S updated HIVE-19225:

Description: 
Certain queries with the rank function cause a class cast exception.
{noformat}
Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct cannot be cast to 
org.apache.hadoop.hive.serde2.io.TimestampWritable
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableTimestampObjectInspector.getPrimitiveJavaObject(WritableTimestampObjectInspector.java:39)
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableTimestampObjectInspector.getPrimitiveJavaObject(WritableTimestampObjectInspector.java:25)
at 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:412)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFRank.copyToStandardObject(GenericUDAFRank.java:219)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFRank$GenericUDAFAbstractRankEvaluator.iterate(GenericUDAFRank.java:153)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator.aggregate(GenericUDAFEvaluator.java:192)
at 
org.apache.hadoop.hive.ql.udf.ptf.WindowingTableFunction.processRow(WindowingTableFunction.java:407)
at 
org.apache.hadoop.hive.ql.exec.PTFOperator$PTFInvocation.processRow(PTFOperator.java:325)
at 
org.apache.hadoop.hive.ql.exec.PTFOperator.process(PTFOperator.java:139)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
at 
org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
at 
org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:236)
... 7 more

2018-03-29 09:28:43,432 INFO [main] org.apache.hadoop.mapred.Task: Runnning 
cleanup for the task
{noformat}
The following change fixes this.

The evaluator seems to skip the case where the primary object emitted is a 
struct. I modified the code to find the field inside the struct:
{code:java}
diff --git 
a/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/StandardStructObjectInspector.java
 
b/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/StandardStructObjectInspector.java
index 36a500790a..e7731e99d7 100644
--- 
a/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/StandardStructObjectInspector.java
+++ 
b/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/StandardStructObjectInspector.java
@@ -22,6 +22,7 @@
import java.util.Arrays;
import java.util.List;

+import org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@@ -171,6 +172,10 @@ public Object getStructFieldData(Object data, StructField 
fieldRef) {
// so we have to do differently.
boolean isArray = data.getClass().isArray();
if (!isArray && !(data instanceof List)) {
+ if (data instanceof LazyBinaryStruct
+ && fieldRef.getFieldObjectInspector().getCategory() == Category.PRIMITIVE) {
+ return ((LazyBinaryStruct) data).getField(((MyField) fieldRef).fieldID);
+ }
if (!warned) {
LOG.warn("Invalid type for struct " + data.getClass());
LOG.warn("ignoring similar errors.");
{code}
Let me know your thoughts.

BTW, this is the setup to reproduce it.

Launch Hive in debug mode:
{code:java}
hive --hiveconf hive.root.logger=DEBUG,console;{code}
Run the sample SQL below:
{code:java}
SET mapreduce.framework.name=local; 

CREATE TABLE `test_class_cast` as select 
named_struct('a','a','b','b','c','c','d','d','e',true,'f','f','g',timestamp(1),'h','h'),
 'i'; 

select `_c0`.c, `_c0`.g, `_c0`.a, rank() over (partition by `_c0`.c order by 
`_c0`.g desc) as rown,`_c0`.f,`_c0`.e from default.test_class_cast where 
`_c0`.f like '%f%' or `_c0`.f like '%f%' {code}
This fails with the following exception:

 
{code:java}
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error 
while processing row (tag=0) 
{"key":{"reducesinkkey0":"c","reducesinkkey1":"1970-01-01 
05:30:00.001"},"value":{"_col0":{"a":"a","b":"b","c":"c","d":"d","e":true,"f":"f","g":"1970-01-01
 05:30:00.001","h":"h"}}}
 at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:245) 
~[hive-exec-2.3.2.fk.7.jar:2.3.2.fk.7]
 at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:444) 
~[hadoop-mapreduce-client-core-2.6.0.2.2.0.0-2041.jar:?]
 at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392) 
~[hadoop-mapreduce-client-core-2.6.0.2.2.0.0-2041.jar:?]
 at 
org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
 ~[hadoop-mapreduce-client-common-2.6.0.2.2.0.0-2041.jar:?]
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[?:1.8.0_92]
 at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_92]
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 

[jira] [Commented] (HIVE-19225) Class cast exception while running certain queries with UDAF like rank on internal struct columns

2018-05-07 Thread Amruth S (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465818#comment-16465818
 ] 

Amruth S commented on HIVE-19225:
-

[~kgyrtkirk], I have updated the description with a sample data set to 
reproduce on Hive 2.3.2.

> Class cast exception while running certain queries with UDAF like rank on 
> internal struct columns
> -
>
> Key: HIVE-19225
> URL: https://issues.apache.org/jira/browse/HIVE-19225
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.3.2
>Reporter: Amruth S
>Assignee: Amruth S
>Priority: Major
> Attachments: HIVE-19225.patch
>
>
>  
> *To reproduce : [tag - 2.3.2]*
> Launch hive in debug mode
> {code:java}
> hive --hiveconf hive.root.logger=DEBUG,console;{code}
> Run the sample sql below
> {code:java}
> SET mapreduce.framework.name=local; 
> CREATE TABLE `test_class_cast` as select 
> named_struct('a','a','b','b','c','c','d','d','e',true,'f','f','g',timestamp(1),'h','h'),
>  'i'; 
> select `_c0`.c, `_c0`.g, `_c0`.a, rank() over (partition by `_c0`.c order by 
> `_c0`.g desc) as rown,`_c0`.f,`_c0`.e from default.test_class_cast where 
> `_c0`.f like '%f%' or `_c0`.f like '%f%' {code}
> Should fail with the exception 
> {code:java}
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing row (tag=0) 
> {"key":{"reducesinkkey0":"c","reducesinkkey1":"1970-01-01 
> 05:30:00.001"},"value":{"_col0":{"a":"a","b":"b","c":"c","d":"d","e":true,"f":"f","g":"1970-01-01
>  05:30:00.001","h":"h"}}}
>  at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:245) 
> ~[hive-exec-2.3.2.fk.7.jar:2.3.2.fk.7]
>  at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:444) 
> ~[hadoop-mapreduce-client-core-2.6.0.2.2.0.0-2041.jar:?]
>  at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392) 
> ~[hadoop-mapreduce-client-core-2.6.0.2.2.0.0-2041.jar:?]
>  at 
> org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
>  ~[hadoop-mapreduce-client-common-2.6.0.2.2.0.0-2041.jar:?]
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[?:1.8.0_92]
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_92]
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[?:1.8.0_92]
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[?:1.8.0_92]
>  at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_92]
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct cannot be cast to 
> org.apache.hadoop.hive.serde2.io.TimestampWritable
>  at 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableTimestampObjectInspector.getPrimitiveJavaObject(WritableTimestampObjectInspector.java:39)
>  ~[hive-exec-2.3.2.fk.7.jar:2.3.2.fk.7]
>  at 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableTimestampObjectInspector.getPrimitiveJavaObject(WritableTimestampObjectInspector.java:25)
>  ~[hive-exec-2.3.2.fk.7.jar:2.3.2.fk.7]
>  at 
> org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:412)
>  ~[hive-exec-2.3.2.fk.7.jar:2.3.2.fk.7]
>  at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDAFRank.copyToStandardObject(GenericUDAFRank.java:219)
>  ~[hive-exec-2.3.2.fk.7.jar:2.3.2.fk.7]{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19225) Class cast exception while running certain queries with UDAF like rank on internal struct columns

2018-05-07 Thread Amruth S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amruth S updated HIVE-19225:

Description: 
 

*To reproduce : [tag - 2.3.2]*

Launch hive in debug mode
{code:java}
hive --hiveconf hive.root.logger=DEBUG,console;{code}
Run the sample sql below
{code:java}
SET mapreduce.framework.name=local; 

CREATE TABLE `test_class_cast` as select 
named_struct('a','a','b','b','c','c','d','d','e',true,'f','f','g',timestamp(1),'h','h'),
 'i'; 

select `_c0`.c, `_c0`.g, `_c0`.a, rank() over (partition by `_c0`.c order by 
`_c0`.g desc) as rown,`_c0`.f,`_c0`.e from default.test_class_cast where 
`_c0`.f like '%f%' or `_c0`.f like '%f%' {code}
Should fail with the exception 
{code:java}
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error 
while processing row (tag=0) 
{"key":{"reducesinkkey0":"c","reducesinkkey1":"1970-01-01 
05:30:00.001"},"value":{"_col0":{"a":"a","b":"b","c":"c","d":"d","e":true,"f":"f","g":"1970-01-01
 05:30:00.001","h":"h"}}}
 at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:245) 
~[hive-exec-2.3.2.fk.7.jar:2.3.2.fk.7]
 at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:444) 
~[hadoop-mapreduce-client-core-2.6.0.2.2.0.0-2041.jar:?]
 at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392) 
~[hadoop-mapreduce-client-core-2.6.0.2.2.0.0-2041.jar:?]
 at 
org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
 ~[hadoop-mapreduce-client-common-2.6.0.2.2.0.0-2041.jar:?]
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[?:1.8.0_92]
 at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_92]
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[?:1.8.0_92]
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
~[?:1.8.0_92]
 at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_92]
Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct cannot be cast to 
org.apache.hadoop.hive.serde2.io.TimestampWritable
 at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableTimestampObjectInspector.getPrimitiveJavaObject(WritableTimestampObjectInspector.java:39)
 ~[hive-exec-2.3.2.fk.7.jar:2.3.2.fk.7]
 at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableTimestampObjectInspector.getPrimitiveJavaObject(WritableTimestampObjectInspector.java:25)
 ~[hive-exec-2.3.2.fk.7.jar:2.3.2.fk.7]
 at 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:412)
 ~[hive-exec-2.3.2.fk.7.jar:2.3.2.fk.7]
 at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFRank.copyToStandardObject(GenericUDAFRank.java:219)
 ~[hive-exec-2.3.2.fk.7.jar:2.3.2.fk.7]{code}

  was:
Certain queries with the rank function cause a class cast exception.
{noformat}
Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct cannot be cast to 
org.apache.hadoop.hive.serde2.io.TimestampWritable
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableTimestampObjectInspector.getPrimitiveJavaObject(WritableTimestampObjectInspector.java:39)
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableTimestampObjectInspector.getPrimitiveJavaObject(WritableTimestampObjectInspector.java:25)
at 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:412)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFRank.copyToStandardObject(GenericUDAFRank.java:219)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFRank$GenericUDAFAbstractRankEvaluator.iterate(GenericUDAFRank.java:153)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator.aggregate(GenericUDAFEvaluator.java:192)
at 
org.apache.hadoop.hive.ql.udf.ptf.WindowingTableFunction.processRow(WindowingTableFunction.java:407)
at 
org.apache.hadoop.hive.ql.exec.PTFOperator$PTFInvocation.processRow(PTFOperator.java:325)
at 
org.apache.hadoop.hive.ql.exec.PTFOperator.process(PTFOperator.java:139)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
at 
org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
at 
org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:236)
... 7 more

2018-03-29 09:28:43,432 INFO [main] org.apache.hadoop.mapred.Task: Runnning 
cleanup for the task
{noformat}
The following change fixes this.

The evaluator seems to skip the case where the primary object emitted is a 
struct. Modified the code to find the field inside the struct
{code:java}
diff --git 
a/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/StandardStructObjectInspector.java
 
b/serde/src/java/org/apache/h

[jira] [Commented] (HIVE-18977) Listing partitions returns different results with JDO and direct SQL

2018-05-07 Thread Peter Vary (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465836#comment-16465836
 ] 

Peter Vary commented on HIVE-18977:
---

+1

> Listing partitions returns different results with JDO and direct SQL
> 
>
> Key: HIVE-18977
> URL: https://issues.apache.org/jira/browse/HIVE-18977
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 3.0.0
>Reporter: Alan Gates
>Assignee: Marta Kuczora
>Priority: Major
> Attachments: HIVE-18977.1.patch
>
>
> Some of the tests in TestListPartitions fail when using JDO instead of direct 
> SQL.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19423) REPL LOAD creates staging directory in source dump directory instead of table data location

2018-05-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465837#comment-16465837
 ] 

Hive QA commented on HIVE-19423:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
9s{color} | {color:blue} ql in master has 2318 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10744/dev-support/hive-personality.sh
 |
| git revision | master / 88d224f |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10744/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> REPL LOAD creates staging directory in source dump directory instead of table 
> data location
> ---
>
> Key: HIVE-19423
> URL: https://issues.apache.org/jira/browse/HIVE-19423
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2, repl
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: Hive, Repl, pull-request-available
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19423.01.patch, HIVE-19423.02.patch
>
>
> REPL LOAD creates staging directory in source dump directory instead of table 
> data location. In case of replication from on-prem to cloud it can create 
> problems. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18977) Listing partitions returns different results with JDO and direct SQL

2018-05-07 Thread Marta Kuczora (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465835#comment-16465835
 ] 

Marta Kuczora commented on HIVE-18977:
--

The test failures are not related to the patch.

> Listing partitions returns different results with JDO and direct SQL
> 
>
> Key: HIVE-18977
> URL: https://issues.apache.org/jira/browse/HIVE-18977
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 3.0.0
>Reporter: Alan Gates
>Assignee: Marta Kuczora
>Priority: Major
> Attachments: HIVE-18977.1.patch
>
>
> Some of the tests in TestListPartitions fail when using JDO instead of direct 
> SQL.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19423) REPL LOAD creates staging directory in source dump directory instead of table data location

2018-05-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465894#comment-16465894
 ] 

Hive QA commented on HIVE-19423:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/1291/HIVE-19423.02.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 33 failed/errored test(s), 14322 tests 
executed
*Failed tests:*
{noformat}
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed 
out) (batchId=217)
TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez2]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_dynpart_hashjoin_1]
 (batchId=174)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_stats]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=105)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] 
(batchId=96)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 
(batchId=228)
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232)
org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate
 (batchId=231)
org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys
 (batchId=231)
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=235)
org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239)
org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveAndKill 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryElapsedTime
 (batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryExecutionTime
 (batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerVertexRawInputSplitsNoKill
 (batchId=241)
org.apache.hive.spark.client.rpc.TestRpc.testServerPort (batchId=304)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10744/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10744/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10744/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 33 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 1291 - PreCommit-HIVE-Build

> REPL LOAD creates staging directory in source dump directory instead of table 
> data location
> ---
>
> Key: HIVE-19423
> URL: https://issues.apache.org/jira/browse/HIVE-19423
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2, repl
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: Hive, Repl, pull-request-available
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19423.01.patch, HIVE-19423.02.patch
>
>

[jira] [Commented] (HIVE-19388) ClassCastException during VectorMapJoinCommonOperator initialization

2018-05-07 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465911#comment-16465911
 ] 

Rui Li commented on HIVE-19388:
---

[~vihangk1], thanks for fixing this. The change looks good. +1
As for your observation about {{spark_vectorized_dynamic_partition_pruning.q}}, 
seems that's indeed another bug. The task fails during MapWork initialization. 
When we retry the task, we retrieve the MapWork from cache. At this point, some 
operator's state is {{State.INIT}}, although the previous initialization 
actually failed. So initialization is skipped and the task somehow finishes 
successfully. I think one way to fix it is to clear the work cache when 
initialization fails. I've created HIVE-19439 to track that.
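
Purely to illustrate that idea, a hedged sketch (the cache, keys and init 
hooks below are hypothetical stand-ins, not the actual Hive-on-Spark classes) 
of evicting cached work when operator initialization fails, so a retried task 
deserializes a fresh copy:
{code:java}
// Hedged sketch; WORK_CACHE and the init hooks are hypothetical stand-ins.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WorkCacheSketch {

  // Stands in for the per-executor plan cache keyed by the work's identity.
  private static final Map<String, Object> WORK_CACHE = new ConcurrentHashMap<>();

  static Object getOrLoad(String key) {
    return WORK_CACHE.computeIfAbsent(key, WorkCacheSketch::deserializeWork);
  }

  static void initializeForTask(String key) {
    Object work = getOrLoad(key);
    try {
      initOperators(work); // may flip some operators to INIT before failing
    } catch (RuntimeException e) {
      WORK_CACHE.remove(key); // evict so the retried task gets a clean copy
      throw e;
    }
  }

  private static Object deserializeWork(String key) { return new Object(); }

  private static void initOperators(Object work) { /* operator init here */ }
}
{code}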

> ClassCastException during VectorMapJoinCommonOperator initialization
> 
>
> Key: HIVE-19388
> URL: https://issues.apache.org/jira/browse/HIVE-19388
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.1, 2.2.0, 3.0.0, 2.3.2, 3.1.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-19388.01.patch, HIVE-19388.02.patch
>
>
> I see the following exception when a mapjoin operator is being initialized 
> on Hive-on-Spark and when vectorization is turned on.
> This happens when the hashTable is empty. The 
> {{MapJoinTableContainerSerDe#getDefaultEmptyContainer}} method returns a 
> HashMapWrapper while the VectorMapJoinOperator expects a 
> {{MapJoinBytesTableContainer}} when {{hive.mapjoin.optimized.hashtable}} is 
> set to true.
> {noformat}
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.ql.exec.persistence.HashMapWrapper cannot be cast to 
> org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerDirectAccess
>  at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.optimized.VectorMapJoinOptimizedHashTable.<init>(VectorMapJoinOptimizedHashTable.java:92)
>  ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.optimized.VectorMapJoinOptimizedHashMap.<init>(VectorMapJoinOptimizedHashMap.java:127)
>  ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.optimized.VectorMapJoinOptimizedStringHashMap.<init>(VectorMapJoinOptimizedStringHashMap.java:60)
>  ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.optimized.VectorMapJoinOptimizedCreateHashTable.createHashTable(VectorMapJoinOptimizedCreateHashTable.java:80)
>  ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinCommonOperator.setUpHashTable(VectorMapJoinCommonOperator.java:485)
>  ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinCommonOperator.completeInitializationOp(VectorMapJoinCommonOperator.java:461)
>  ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.exec.Operator.completeInitialization(Operator.java:471)
>  ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
>  at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:401) 
> ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
>  at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:574) 
> ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:526) 
> ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
>  at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:387) 
> ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.init(SparkMapRecordHandler.java:109)
>  ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
>  ... 16 more
> {noformat}
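
As an aside for readers, a hedged sketch of the shape of the mismatch 
(stand-in types; the real classes are HashMapWrapper and 
MapJoinBytesTableContainer in org.apache.hadoop.hive.ql.exec.persistence): 
the empty-table fallback has to honor the same config switch the vectorized 
operator keys off.
{code:java}
// Hedged stand-ins only; not the actual persistence classes.
interface MapJoinContainer {}

class LegacyHashMapContainer implements MapJoinContainer {}  // ~HashMapWrapper
class OptimizedBytesContainer implements MapJoinContainer {} // ~MapJoinBytesTableContainer

public class EmptyContainerSketch {
  // The report boils down to this fallback ignoring the flag and always
  // returning the legacy container, which the vectorized join then miscasts.
  static MapJoinContainer defaultEmptyContainer(boolean optimizedHashtable) {
    return optimizedHashtable
        ? new OptimizedBytesContainer() // what the vectorized operator expects
        : new LegacyHashMapContainer();
  }
}
{code}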



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19436) NullPointerException while getting block info

2018-05-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465921#comment-16465921
 ] 

Hive QA commented on HIVE-19436:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
1s{color} | {color:blue} ql in master has 2318 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
34s{color} | {color:red} ql: The patch generated 2 new + 33 unchanged - 2 fixed 
= 35 total (was 35) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10745/dev-support/hive-personality.sh
 |
| git revision | master / 88d224f |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10745/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10745/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> NullPointerException while getting block info
> -
>
> Key: HIVE-19436
> URL: https://issues.apache.org/jira/browse/HIVE-19436
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.3.2
>Reporter: Amruth S
>Assignee: Amruth S
>Priority: Major
> Attachments: HIVE-19436.patch
>
>
> From Hive 2.3.2, there are cases where the block info object comes out to be 
> null (src/java/org/apache/hadoop/hive/ql/exec/persistence/PTFRowContainer.java).
> It comes in this code path:
>  
> {code:java}
> if ( blockInfos.size() > 0 ) {
>  InputSplit[] inputSplits = getInputSplits();
>  FileSplit fS = null;
>  BlockInfo bI = blockInfos.get(0);
> ...
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19212) Fix findbugs yetus pre-commit checks

2018-05-07 Thread Peter Vary (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465973#comment-16465973
 ] 

Peter Vary commented on HIVE-19212:
---

Thanks for the patch [~stakiar]! Good to have this online :D

> Fix findbugs yetus pre-commit checks
> 
>
> Key: HIVE-19212
> URL: https://issues.apache.org/jira/browse/HIVE-19212
> Project: Hive
>  Issue Type: Sub-task
>  Components: Testing Infrastructure
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-19212.1.patch, HIVE-19212.2.patch, 
> HIVE-19212.3.patch
>
>
> Follow up from HIVE-18883, the committed patch isn't working and Findbugs is 
> still not working.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19429) Investigate alternative technologies like docker containers to increase parallelism

2018-05-07 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465976#comment-16465976
 ] 

Alan Gates commented on HIVE-19429:
---

{quote}How much memory does your machine have?
{quote}
256G
{quote}I could not find a way to get the test results for the failed test.
{quote}
Yeah, I have not gotten to that part yet.  It should be easy enough to change 
the ResultsAnalyzer to grab that information as well.  It may require 
revivifying the container to obtain the logs.  Though it will be better if we 
can teach the container to print the logs of failed tests so that "docker logs" 
will automatically get them in the first pass.
{quote}I think the existing batching logic is better than the one you have 
since we don't have to hardcode the directory names. The existing batching 
logic is much more customizable with regards to the batch sizes of individual 
CliDrivers.
{quote}
I don't like that I have the directory names etc. hard coded in the code.  At 
the very least this should be in configuration.  I have completely rewritten 
the MvnCommandFactory at least twice.  Every time I tried to make it more 
general, though, it got insanely complicated.  Which leads me to the conclusion that 
rather than making this code much smarter, we should make the tests much 
simpler.  We should not have to read two config files to figure out which 
qfiles to run with which tests.  Ideally we could figure out a way to surface 
qfiles as individual tests rather than all buried in one test.  I have some 
thoughts on how to achieve this, but it's longer term.  Also, I haven't found 
the flexibility of different batch sizes worth the effort.  One size fits all 
isn't perfect but seems to be good enough.
{quote}I think it would be useful to run these containers in a cluster so that 
we can support multiple patches a time to speed up the testing.
{quote}
Definitely.  I happen to have a beefy machine handy, but that isn't the general 
case.  I designed it to support multiple container providers so it should be 
easy to write a ContainerClient that supports Yarn or Kubernetes instead of 
simple Docker.
{quote}Also, not sure if there is a way to run command on an existing docker 
container so that we can re-use deployed containers.
{quote}
I am not a Docker expert, but I think this is an anti-pattern.  Spinning up a 
new container is very fast and very low cost.  To reuse a container you either 
have to build it as a standing service that can keep taking requests (which is 
much more complex than running a simple test command), or turn the container 
into an image and then start a new container on that image (so you are starting 
a new container anyway).  Both of these are much more heavyweight than just 
starting a new container.  Occasionally we may be forced to restart the 
container to get information out of it (like in the case of getting logs from 
failed tests).
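
To make the throwaway-container pattern concrete, a hedged sketch (the image 
name and test command are hypothetical) that drives the Docker CLI from Java, 
roughly in the spirit of a ContainerClient:
{code:java}
// Hedged sketch: "hive-test-image" and the mvn batch are hypothetical.
import java.util.Arrays;

public class DockerRunSketch {
  public static void main(String[] args) throws Exception {
    // --rm throws the container away after the batch; starting a fresh one
    // per batch is cheap compared to keeping a standing service alive.
    Process p = new ProcessBuilder(Arrays.asList(
        "docker", "run", "--rm", "hive-test-image",
        "mvn", "test", "-Dtest=TestCliDriver"))
        .inheritIO()
        .start();
    System.exit(p.waitFor());
  }
}
{code}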

> Investigate alternative technologies like docker containers to increase 
> parallelism
> ---
>
> Key: HIVE-19429
> URL: https://issues.apache.org/jira/browse/HIVE-19429
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Vihang Karajgaonkar
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19436) NullPointerException while getting block info

2018-05-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465978#comment-16465978
 ] 

Hive QA commented on HIVE-19436:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/1297/HIVE-19436.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 32 failed/errored test(s), 14322 tests 
executed
*Failed tests:*
{noformat}
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed 
out) (batchId=217)
TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez2]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_vector_dynpart_hashjoin_1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=105)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] 
(batchId=96)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 
(batchId=228)
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232)
org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate
 (batchId=231)
org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys
 (batchId=231)
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=235)
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgressParallel 
(batchId=235)
org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryElapsedTime
 (batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryExecutionTime
 (batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerVertexRawInputSplitsNoKill
 (batchId=241)
org.apache.hive.spark.client.rpc.TestRpc.testServerPort (batchId=304)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10745/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10745/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10745/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 32 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 1297 - PreCommit-HIVE-Build

> NullPointerException while getting block info
> -
>
> Key: HIVE-19436
> URL: https://issues.apache.org/jira/browse/HIVE-19436
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.3.2
>Reporter: Amruth S
>Assignee: Amruth S
>Priority: Major
> Attachments: HIVE-19436.patch
>
>
> From Hive 2.3.2, there are cases where the block info object comes out to be 
> null (src/java/org/apache/hadoop/hive/ql/exec/persistence/PTFRowContainer.java).
> It comes in this code path:
>  
> {code:java}
> if ( blockInfos.size() > 0 ) {
>  InputSplit[] inputSplits = getInputSplits();
>  FileSplit fS = null;
>  BlockInfo bI = 

[jira] [Commented] (HIVE-19225) Class cast exception while running certain queries with UDAF like rank on internal struct columns

2018-05-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465995#comment-16465995
 ] 

Hive QA commented on HIVE-19225:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
40s{color} | {color:blue} serde in master has 190 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10746/dev-support/hive-personality.sh
 |
| git revision | master / 88d224f |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: serde U: serde |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10746/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Class cast exception while running certain queries with UDAF like rank on 
> internal struct columns
> -
>
> Key: HIVE-19225
> URL: https://issues.apache.org/jira/browse/HIVE-19225
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.3.2
>Reporter: Amruth S
>Assignee: Amruth S
>Priority: Major
> Attachments: HIVE-19225.patch
>
>
>  
> *To reproduce : [tag - 2.3.2]*
> Launch hive in debug mode
> {code:java}
> hive --hiveconf hive.root.logger=DEBUG,console;{code}
> Run the sample sql below
> {code:java}
> SET mapreduce.framework.name=local; 
> CREATE TABLE `test_class_cast` as select 
> named_struct('a','a','b','b','c','c','d','d','e',true,'f','f','g',timestamp(1),'h','h'),
>  'i'; 
> select `_c0`.c, `_c0`.g, `_c0`.a, rank() over (partition by `_c0`.c order by 
> `_c0`.g desc) as rown,`_c0`.f,`_c0`.e from default.test_class_cast where 
> `_c0`.f like '%f%' or `_c0`.f like '%f%' {code}
> Should fail with the exception 
> {code:java}
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing row (tag=0) 
> {"key":{"reducesinkkey0":"c","reducesinkkey1":"1970-01-01 
> 05:30:00.001"},"value":{"_col0":{"a":"a","b":"b","c":"c","d":"d","e":true,"f":"f","g":"1970-01-01
>  05:30:00.001","h":"h"}}}
>  at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:245) 
> ~[hive-exec-2.3.2.fk.7.jar:2.3.2.fk.7]
>  at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:444) 
> ~[hadoop-mapreduce

[jira] [Commented] (HIVE-19439) MapWork shouldn't be reused when Spark task fails during initialization

2018-05-07 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16466029#comment-16466029
 ] 

Vihang Karajgaonkar commented on HIVE-19439:


copying this from HIVE-19388 for reference:

{quote}
As for your observation about spark_vectorized_dynamic_partition_pruning.q, 
seems that's indeed another bug. The task fails during MapWork initialization. 
When we retry the task, we retrieve the MapWork from cache. At this point, some 
operator's state is State.INIT, although the previous initialization actually 
failed. So initialization is skipped and the task somehow finishes 
successfully. I think one way to fix it is to clear the work cache when 
initialization fails.
{quote}

Hi [~lirui], can you please point me to the code that retries the task? Thanks!

> MapWork shouldn't be reused when Spark task fails during initialization
> ---
>
> Key: HIVE-19439
> URL: https://issues.apache.org/jira/browse/HIVE-19439
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Rui Li
>Priority: Major
>
> Issue identified in HIVE-19388. When a Spark task fails during initializing 
> the map operator, the task is retried with the same MapWork retrieved from 
> cache. This can be problematic because the MapWork may be partially 
> initialized, e.g. some operators are already in INIT state.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19225) Class cast exception while running certain queries with UDAF like rank on internal struct columns

2018-05-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16466090#comment-16466090
 ] 

Hive QA commented on HIVE-19225:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/1299/HIVE-19225.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 32 failed/errored test(s), 14322 tests 
executed
*Failed tests:*
{noformat}
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed 
out) (batchId=217)
TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez2]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_dynpart_hashjoin_1]
 (batchId=174)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=105)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] 
(batchId=96)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 
(batchId=228)
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232)
org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate
 (batchId=231)
org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys
 (batchId=231)
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=235)
org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryElapsedTime
 (batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryExecutionTime
 (batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerVertexRawInputSplitsNoKill
 (batchId=241)
org.apache.hive.spark.client.rpc.TestRpc.testServerPort (batchId=304)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10746/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10746/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10746/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 32 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 1299 - PreCommit-HIVE-Build

> Class cast exception while running certain queries with UDAF like rank on 
> internal struct columns
> -
>
> Key: HIVE-19225
> URL: https://issues.apache.org/jira/browse/HIVE-19225
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.3.2
>Reporter: Amruth S
>Assignee: Amruth S
>Priority: Major
> Attachments: HIVE-19225.patch
>
>
>  
> *To reproduce : [tag - 2.3.2]*
> Launch hive in debug mode
> {code:java}
> hive --hiveconf hive.root.logger=DEBUG,console;{code}
> Run the sample sql below
> {code:java}
> SET mapreduce.framework.name=local; 
>

[jira] [Commented] (HIVE-18497) JDBC connection parameter to control socket read and connect timeout

2018-05-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16466110#comment-16466110
 ] 

Hive QA commented on HIVE-18497:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
28s{color} | {color:blue} jdbc in master has 17 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} jdbc: The patch generated 1 new + 47 unchanged - 0 
fixed = 48 total (was 47) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10747/dev-support/hive-personality.sh
 |
| git revision | master / 88d224f |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10747/yetus/diff-checkstyle-jdbc.txt
 |
| modules | C: jdbc U: jdbc |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10747/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> JDBC connection parameter to control socket read and connect timeout
> 
>
> Key: HIVE-18497
> URL: https://issues.apache.org/jira/browse/HIVE-18497
> Project: Hive
>  Issue Type: Improvement
>  Components: Clients, JDBC
>Reporter: Amruth S
>Assignee: Amruth S
>Priority: Minor
> Attachments: HIVE-18497.patch
>
>
> Hive server failures are making the JDBC client get stuck in socketRead.
> Users should be able to configure socket read timeout to fail fast in case of 
> server failures.
> *Proposed solution*
> Add a JDBC connection param 
> *hive.client.read.socket.timeoutmillis*
> This can control the socket read timeout and connect timeout in both TCP and 
> HTTP modes.
> Let me know your thoughts.
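
A hedged sketch of what honoring such a parameter could look like at the 
socket level (the URL key is the proposal's; everything else is illustrative 
plain java.net code, not the actual Hive JDBC internals):
{code:java}
// Illustrative only: plain java.net timeouts a client could apply from a
// parsed hive.client.read.socket.timeoutmillis URL parameter.
import java.net.InetSocketAddress;
import java.net.Socket;

public class SocketTimeoutSketch {
  static Socket open(String host, int port, int timeoutMillis) throws Exception {
    Socket s = new Socket();
    s.connect(new InetSocketAddress(host, port), timeoutMillis); // connect timeout
    s.setSoTimeout(timeoutMillis); // read timeout: a hung socketRead fails fast
    return s;
  }
}
{code}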



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19161) Add authorizations to information schema

2018-05-07 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-19161:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.0
   3.0.0
   Status: Resolved  (was: Patch Available)

Patch pushed to both master and branch-3.

> Add authorizations to information schema
> 
>
> Key: HIVE-19161
> URL: https://issues.apache.org/jira/browse/HIVE-19161
> Project: Hive
>  Issue Type: Improvement
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19161.1.patch, HIVE-19161.10.patch, 
> HIVE-19161.11.patch, HIVE-19161.12.patch, HIVE-19161.13.patch, 
> HIVE-19161.14.patch, HIVE-19161.15.patch, HIVE-19161.2.patch, 
> HIVE-19161.3.patch, HIVE-19161.4.patch, HIVE-19161.5.patch, 
> HIVE-19161.6.patch, HIVE-19161.7.patch, HIVE-19161.8.patch, HIVE-19161.9.patch
>
>
> We need to control access to the information schema so users can only query 
> the information they are authorized to see.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19306) Arrow batch serializer

2018-05-07 Thread Eric Wohlstadter (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16466188#comment-16466188
 ] 

Eric Wohlstadter commented on HIVE-19306:
-

+1 lgtm

> Arrow batch serializer
> --
>
> Key: HIVE-19306
> URL: https://issues.apache.org/jira/browse/HIVE-19306
> Project: Hive
>  Issue Type: Task
>  Components: Serializers/Deserializers
>Reporter: Eric Wohlstadter
>Assignee: Teddy Choi
>Priority: Major
> Attachments: HIVE-19306.2.patch
>
>
> Leverage the ThriftJDBCBinarySerDe code path that already exists in 
> SemanticAnalyzer/FileSinkOperator to create a serializer that batches rows 
> into Arrow vector batches.
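
For readers new to the target format, a hedged, Hive-agnostic example of 
filling an Arrow column vector with the plain arrow-vector Java API (this is 
not the serializer's code):
{code:java}
// Standalone arrow-vector example; unrelated to Hive's serializer internals.
import org.apache.arrow.memory.RootAllocator;
import org.apache.arrow.vector.IntVector;

public class ArrowBatchSketch {
  public static void main(String[] args) {
    try (RootAllocator allocator = new RootAllocator(Long.MAX_VALUE);
         IntVector col = new IntVector("c0", allocator)) {
      col.allocateNew(3);              // reserve room for a 3-row batch
      for (int i = 0; i < 3; i++) {
        col.set(i, i * 10);            // write row i
      }
      col.setValueCount(3);            // seal the batch at 3 rows
      System.out.println(col.get(2));  // prints 20
    }
  }
}
{code}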



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19423) REPL LOAD creates staging directory in source dump directory instead of table data location

2018-05-07 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16466190#comment-16466190
 ] 

Sankar Hariappan commented on HIVE-19423:
-

+1, 02.patch looks good to me

> REPL LOAD creates staging directory in source dump directory instead of table 
> data location
> ---
>
> Key: HIVE-19423
> URL: https://issues.apache.org/jira/browse/HIVE-19423
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2, repl
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: Hive, Repl, pull-request-available
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19423.01.patch, HIVE-19423.02.patch
>
>
> REPL LOAD creates staging directory in source dump directory instead of table 
> data location. In case of replication from on-prem to cloud it can create 
> problems. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19423) REPL LOAD creates staging directory in source dump directory instead of table data location

2018-05-07 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16466207#comment-16466207
 ] 

Sankar Hariappan commented on HIVE-19423:
-

Patch committed to master and branch-3.

Thanks for the contribution [~maheshk114]!

> REPL LOAD creates staging directory in source dump directory instead of table 
> data location
> ---
>
> Key: HIVE-19423
> URL: https://issues.apache.org/jira/browse/HIVE-19423
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2, repl
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: Hive, Repl, pull-request-available
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19423.01.patch, HIVE-19423.02.patch
>
>
> REPL LOAD creates the staging directory in the source dump directory instead of 
> the table data location. In the case of replication from on-prem to cloud, this 
> can create problems. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19423) REPL LOAD creates staging directory in source dump directory instead of table data location

2018-05-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19423:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> REPL LOAD creates staging directory in source dump directory instead of table 
> data location
> ---
>
> Key: HIVE-19423
> URL: https://issues.apache.org/jira/browse/HIVE-19423
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2, repl
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: Hive, Repl, pull-request-available
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19423.01.patch, HIVE-19423.02.patch
>
>
> REPL LOAD creates the staging directory in the source dump directory instead of 
> the table data location. In the case of replication from on-prem to cloud, this 
> can create problems. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19248) REPL LOAD couldn't copy file from source CM path and also doesn't throw error if file copy fails.

2018-05-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19248:

Status: Open  (was: Patch Available)

> REPL LOAD couldn't copy file from source CM path and also doesn't throw error 
> if file copy fails.
> -
>
> Key: HIVE-19248
> URL: https://issues.apache.org/jira/browse/HIVE-19248
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19248.01.patch
>
>
> Hive replication uses Hadoop distcp to copy files from the primary to the 
> replica warehouse. If the HDFS block size is different across clusters, it 
> causes file copy failures.
> {code:java}
> 2018-04-09 14:32:06,690 ERROR [main] 
> org.apache.hadoop.tools.mapred.CopyMapper: Failure in copying 
> hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 to 
> hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/000259_0
> java.io.IOException: File copy failed: 
> hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 
> --> 
> hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/000259_0
>  at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:299)
>  at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:266)
>  at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:52)
>  at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
>  at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>  at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164)
> Caused by: java.io.IOException: Couldn't run retriable-command: Copying 
> hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 to 
> hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/000259_0
>  at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>  at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:296)
>  ... 10 more
> Caused by: java.io.IOException: Check-sum mismatch between 
> hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 
> and 
> hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/.distcp.tmp.attempt_1522833620762_4416_m_00_0.
>  Source and target differ in block-size. Use -pb to preserve block-sizes 
> during copy. Alternatively, skip checksum-checks altogether, using -skipCrc. 
> (NOTE: By skipping checksums, one runs the risk of masking data-corruption 
> during file-transfer.)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.compareCheckSums(RetriableFileCopyCommand.java:212)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:130)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:99)
>  at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>  ... 11 more
> {code}
> Distcp failed because the CM path for the file doesn't point to the source file 
> system. So, the qualified CM root URI needs to be captured as part of the files 
> listed in the dump.
> Also, REPL LOAD returns success even if the distcp jobs failed: 
> CopyUtils.doCopyRetry doesn't throw an error if the copy fails even after the 
> maximum number of attempts. 
> So, we need to do three things.
>  # If the copy of multiple files fails for some reason, then retry with the same 
> set of files, but use the CM path if the original source file is missing or was 
> modified (detected via checksum). Let distcp skip the files that were already 
> copied properly; FileUtil.copy will always overwrite the files.
>  # If the source path was moved to the CM path, then delete the incorrectly 
> copied files.
>  # If the copy still fails after the maximum number of attempts, then throw an 
> error.
>  
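
To make the three points concrete, here is a rough, illustrative sketch of the 
intended retry shape; it is not the actual CopyUtils change. The method 
signature, MAX_ATTEMPTS, and the cmRoot/<file-name> layout are assumptions, and 
the checksum comparison used to detect modified files is omitted for brevity.

{code:java}
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class CmFallbackCopySketch {
  private static final int MAX_ATTEMPTS = 3;  // hypothetical retry budget

  // Retry the whole file set; per file, fall back to its CM counterpart when
  // the original source file is gone. Throws once the retry budget is spent.
  static void copyWithRetry(FileSystem srcFs, FileSystem dstFs, List<Path> sources,
      Path cmRoot, Path dstDir, Configuration conf) throws IOException {
    for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
      try {
        for (Path src : sources) {
          // Use the CM path if the original source file no longer exists.
          Path effective = srcFs.exists(src) ? src : new Path(cmRoot, src.getName());
          FileUtil.copy(srcFs, effective, dstFs, new Path(dstDir, src.getName()),
              /* deleteSource */ false, /* overwrite */ true, conf);
        }
        return;  // every file copied
      } catch (IOException e) {
        if (attempt == MAX_ATTEMPTS) {
          // Point 3: surface the failure instead of reporting success.
          throw new IOException("copy failed after " + MAX_ATTEMPTS + " attempts", e);
        }
      }
    }
  }
}
{code}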



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19248) REPL LOAD couldn't copy file from source CM path and also doesn't throw error if file copy fails.

2018-05-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19248:

Attachment: HIVE-19248.02.patch

> REPL LOAD couldn't copy file from source CM path and also doesn't throw error 
> if file copy fails.
> -
>
> Key: HIVE-19248
> URL: https://issues.apache.org/jira/browse/HIVE-19248
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19248.01.patch, HIVE-19248.02.patch
>
>
> Hive replication uses Hadoop distcp to copy files from the primary to the 
> replica warehouse. If the HDFS block size is different across clusters, it 
> causes file copy failures.
> {code:java}
> 2018-04-09 14:32:06,690 ERROR [main] 
> org.apache.hadoop.tools.mapred.CopyMapper: Failure in copying 
> hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 to 
> hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/000259_0
> java.io.IOException: File copy failed: 
> hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 
> --> 
> hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/000259_0
>  at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:299)
>  at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:266)
>  at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:52)
>  at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
>  at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>  at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164)
> Caused by: java.io.IOException: Couldn't run retriable-command: Copying 
> hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 to 
> hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/000259_0
>  at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>  at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:296)
>  ... 10 more
> Caused by: java.io.IOException: Check-sum mismatch between 
> hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 
> and 
> hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/.distcp.tmp.attempt_1522833620762_4416_m_00_0.
>  Source and target differ in block-size. Use -pb to preserve block-sizes 
> during copy. Alternatively, skip checksum-checks altogether, using -skipCrc. 
> (NOTE: By skipping checksums, one runs the risk of masking data-corruption 
> during file-transfer.)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.compareCheckSums(RetriableFileCopyCommand.java:212)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:130)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:99)
>  at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>  ... 11 more
> {code}
> Distcp failed because the CM path for the file doesn't point to the source file 
> system. So, the qualified CM root URI needs to be captured as part of the files 
> listed in the dump.
> Also, REPL LOAD returns success even if the distcp jobs failed: 
> CopyUtils.doCopyRetry doesn't throw an error if the copy fails even after the 
> maximum number of attempts. 
> So, we need to do three things.
>  # If the copy of multiple files fails for some reason, then retry with the same 
> set of files, but use the CM path if the original source file is missing or was 
> modified (detected via checksum). Let distcp skip the files that were already 
> copied properly; FileUtil.copy will always overwrite the files.
>  # If the source path was moved to the CM path, then delete the incorrectly 
> copied files.
>  # If the copy still fails after the maximum number of attempts, then throw an 
> error.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19248) REPL LOAD couldn't copy file from source CM path and also doesn't throw error if file copy fails.

2018-05-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19248:

Status: Patch Available  (was: Open)

Attached 02.patch with fixes for review comments from [~maheshk114].

> REPL LOAD couldn't copy file from source CM path and also doesn't throw error 
> if file copy fails.
> -
>
> Key: HIVE-19248
> URL: https://issues.apache.org/jira/browse/HIVE-19248
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19248.01.patch, HIVE-19248.02.patch
>
>
> Hive replication uses Hadoop distcp to copy files from the primary to the 
> replica warehouse. If the HDFS block size is different across clusters, it 
> causes file copy failures.
> {code:java}
> 2018-04-09 14:32:06,690 ERROR [main] 
> org.apache.hadoop.tools.mapred.CopyMapper: Failure in copying 
> hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 to 
> hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/000259_0
> java.io.IOException: File copy failed: 
> hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 
> --> 
> hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/000259_0
>  at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:299)
>  at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:266)
>  at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:52)
>  at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
>  at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>  at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164)
> Caused by: java.io.IOException: Couldn't run retriable-command: Copying 
> hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 to 
> hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/000259_0
>  at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>  at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:296)
>  ... 10 more
> Caused by: java.io.IOException: Check-sum mismatch between 
> hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 
> and 
> hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/.distcp.tmp.attempt_1522833620762_4416_m_00_0.
>  Source and target differ in block-size. Use -pb to preserve block-sizes 
> during copy. Alternatively, skip checksum-checks altogether, using -skipCrc. 
> (NOTE: By skipping checksums, one runs the risk of masking data-corruption 
> during file-transfer.)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.compareCheckSums(RetriableFileCopyCommand.java:212)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:130)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:99)
>  at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>  ... 11 more
> {code}
> Distcp failed because the CM path for the file doesn't point to the source file 
> system. So, the qualified CM root URI needs to be captured as part of the files 
> listed in the dump.
> Also, REPL LOAD returns success even if the distcp jobs failed: 
> CopyUtils.doCopyRetry doesn't throw an error if the copy fails even after the 
> maximum number of attempts. 
> So, we need to do three things.
>  # If the copy of multiple files fails for some reason, then retry with the same 
> set of files, but use the CM path if the original source file is missing or was 
> modified (detected via checksum). Let distcp skip the files that were already 
> copied properly; FileUtil.copy will always overwrite the files.
>  # If the source path was moved to the CM path, then delete the incorrectly 
> copied files.
>  # If the copy still fails after the maximum number of attempts, then throw an 
> error.
>  



--
This message was sent by Atlassian JIRA

[jira] [Updated] (HIVE-19306) Arrow batch serializer

2018-05-07 Thread Eric Wohlstadter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Wohlstadter updated HIVE-19306:

Attachment: HIVE-19306.3.patch

> Arrow batch serializer
> --
>
> Key: HIVE-19306
> URL: https://issues.apache.org/jira/browse/HIVE-19306
> Project: Hive
>  Issue Type: Task
>  Components: Serializers/Deserializers
>Reporter: Eric Wohlstadter
>Assignee: Teddy Choi
>Priority: Major
> Attachments: HIVE-19306.2.patch, HIVE-19306.3.patch
>
>
> Leverage the ThriftJDBCBinarySerDe code path that already exists in 
> SemanticAnalyzer/FileSinkOperator to create a serializer that batches rows 
> into Arrow vector batches.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19306) Arrow batch serializer

2018-05-07 Thread Eric Wohlstadter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Wohlstadter updated HIVE-19306:

Status: Open  (was: Patch Available)

> Arrow batch serializer
> --
>
> Key: HIVE-19306
> URL: https://issues.apache.org/jira/browse/HIVE-19306
> Project: Hive
>  Issue Type: Task
>  Components: Serializers/Deserializers
>Reporter: Eric Wohlstadter
>Assignee: Teddy Choi
>Priority: Major
> Attachments: HIVE-19306.2.patch, HIVE-19306.3.patch
>
>
> Leverage the ThriftJDBCBinarySerDe code path that already exists in 
> SemanticAnalyzer/FileSinkOperator to create a serializer that batches rows 
> into Arrow vector batches.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19306) Arrow batch serializer

2018-05-07 Thread Eric Wohlstadter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Wohlstadter updated HIVE-19306:

Status: Patch Available  (was: Open)

> Arrow batch serializer
> --
>
> Key: HIVE-19306
> URL: https://issues.apache.org/jira/browse/HIVE-19306
> Project: Hive
>  Issue Type: Task
>  Components: Serializers/Deserializers
>Reporter: Eric Wohlstadter
>Assignee: Teddy Choi
>Priority: Major
> Attachments: HIVE-19306.2.patch, HIVE-19306.3.patch
>
>
> Leverage the ThriftJDBCBinarySerDe code path that already exists in 
> SemanticAnalyzer/FileSinkOperator to create a serializer that batches rows 
> into Arrow vector batches.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19440) Make StorageBasedAuthorizer work with information schema

2018-05-07 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai reassigned HIVE-19440:
-


> Make StorageBasedAuthorizer work with information schema
> 
>
> Key: HIVE-19440
> URL: https://issues.apache.org/jira/browse/HIVE-19440
> Project: Hive
>  Issue Type: Improvement
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
>
> With HIVE-19161, the Hive information schema works with external authorizers 
> (such as Ranger). However, we also need to make StorageBasedAuthorizer 
> synchronization work, as it is also widely used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18497) JDBC connection parameter to control socket read and connect timeout

2018-05-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16466239#comment-16466239
 ] 

Hive QA commented on HIVE-18497:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12922232/HIVE-18497.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 33 failed/errored test(s), 14322 tests 
executed
*Failed tests:*
{noformat}
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed 
out) (batchId=217)
TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez2]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_4]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_dynpart_hashjoin_1]
 (batchId=174)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_stats]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=105)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] 
(batchId=96)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 
(batchId=228)
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232)
org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate
 (batchId=231)
org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys
 (batchId=231)
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=235)
org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryElapsedTime
 (batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryExecutionTime
 (batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerVertexRawInputSplitsNoKill
 (batchId=241)
org.apache.hive.spark.client.rpc.TestRpc.testServerPort (batchId=304)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10747/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10747/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10747/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 33 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12922232 - PreCommit-HIVE-Build

> JDBC connection parameter to control socket read and connect timeout
> 
>
> Key: HIVE-18497
> URL: https://issues.apache.org/jira/browse/HIVE-18497
> Project: Hive
>  Issue Type: Improvement
>  Components: Clients, JDBC
>Reporter: Amruth S
>Assignee: Amruth S
>Priority: Minor
> Attachments: HIVE-18497.patch
>
>
> Hive server failures leave the JDBC client stuck in socketRead.
> Users should be able to configure a socket read timeout so the client can fail 
> fast in case of server failures.
> *Proposed so
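
The proposed solution is truncated in this digest. Purely as an illustration of 
how such a knob might be used from client code: the socketTimeout URL parameter 
below is hypothetical, and only DriverManager.setLoginTimeout is standard JDBC.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class JdbcTimeoutSketch {
  public static void main(String[] args) throws SQLException {
    // Covers connection establishment only; the JIRA asks for a read timeout
    // on an already-established socket as well.
    DriverManager.setLoginTimeout(30);
    // "socketTimeout=60000" is a hypothetical URL parameter used purely for
    // illustration; the actual name and semantics come from the patch.
    String url = "jdbc:hive2://hs2-host:10000/default;socketTimeout=60000";
    try (Connection conn = DriverManager.getConnection(url, "user", "")) {
      System.out.println("connected, closed=" + conn.isClosed());
    }
  }
}
{code}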

[jira] [Commented] (HIVE-19435) Incremental replication cause data loss if a table is dropped followed by create and insert-into with different partition type.

2018-05-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16466240#comment-16466240
 ] 

Hive QA commented on HIVE-19435:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12922254/HIVE-19435.01.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10748/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10748/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10748/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-05-07 17:45:25.781
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-10748/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-05-07 17:45:25.784
+ cd apache-github-source-source
+ git fetch origin
From https://github.com/apache/hive
   88d224f..04f5c60  master -> origin/master
   b6469e1..6e2a85a  branch-3   -> origin/branch-3
+ git reset --hard HEAD
HEAD is now at 88d224f HIVE-19344 : Change default value of 
msck.repair.batch.size (Vihang Karajgaonkar reviewed by Sahil Takiar)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is behind 'origin/master' by 2 commits, and can be fast-forwarded.
  (use "git pull" to update your local branch)
+ git reset --hard origin/master
HEAD is now at 04f5c60 HIVE-19423 : REPL LOAD creates staging directory in 
source dump directory instead of table data location
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-05-07 17:45:28.319
+ rm -rf ../yetus_PreCommit-HIVE-Build-10748
+ mkdir ../yetus_PreCommit-HIVE-Build-10748
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-10748
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-10748/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: 
a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenariosAcrossInstances.java:
 does not exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java: does 
not exist in index
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java:1079
Falling back to three-way merge...
Applied patch to 
'ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java' with 
conflicts.
Going to apply patch with: git apply -p1
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java:1079
Falling back to three-way merge...
Applied patch to 
'ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java' with 
conflicts.
U ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12922254 - PreCommit-HIVE-Build

> Incremental replication cause data loss if a table is dropped followed by 
> create and insert-into with different partition type.
> ---
>
> Key: HIVE-19435
> URL: https://issues.apache.org/jira/browse/HIVE-19435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19435.01.patch
>
>
> If the incremental dump has a drop of a partitioned table followed by a 
> create/insert on a non-partitioned table with the same name, the data doesn't 
> get replicated.

[jira] [Updated] (HIVE-19421) Upgrade versions of Jetty and Jackson

2018-05-07 Thread Janaki Lahorani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janaki Lahorani updated HIVE-19421:
---
Attachment: HIVE-19421.2.patch

> Upgrade versions of Jetty and Jackson
> -
>
> Key: HIVE-19421
> URL: https://issues.apache.org/jira/browse/HIVE-19421
> Project: Hive
>  Issue Type: Improvement
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-19421.1.patch, HIVE-19421.2.patch
>
>
> Move Jackson up to 2.9.5
> Move Jetty up to 9.3.20.v20170531



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19435) Incremental replication cause data loss if a table is dropped followed by create and insert-into with different partition type.

2018-05-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19435:

Status: Open  (was: Patch Available)

> Incremental replication cause data loss if a table is dropped followed by 
> create and insert-into with different partition type.
> ---
>
> Key: HIVE-19435
> URL: https://issues.apache.org/jira/browse/HIVE-19435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19435.01.patch
>
>
> If the incremental dump has a drop of a partitioned table followed by a 
> create/insert on a non-partitioned table with the same name, the data doesn't 
> get replicated. Explained below.
> Let's say we have a partitioned table T1 which was already replicated to the 
> target.
> DROP_TABLE(T1) -> CREATE_TABLE(T1) (non-partitioned) -> INSERT(T1)(10)
> After REPL LOAD, T1 doesn't have any data.
> The same holds for the non-partitioned to partitioned and the partition spec 
> mismatch cases as well.
>  
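
The failing sequence can be driven end to end over JDBC. A minimal repro sketch 
follows, with placeholder connection URL, database, and table names; it assumes 
T1 was a partitioned table already replicated to the target.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReplTypeChangeRepro {
  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection(
             "jdbc:hive2://primary-hs2:10000/repl_db", "user", "");
         Statement stmt = conn.createStatement()) {
      stmt.execute("DROP TABLE t1");
      stmt.execute("CREATE TABLE t1 (a INT)");        // recreated non-partitioned
      stmt.execute("INSERT INTO t1 VALUES (10)");
      // The incremental dump now carries DROP_TABLE -> CREATE_TABLE -> INSERT.
      if (stmt.execute("REPL DUMP repl_db")) {
        try (ResultSet rs = stmt.getResultSet()) {
          while (rs.next()) {
            System.out.println(rs.getString(1));      // dump dir to feed REPL LOAD
          }
        }
      }
      // Before the fix, REPL LOAD of this dump on the target leaves t1 empty.
    }
  }
}
{code}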



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19435) Incremental replication cause data loss if a table is dropped followed by create and insert-into with different partition type.

2018-05-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19435:

Attachment: HIVE-19435.02.patch

> Incremental replication cause data loss if a table is dropped followed by 
> create and insert-into with different partition type.
> ---
>
> Key: HIVE-19435
> URL: https://issues.apache.org/jira/browse/HIVE-19435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19435.01.patch, HIVE-19435.02.patch
>
>
> If the incremental dump has a drop of a partitioned table followed by a 
> create/insert on a non-partitioned table with the same name, the data doesn't 
> get replicated. Explained below.
> Let's say we have a partitioned table T1 which was already replicated to the 
> target.
> DROP_TABLE(T1) -> CREATE_TABLE(T1) (non-partitioned) -> INSERT(T1)(10)
> After REPL LOAD, T1 doesn't have any data.
> The same holds for the non-partitioned to partitioned and the partition spec 
> mismatch cases as well.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19435) Incremental replication cause data loss if a table is dropped followed by create and insert-into with different partition type.

2018-05-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19435:

Status: Patch Available  (was: Open)

Added 02.patch after rebasing with master.

> Incremental replication cause data loss if a table is dropped followed by 
> create and insert-into with different partition type.
> ---
>
> Key: HIVE-19435
> URL: https://issues.apache.org/jira/browse/HIVE-19435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19435.01.patch, HIVE-19435.02.patch
>
>
> If the incremental dump has a drop of a partitioned table followed by a 
> create/insert on a non-partitioned table with the same name, the data doesn't 
> get replicated. Explained below.
> Let's say we have a partitioned table T1 which was already replicated to the 
> target.
> DROP_TABLE(T1) -> CREATE_TABLE(T1) (non-partitioned) -> INSERT(T1)(10)
> After REPL LOAD, T1 doesn't have any data.
> The same holds for the non-partitioned to partitioned and the partition spec 
> mismatch cases as well.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HIVE-19011) Druid Storage Handler returns conflicting results for Qtest druidmini_dynamic_partition.q

2018-05-07 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra resolved HIVE-19011.
---
Resolution: Invalid

> Druid Storage Handler returns conflicting results for Qtest 
> druidmini_dynamic_partition.q
> -
>
> Key: HIVE-19011
> URL: https://issues.apache.org/jira/browse/HIVE-19011
> Project: Hive
>  Issue Type: Sub-task
>  Components: Druid integration
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Blocker
>
> This git diff shows the conflicting results
> {code}
> diff --git 
> a/ql/src/test/results/clientpositive/druid/druidmini_dynamic_partition.q.out 
> b/ql/src/test/results/clientpositive/druid/druidmini_dynamic_partition.q.out
> index 714778ebfc..cea9b7535c 100644
> --- 
> a/ql/src/test/results/clientpositive/druid/druidmini_dynamic_partition.q.out
> +++ 
> b/ql/src/test/results/clientpositive/druid/druidmini_dynamic_partition.q.out
> @@ -243,7 +243,7 @@ POSTHOOK: query: SELECT  sum(cint), max(cbigint),  
> sum(cbigint), max(cint) FROM
>  POSTHOOK: type: QUERY
>  POSTHOOK: Input: default@druid_partitioned_table
>  POSTHOOK: Output: hdfs://### HDFS PATH ###
> -1408069801800  4139540644  10992545287 165393120
> +1408069801800  3272553822  10992545287 -648527473
>  PREHOOK: query: SELECT  sum(cint), max(cbigint),  sum(cbigint), max(cint) 
> FROM druid_partitioned_table_0
>  PREHOOK: type: QUERY
>  PREHOOK: Input: default@druid_partitioned_table_0
> @@ -429,7 +429,7 @@ POSTHOOK: query: SELECT sum(cint), max(cbigint),  
> sum(cbigint), max(cint) FROM d
>  POSTHOOK: type: QUERY
>  POSTHOOK: Input: default@druid_partitioned_table
>  POSTHOOK: Output: hdfs://### HDFS PATH ###
> -2857395071862  4139540644  -1661313883124  885815256
> +2857395071862  3728054572  -1661313883124  71894663
>  PREHOOK: query: EXPLAIN INSERT OVERWRITE TABLE druid_partitioned_table
>SELECT cast (`ctimestamp1` as timestamp with local time zone) as `__time`,
>  cstring1,
> @@ -566,7 +566,7 @@ POSTHOOK: query: SELECT sum(cint), max(cbigint),  
> sum(cbigint), max(cint) FROM d
>  POSTHOOK: type: QUERY
>  POSTHOOK: Input: default@druid_partitioned_table
>  POSTHOOK: Output: hdfs://### HDFS PATH ###
> -1408069801800  7115092987  10992545287 1232243564
> +1408069801800  4584782821  10992545287 -1808876374
>  PREHOOK: query: SELECT  sum(cint), max(cbigint),  sum(cbigint), max(cint) 
> FROM druid_partitioned_table_0
>  PREHOOK: type: QUERY
>  PREHOOK: Input: default@druid_partitioned_table_0
> @@ -659,7 +659,7 @@ POSTHOOK: query: SELECT sum(cint), max(cbigint),  
> sum(cbigint), max(cint) FROM d
>  POSTHOOK: type: QUERY
>  POSTHOOK: Input: default@druid_partitioned_table
>  POSTHOOK: Output: hdfs://### HDFS PATH ###
> -1408069801800  7115092987  10992545287 1232243564
> +1408069801800  4584782821  10992545287 -1808876374
>  PREHOOK: query: EXPLAIN SELECT  sum(cint), max(cbigint),  sum(cbigint), 
> max(cint)  FROM druid_max_size_partition
>  PREHOOK: type: QUERY
>  POSTHOOK: query: EXPLAIN SELECT  sum(cint), max(cbigint),  sum(cbigint), 
> max(cint)  FROM druid_max_size_partition
> @@ -758,7 +758,7 @@ POSTHOOK: query: SELECT sum(cint), max(cbigint),  
> sum(cbigint), max(cint) FROM d
>  POSTHOOK: type: QUERY
>  POSTHOOK: Input: default@druid_partitioned_table
>  POSTHOOK: Output: hdfs://### HDFS PATH ###
> -1408069801800  7115092987  10992545287 1232243564
> +1408069801800  4584782821  10992545287 -1808876374
>  PREHOOK: query: DROP TABLE druid_partitioned_table_0
>  PREHOOK: type: DROPTABLE
>  PREHOOK: Input: default@druid_partitioned_table_0
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19011) Druid Storage Handler returns conflicting results for Qtest druidmini_dynamic_partition.q

2018-05-07 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16466261#comment-16466261
 ] 

slim bouguerra commented on HIVE-19011:
---

This is not an issue, actually. After debugging, I found that min and max 
return non-deterministic results, since dynamic partitioning leads to different 
segments with differently rolled-up columns.
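
A toy model of the effect: with rollup, each segment stores pre-aggregated 
(summed) values, so min/max over the rolled-up column depends on how dynamic 
partitioning happened to group the raw rows into segments. The plain-Java 
sketch below (not Druid code) shows the same four rows producing different 
maxima under two groupings.

{code:java}
import java.util.Arrays;
import java.util.List;

public class RollupMinMaxSketch {
  // Roll each segment up to a single summed value, then take the max,
  // mimicking max() over a rolled-up metric column.
  static long maxOfRolledUp(List<long[]> segments) {
    return segments.stream()
        .mapToLong(seg -> Arrays.stream(seg).sum())
        .max()
        .getAsLong();
  }

  public static void main(String[] args) {
    // Same four raw rows {5, -3, 8, 2}, grouped into segments two different
    // ways, as dynamic partitioning might do across runs.
    List<long[]> segmentsA = Arrays.asList(new long[]{5, -3}, new long[]{8, 2});
    List<long[]> segmentsB = Arrays.asList(new long[]{5, 8}, new long[]{-3, 2});
    System.out.println(maxOfRolledUp(segmentsA)); // 10
    System.out.println(maxOfRolledUp(segmentsB)); // 13
  }
}
{code}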

> Druid Storage Handler returns conflicting results for Qtest 
> druidmini_dynamic_partition.q
> -
>
> Key: HIVE-19011
> URL: https://issues.apache.org/jira/browse/HIVE-19011
> Project: Hive
>  Issue Type: Sub-task
>  Components: Druid integration
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Blocker
>
> This git diff shows the conflicting results
> {code}
> diff --git 
> a/ql/src/test/results/clientpositive/druid/druidmini_dynamic_partition.q.out 
> b/ql/src/test/results/clientpositive/druid/druidmini_dynamic_partition.q.out
> index 714778ebfc..cea9b7535c 100644
> --- 
> a/ql/src/test/results/clientpositive/druid/druidmini_dynamic_partition.q.out
> +++ 
> b/ql/src/test/results/clientpositive/druid/druidmini_dynamic_partition.q.out
> @@ -243,7 +243,7 @@ POSTHOOK: query: SELECT  sum(cint), max(cbigint),  
> sum(cbigint), max(cint) FROM
>  POSTHOOK: type: QUERY
>  POSTHOOK: Input: default@druid_partitioned_table
>  POSTHOOK: Output: hdfs://### HDFS PATH ###
> -1408069801800  4139540644  10992545287 165393120
> +1408069801800  3272553822  10992545287 -648527473
>  PREHOOK: query: SELECT  sum(cint), max(cbigint),  sum(cbigint), max(cint) 
> FROM druid_partitioned_table_0
>  PREHOOK: type: QUERY
>  PREHOOK: Input: default@druid_partitioned_table_0
> @@ -429,7 +429,7 @@ POSTHOOK: query: SELECT sum(cint), max(cbigint),  
> sum(cbigint), max(cint) FROM d
>  POSTHOOK: type: QUERY
>  POSTHOOK: Input: default@druid_partitioned_table
>  POSTHOOK: Output: hdfs://### HDFS PATH ###
> -2857395071862  4139540644  -1661313883124  885815256
> +2857395071862  3728054572  -1661313883124  71894663
>  PREHOOK: query: EXPLAIN INSERT OVERWRITE TABLE druid_partitioned_table
>SELECT cast (`ctimestamp1` as timestamp with local time zone) as `__time`,
>  cstring1,
> @@ -566,7 +566,7 @@ POSTHOOK: query: SELECT sum(cint), max(cbigint),  
> sum(cbigint), max(cint) FROM d
>  POSTHOOK: type: QUERY
>  POSTHOOK: Input: default@druid_partitioned_table
>  POSTHOOK: Output: hdfs://### HDFS PATH ###
> -1408069801800  7115092987  10992545287 1232243564
> +1408069801800  4584782821  10992545287 -1808876374
>  PREHOOK: query: SELECT  sum(cint), max(cbigint),  sum(cbigint), max(cint) 
> FROM druid_partitioned_table_0
>  PREHOOK: type: QUERY
>  PREHOOK: Input: default@druid_partitioned_table_0
> @@ -659,7 +659,7 @@ POSTHOOK: query: SELECT sum(cint), max(cbigint),  
> sum(cbigint), max(cint) FROM d
>  POSTHOOK: type: QUERY
>  POSTHOOK: Input: default@druid_partitioned_table
>  POSTHOOK: Output: hdfs://### HDFS PATH ###
> -1408069801800  7115092987  10992545287 1232243564
> +1408069801800  4584782821  10992545287 -1808876374
>  PREHOOK: query: EXPLAIN SELECT  sum(cint), max(cbigint),  sum(cbigint), 
> max(cint)  FROM druid_max_size_partition
>  PREHOOK: type: QUERY
>  POSTHOOK: query: EXPLAIN SELECT  sum(cint), max(cbigint),  sum(cbigint), 
> max(cint)  FROM druid_max_size_partition
> @@ -758,7 +758,7 @@ POSTHOOK: query: SELECT sum(cint), max(cbigint),  
> sum(cbigint), max(cint) FROM d
>  POSTHOOK: type: QUERY
>  POSTHOOK: Input: default@druid_partitioned_table
>  POSTHOOK: Output: hdfs://### HDFS PATH ###
> -1408069801800  7115092987  10992545287 1232243564
> +1408069801800  4584782821  10992545287 -1808876374
>  PREHOOK: query: DROP TABLE druid_partitioned_table_0
>  PREHOOK: type: DROPTABLE
>  PREHOOK: Input: default@druid_partitioned_table_0
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-14388) Add number of rows inserted message after insert command in Beeline

2018-05-07 Thread Bharathkrishna Guruvayoor Murali (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharathkrishna Guruvayoor Murali updated HIVE-14388:

Attachment: HIVE-14388.08.patch

> Add number of rows inserted message after insert command in Beeline
> ---
>
> Key: HIVE-14388
> URL: https://issues.apache.org/jira/browse/HIVE-14388
> Project: Hive
>  Issue Type: Improvement
>  Components: Beeline
>Reporter: Vihang Karajgaonkar
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Minor
> Attachments: HIVE-14388-WIP.patch, HIVE-14388.02.patch, 
> HIVE-14388.03.patch, HIVE-14388.05.patch, HIVE-14388.06.patch, 
> HIVE-14388.07.patch, HIVE-14388.08.patch
>
>
> Currently, when you run an insert command in Beeline, it returns a message 
> saying "No rows affected ..".
> A better and more intuitive message would be "xxx rows inserted (26.068 seconds)".
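
For illustration, the message can be derived from the JDBC update count, 
provided HiveServer2 reports the real count. A sketch under that assumption; 
the host, table, and formatting are placeholders.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RowCountMessageSketch {
  // Format the message the way the issue proposes, from a JDBC update count.
  static String message(int updateCount, double seconds) {
    return updateCount < 0
        ? String.format("No rows affected (%.3f seconds)", seconds)  // today's output
        : String.format("%d rows inserted (%.3f seconds)", updateCount, seconds);
  }

  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection(
             "jdbc:hive2://hs2-host:10000/default", "user", "");
         Statement stmt = conn.createStatement()) {
      long start = System.nanoTime();
      int count = stmt.executeUpdate("INSERT INTO t VALUES (1), (2), (3)");
      double secs = (System.nanoTime() - start) / 1e9;
      // HiveServer2 must report the real affected-row count for this to print
      // "3 rows inserted (...)" rather than "No rows affected".
      System.out.println(message(count, secs));
    }
  }
}
{code}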



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19441) Add support for float aggregator and use LLAP test Driver

2018-05-07 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra reassigned HIVE-19441:
-


> Add support for float aggregator and use LLAP test Driver
> -
>
> Key: HIVE-19441
> URL: https://issues.apache.org/jira/browse/HIVE-19441
> Project: Hive
>  Issue Type: Bug
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
>
> Adding support for the float aggregator.
> Also use LLAP as the test driver to reduce the execution time of the tests from 
> about 2 hours to 15 minutes; Before/After timings below.
> This patch also unveils a timezone issue, which may be fixed by 
> [~jcamachorodriguez]'s upcoming set of patches.
>  
> Before
> {code}
> [INFO] Executed tasks
> [INFO]
> [INFO] --- maven-compiler-plugin:3.6.1:testCompile (default-testCompile) @ 
> hive-it-qfile ---
> [INFO] Compiling 21 source files to 
> /Users/sbouguerra/Hdev/hive/itests/qtest/target/test-classes
> [INFO]
> [INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hive-it-qfile 
> ---
> [INFO]
> [INFO] ---
> [INFO] T E S T S
> [INFO] ---
> [INFO] Running org.apache.hadoop.hive.cli.TestMiniDruidCliDriver
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 6,654.117 s - in org.apache.hadoop.hive.cli.TestMiniDruidCliDriver
> [INFO]
> [INFO] Results:
> [INFO]
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0
> [INFO]
> [INFO] 
> 
> [INFO] BUILD SUCCESS
> [INFO] 
> 
> [INFO] Total time: 01:51 h
> [INFO] Finished at: 2018-05-04T12:43:19-07:00
> [INFO] 
> 
> {code}
> After
> {code}
> INFO] Executed tasks
> [INFO]
> [INFO] --- maven-compiler-plugin:3.6.1:testCompile (default-testCompile) @ 
> hive-it-qfile ---
> [INFO] Compiling 22 source files to 
> /Users/sbouguerra/Hdev/hive/itests/qtest/target/test-classes
> [INFO]
> [INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hive-it-qfile 
> ---
> [INFO]
> [INFO] ---
> [INFO] T E S T S
> [INFO] ---
> [INFO] Running org.apache.hadoop.hive.cli.TestMiniDruidCliDriver
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 907.167 s - in org.apache.hadoop.hive.cli.TestMiniDruidCliDriver
> [INFO]
> [INFO] Results:
> [INFO]
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0
> [INFO]
> [INFO] 
> 
> [INFO] BUILD SUCCESS
> [INFO] 
> 
> [INFO] Total time: 15:31 min
> [INFO] Finished at: 2018-05-04T13:15:11-07:00
> [INFO] 
> 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19441) Add support for float aggregator and use LLAP test Driver

2018-05-07 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated HIVE-19441:
--
Component/s: Druid integration

> Add support for float aggregator and use LLAP test Driver
> -
>
> Key: HIVE-19441
> URL: https://issues.apache.org/jira/browse/HIVE-19441
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
>
> Adding support for the float aggregator.
> Also use LLAP as the test driver to reduce the execution time of the tests from 
> about 2 hours to 15 minutes; Before/After timings below.
> This patch also unveils a timezone issue, which may be fixed by 
> [~jcamachorodriguez]'s upcoming set of patches.
>  
> Before
> {code}
> [INFO] Executed tasks
> [INFO]
> [INFO] --- maven-compiler-plugin:3.6.1:testCompile (default-testCompile) @ 
> hive-it-qfile ---
> [INFO] Compiling 21 source files to 
> /Users/sbouguerra/Hdev/hive/itests/qtest/target/test-classes
> [INFO]
> [INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hive-it-qfile 
> ---
> [INFO]
> [INFO] ---
> [INFO] T E S T S
> [INFO] ---
> [INFO] Running org.apache.hadoop.hive.cli.TestMiniDruidCliDriver
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 6,654.117 s - in org.apache.hadoop.hive.cli.TestMiniDruidCliDriver
> [INFO]
> [INFO] Results:
> [INFO]
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0
> [INFO]
> [INFO] 
> 
> [INFO] BUILD SUCCESS
> [INFO] 
> 
> [INFO] Total time: 01:51 h
> [INFO] Finished at: 2018-05-04T12:43:19-07:00
> [INFO] 
> 
> {code}
> After
> {code}
> INFO] Executed tasks
> [INFO]
> [INFO] --- maven-compiler-plugin:3.6.1:testCompile (default-testCompile) @ 
> hive-it-qfile ---
> [INFO] Compiling 22 source files to 
> /Users/sbouguerra/Hdev/hive/itests/qtest/target/test-classes
> [INFO]
> [INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hive-it-qfile 
> ---
> [INFO]
> [INFO] ---
> [INFO] T E S T S
> [INFO] ---
> [INFO] Running org.apache.hadoop.hive.cli.TestMiniDruidCliDriver
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 907.167 s - in org.apache.hadoop.hive.cli.TestMiniDruidCliDriver
> [INFO]
> [INFO] Results:
> [INFO]
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0
> [INFO]
> [INFO] 
> 
> [INFO] BUILD SUCCESS
> [INFO] 
> 
> [INFO] Total time: 15:31 min
> [INFO] Finished at: 2018-05-04T13:15:11-07:00
> [INFO] 
> 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19441) Add support for float aggregator and use LLAP test Driver

2018-05-07 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated HIVE-19441:
--
Status: Patch Available  (was: Open)

> Add support for float aggregator and use LLAP test Driver
> -
>
> Key: HIVE-19441
> URL: https://issues.apache.org/jira/browse/HIVE-19441
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
>
> Adding support for the float aggregator.
> Also use LLAP as the test driver to reduce the execution time of the tests from 
> about 2 hours to 15 minutes; Before/After timings below.
> This patch also unveils a timezone issue, which may be fixed by 
> [~jcamachorodriguez]'s upcoming set of patches.
>  
> Before
> {code}
> [INFO] Executed tasks
> [INFO]
> [INFO] --- maven-compiler-plugin:3.6.1:testCompile (default-testCompile) @ 
> hive-it-qfile ---
> [INFO] Compiling 21 source files to 
> /Users/sbouguerra/Hdev/hive/itests/qtest/target/test-classes
> [INFO]
> [INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hive-it-qfile 
> ---
> [INFO]
> [INFO] ---
> [INFO] T E S T S
> [INFO] ---
> [INFO] Running org.apache.hadoop.hive.cli.TestMiniDruidCliDriver
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 6,654.117 s - in org.apache.hadoop.hive.cli.TestMiniDruidCliDriver
> [INFO]
> [INFO] Results:
> [INFO]
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0
> [INFO]
> [INFO] 
> 
> [INFO] BUILD SUCCESS
> [INFO] 
> 
> [INFO] Total time: 01:51 h
> [INFO] Finished at: 2018-05-04T12:43:19-07:00
> [INFO] 
> 
> {code}
> After
> {code}
> INFO] Executed tasks
> [INFO]
> [INFO] --- maven-compiler-plugin:3.6.1:testCompile (default-testCompile) @ 
> hive-it-qfile ---
> [INFO] Compiling 22 source files to 
> /Users/sbouguerra/Hdev/hive/itests/qtest/target/test-classes
> [INFO]
> [INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hive-it-qfile 
> ---
> [INFO]
> [INFO] ---
> [INFO] T E S T S
> [INFO] ---
> [INFO] Running org.apache.hadoop.hive.cli.TestMiniDruidCliDriver
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 907.167 s - in org.apache.hadoop.hive.cli.TestMiniDruidCliDriver
> [INFO]
> [INFO] Results:
> [INFO]
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0
> [INFO]
> [INFO] 
> 
> [INFO] BUILD SUCCESS
> [INFO] 
> 
> [INFO] Total time: 15:31 min
> [INFO] Finished at: 2018-05-04T13:15:11-07:00
> [INFO] 
> 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19435) Incremental replication cause data loss if a table is dropped followed by create and insert-into with different partition type.

2018-05-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19435:

Status: Patch Available  (was: Open)

> Incremental replication cause data loss if a table is dropped followed by 
> create and insert-into with different partition type.
> ---
>
> Key: HIVE-19435
> URL: https://issues.apache.org/jira/browse/HIVE-19435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19435.01.patch, HIVE-19435.02.patch
>
>
> If the incremental dump has a drop of a partitioned table followed by a 
> create/insert on a non-partitioned table with the same name, the data doesn't 
> get replicated. Explained below.
> Let's say we have a partitioned table T1 which was already replicated to the 
> target.
> DROP_TABLE(T1) -> CREATE_TABLE(T1) (non-partitioned) -> INSERT(T1)(10)
> After REPL LOAD, T1 doesn't have any data.
> The same holds for the non-partitioned to partitioned and the partition spec 
> mismatch cases as well.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19435) Incremental replication cause data loss if a table is dropped followed by create and insert-into with different partition type.

2018-05-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19435:

Attachment: HIVE-19435.02.patch

> Incremental replication cause data loss if a table is dropped followed by 
> create and insert-into with different partition type.
> ---
>
> Key: HIVE-19435
> URL: https://issues.apache.org/jira/browse/HIVE-19435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19435.01.patch, HIVE-19435.02.patch
>
>
> If the incremental dump has a drop of a partitioned table followed by a 
> create/insert on a non-partitioned table with the same name, the data doesn't 
> get replicated. Explained below.
> Let's say we have a partitioned table T1 which was already replicated to the 
> target.
> DROP_TABLE(T1) -> CREATE_TABLE(T1) (non-partitioned) -> INSERT(T1)(10)
> After REPL LOAD, T1 doesn't have any data.
> The same holds for the non-partitioned to partitioned and the partition spec 
> mismatch cases as well.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19441) Add support for float aggregator and use LLAP test Driver

2018-05-07 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated HIVE-19441:
--
Attachment: HIVE-19441.patch

> Add support for float aggregator and use LLAP test Driver
> -
>
> Key: HIVE-19441
> URL: https://issues.apache.org/jira/browse/HIVE-19441
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Attachments: HIVE-19441.patch
>
>
> Adding support for the float aggregator.
> Also use LLAP as the test driver to reduce the execution time of the tests from 
> about 2 hours to 15 minutes; Before/After timings below.
> This patch also unveils a timezone issue, which may be fixed by 
> [~jcamachorodriguez]'s upcoming set of patches.
>  
> Before
> {code}
> [INFO] Executed tasks
> [INFO]
> [INFO] --- maven-compiler-plugin:3.6.1:testCompile (default-testCompile) @ 
> hive-it-qfile ---
> [INFO] Compiling 21 source files to 
> /Users/sbouguerra/Hdev/hive/itests/qtest/target/test-classes
> [INFO]
> [INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hive-it-qfile 
> ---
> [INFO]
> [INFO] ---
> [INFO] T E S T S
> [INFO] ---
> [INFO] Running org.apache.hadoop.hive.cli.TestMiniDruidCliDriver
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 6,654.117 s - in org.apache.hadoop.hive.cli.TestMiniDruidCliDriver
> [INFO]
> [INFO] Results:
> [INFO]
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0
> [INFO]
> [INFO] 
> 
> [INFO] BUILD SUCCESS
> [INFO] 
> 
> [INFO] Total time: 01:51 h
> [INFO] Finished at: 2018-05-04T12:43:19-07:00
> [INFO] 
> 
> {code}
> After
> {code}
> INFO] Executed tasks
> [INFO]
> [INFO] --- maven-compiler-plugin:3.6.1:testCompile (default-testCompile) @ 
> hive-it-qfile ---
> [INFO] Compiling 22 source files to 
> /Users/sbouguerra/Hdev/hive/itests/qtest/target/test-classes
> [INFO]
> [INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hive-it-qfile 
> ---
> [INFO]
> [INFO] ---
> [INFO] T E S T S
> [INFO] ---
> [INFO] Running org.apache.hadoop.hive.cli.TestMiniDruidCliDriver
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 907.167 s - in org.apache.hadoop.hive.cli.TestMiniDruidCliDriver
> [INFO]
> [INFO] Results:
> [INFO]
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0
> [INFO]
> [INFO] 
> 
> [INFO] BUILD SUCCESS
> [INFO] 
> 
> [INFO] Total time: 15:31 min
> [INFO] Finished at: 2018-05-04T13:15:11-07:00
> [INFO] 
> 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19435) Incremental replication cause data loss if a table is dropped followed by create and insert-into with different partition type.

2018-05-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19435:

Attachment: (was: HIVE-19435.02.patch)

> Incremental replication cause data loss if a table is dropped followed by 
> create and insert-into with different partition type.
> ---
>
> Key: HIVE-19435
> URL: https://issues.apache.org/jira/browse/HIVE-19435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19435.01.patch, HIVE-19435.02.patch
>
>
> If the incremental dump has a drop of a partitioned table followed by a 
> create/insert on a non-partitioned table with the same name, the data is not 
> replicated. Explained below.
> Let's say we have a partitioned table T1 which was already replicated to 
> target.
> DROP_TABLE(T1)->CREATE_TABLE(T1) (Non-partitioned) -> INSERT(T1)(10) 
> After REPL LOAD, T1 doesn't have any data.
> The same is valid for the non-partitioned to partitioned and partition spec 
> mismatch cases as well.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19435) Incremental replication cause data loss if a table is dropped followed by create and insert-into with different partition type.

2018-05-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19435:

Status: Open  (was: Patch Available)

> Incremental replication cause data loss if a table is dropped followed by 
> create and insert-into with different partition type.
> ---
>
> Key: HIVE-19435
> URL: https://issues.apache.org/jira/browse/HIVE-19435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19435.01.patch, HIVE-19435.02.patch
>
>
> If the incremental dump has a drop of a partitioned table followed by a 
> create/insert on a non-partitioned table with the same name, the data is not 
> replicated. Explained below.
> Let's say we have a partitioned table T1 which was already replicated to 
> target.
> DROP_TABLE(T1)->CREATE_TABLE(T1) (Non-partitioned) -> INSERT(T1)(10) 
> After REPL LOAD, T1 doesn't have any data.
> The same is valid for the non-partitioned to partitioned and partition spec 
> mismatch cases as well.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19441) Add support for float aggregator and use LLAP test Driver

2018-05-07 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated HIVE-19441:
--
Issue Type: Improvement  (was: Bug)

> Add support for float aggregator and use LLAP test Driver
> -
>
> Key: HIVE-19441
> URL: https://issues.apache.org/jira/browse/HIVE-19441
> Project: Hive
>  Issue Type: Improvement
>  Components: Druid integration
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Attachments: HIVE-19441.patch
>
>
> Adds support for the float aggregator.
> Uses LLAP as the test driver to reduce the execution time of the tests from 
> about 2 hours to 15 minutes.
> Although this patch unveils an issue with timezones, it may be fixed by 
> [~jcamachorodriguez]'s upcoming set of patches.
>  
> Before
> {code}
> [INFO] Executed tasks
> [INFO]
> [INFO] --- maven-compiler-plugin:3.6.1:testCompile (default-testCompile) @ 
> hive-it-qfile ---
> [INFO] Compiling 21 source files to 
> /Users/sbouguerra/Hdev/hive/itests/qtest/target/test-classes
> [INFO]
> [INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hive-it-qfile 
> ---
> [INFO]
> [INFO] ---
> [INFO] T E S T S
> [INFO] ---
> [INFO] Running org.apache.hadoop.hive.cli.TestMiniDruidCliDriver
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 6,654.117 s - in org.apache.hadoop.hive.cli.TestMiniDruidCliDriver
> [INFO]
> [INFO] Results:
> [INFO]
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0
> [INFO]
> [INFO] 
> 
> [INFO] BUILD SUCCESS
> [INFO] 
> 
> [INFO] Total time: 01:51 h
> [INFO] Finished at: 2018-05-04T12:43:19-07:00
> [INFO] 
> 
> {code}
> After
> {code}
> INFO] Executed tasks
> [INFO]
> [INFO] --- maven-compiler-plugin:3.6.1:testCompile (default-testCompile) @ 
> hive-it-qfile ---
> [INFO] Compiling 22 source files to 
> /Users/sbouguerra/Hdev/hive/itests/qtest/target/test-classes
> [INFO]
> [INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hive-it-qfile 
> ---
> [INFO]
> [INFO] ---
> [INFO] T E S T S
> [INFO] ---
> [INFO] Running org.apache.hadoop.hive.cli.TestMiniDruidCliDriver
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 907.167 s - in org.apache.hadoop.hive.cli.TestMiniDruidCliDriver
> [INFO]
> [INFO] Results:
> [INFO]
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0
> [INFO]
> [INFO] 
> 
> [INFO] BUILD SUCCESS
> [INFO] 
> 
> [INFO] Total time: 15:31 min
> [INFO] Finished at: 2018-05-04T13:15:11-07:00
> [INFO] 
> 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19441) Add support for float aggregator and use LLAP test Driver

2018-05-07 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16466284#comment-16466284
 ] 

slim bouguerra commented on HIVE-19441:
---

[~ashutoshc], can you please take a look?

> Add support for float aggregator and use LLAP test Driver
> -
>
> Key: HIVE-19441
> URL: https://issues.apache.org/jira/browse/HIVE-19441
> Project: Hive
>  Issue Type: Improvement
>  Components: Druid integration
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Attachments: HIVE-19441.patch
>
>
> Adds support for the float aggregator.
> Uses LLAP as the test driver to reduce the execution time of the tests from 
> about 2 hours to 15 minutes.
> Although this patch unveils an issue with timezones, it may be fixed by 
> [~jcamachorodriguez]'s upcoming set of patches.
>  
> Before
> {code}
> [INFO] Executed tasks
> [INFO]
> [INFO] --- maven-compiler-plugin:3.6.1:testCompile (default-testCompile) @ 
> hive-it-qfile ---
> [INFO] Compiling 21 source files to 
> /Users/sbouguerra/Hdev/hive/itests/qtest/target/test-classes
> [INFO]
> [INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hive-it-qfile 
> ---
> [INFO]
> [INFO] ---
> [INFO] T E S T S
> [INFO] ---
> [INFO] Running org.apache.hadoop.hive.cli.TestMiniDruidCliDriver
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 6,654.117 s - in org.apache.hadoop.hive.cli.TestMiniDruidCliDriver
> [INFO]
> [INFO] Results:
> [INFO]
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0
> [INFO]
> [INFO] 
> 
> [INFO] BUILD SUCCESS
> [INFO] 
> 
> [INFO] Total time: 01:51 h
> [INFO] Finished at: 2018-05-04T12:43:19-07:00
> [INFO] 
> 
> {code}
> After
> {code}
> INFO] Executed tasks
> [INFO]
> [INFO] --- maven-compiler-plugin:3.6.1:testCompile (default-testCompile) @ 
> hive-it-qfile ---
> [INFO] Compiling 22 source files to 
> /Users/sbouguerra/Hdev/hive/itests/qtest/target/test-classes
> [INFO]
> [INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hive-it-qfile 
> ---
> [INFO]
> [INFO] ---
> [INFO] T E S T S
> [INFO] ---
> [INFO] Running org.apache.hadoop.hive.cli.TestMiniDruidCliDriver
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 907.167 s - in org.apache.hadoop.hive.cli.TestMiniDruidCliDriver
> [INFO]
> [INFO] Results:
> [INFO]
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0
> [INFO]
> [INFO] 
> 
> [INFO] BUILD SUCCESS
> [INFO] 
> 
> [INFO] Total time: 15:31 min
> [INFO] Finished at: 2018-05-04T13:15:11-07:00
> [INFO] 
> 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18748) Rename table impacts the ACID behaviour as table names are not updated in meta-tables.

2018-05-07 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman reassigned HIVE-18748:
-

Assignee: Eugene Koifman  (was: Sankar Hariappan)

> Rename table impacts the ACID behaviour as table names are not updated in 
> meta-tables.
> --
>
> Key: HIVE-18748
> URL: https://issues.apache.org/jira/browse/HIVE-18748
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Eugene Koifman
>Priority: Major
>  Labels: ACID, DDL
> Fix For: 3.1.0
>
>
> The ACID implementation uses metatables such as TXN_COMPONENTS, 
> COMPLETED_TXN_COMPONENTS, COMPACTION_QUEUE, COMPLETED_COMPACTION_QUEUE, etc. 
> to manage ACID operations.
> The per-table write ID implementation (HIVE-18192) introduces a couple of 
> metatables, NEXT_WRITE_ID and TXN_TO_WRITE_ID, to manage the write IDs 
> allocated per table.
> Now, when we rename a table, it is necessary to update the corresponding 
> table name in these metatables as well; otherwise, ACID table operations 
> won't work properly (see the sketch below).
> Since this change is significant and has other side-effects, we propose to 
> disable renaming of ACID tables until a fix is figured out.
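> A hedged illustration of the failure mode (table names are illustrative):
> {code}
> -- acid_t is transactional; its allocated write IDs are tracked in
> -- TXN_TO_WRITE_ID / NEXT_WRITE_ID, keyed by database and table name
> ALTER TABLE acid_t RENAME TO acid_t2;
> -- the metatable rows still reference "acid_t", so subsequent ACID
> -- operations on "acid_t2" cannot resolve their write ID state
> {code}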



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18748) Rename table impacts the ACID behaviour as table names are not updated in meta-tables.

2018-05-07 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18748:
--
Priority: Critical  (was: Major)

> Rename table impacts the ACID behaviour as table names are not updated in 
> meta-tables.
> --
>
> Key: HIVE-18748
> URL: https://issues.apache.org/jira/browse/HIVE-18748
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Eugene Koifman
>Priority: Critical
>  Labels: ACID, DDL
> Fix For: 3.1.0
>
>
> The ACID implementation uses metatables such as TXN_COMPONENTS, 
> COMPLETED_TXN_COMPONENTS, COMPACTION_QUEUE, COMPLETED_COMPACTION_QUEUE, etc. 
> to manage ACID operations.
> The per-table write ID implementation (HIVE-18192) introduces a couple of 
> metatables, NEXT_WRITE_ID and TXN_TO_WRITE_ID, to manage the write IDs 
> allocated per table.
> Now, when we rename a table, it is necessary to update the corresponding 
> table name in these metatables as well; otherwise, ACID table operations 
> won't work properly.
> Since this change is significant and has other side-effects, we propose to 
> disable renaming of ACID tables until a fix is figured out.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19267) Create/Replicate ACID Write event

2018-05-07 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19267:
---
Target Version/s:   (was: 3.0.0)

> Create/Replicate ACID Write event
> -
>
> Key: HIVE-19267
> URL: https://issues.apache.org/jira/browse/HIVE-19267
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl, Transactions
>Affects Versions: 3.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: ACID, DR, pull-request-available, replication
> Attachments: HIVE-19267.01.patch, HIVE-19267.02.patch, 
> HIVE-19267.03.patch, HIVE-19267.04.patch
>
>
>  
> h1. Replicate ACID write Events
>  * Create a new EVENT_WRITE event with a related message format to log the 
> write operations within a txn along with the associated data.
>  * Log this event when performing any writes (insert into, insert overwrite, 
> load table, delete, update, merge, truncate) on a table/partition.
>  * If a single MERGE/UPDATE/INSERT/DELETE statement operates on multiple 
> partitions, then one event needs to be logged per partition (see the sketch 
> after this list).
>  * DbNotificationListener should log this type of event to a special 
> metastore table named "MTxnWriteNotificationLog".
>  * This table should maintain a map of txn ID to the list of 
> tables/partitions written by the given txn.
>  * The entry for a given txn should be removed by the cleaner thread that 
> removes the expired events from EventNotificationTable.
> h1. Replicate Commit Txn operation (with writes)
> Add a new EVENT_COMMIT_TXN event to log the metadata/data of all 
> tables/partitions modified within the txn.
> *Source warehouse:*
>  * This event should read the EVENT_WRITEs from the 
> "MTxnWriteNotificationLog" metastore table to consolidate the list of 
> tables/partitions modified within this txn scope.
>  * Based on the list of tables/partitions modified and the table write ID, 
> the list of delta files added by this txn needs to be computed.
>  * Repl dump should read this message and dump the metadata and the list of 
> delta files.
> *Target warehouse:*
>  * Ensure snapshot isolation at the target for on-going read txns, which 
> shouldn't see the data replicated from the committed txn (ensured with the 
> open and allocate write ID events).
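> A minimal sketch of a statement that would log one EVENT_WRITE per partition 
> (table, columns, and partition values are illustrative):
> {code}
> -- a single insert touching two partitions of an ACID table
> SET hive.exec.dynamic.partition.mode=nonstrict;
> INSERT INTO sales PARTITION (ds) VALUES
>   (1, 'a', '2018-05-01'),
>   (2, 'b', '2018-05-02');
> -- expected: two EVENT_WRITE entries for this txn in
> -- MTxnWriteNotificationLog, one per partition written
> {code}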



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19267) Create/Replicate ACID Write event

2018-05-07 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19267:
---
Fix Version/s: (was: 3.0.0)

> Create/Replicate ACID Write event
> -
>
> Key: HIVE-19267
> URL: https://issues.apache.org/jira/browse/HIVE-19267
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl, Transactions
>Affects Versions: 3.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: ACID, DR, pull-request-available, replication
> Attachments: HIVE-19267.01.patch, HIVE-19267.02.patch, 
> HIVE-19267.03.patch, HIVE-19267.04.patch
>
>
>  
> h1. Replicate ACID write Events
>  * Create a new EVENT_WRITE event with a related message format to log the 
> write operations within a txn along with the associated data.
>  * Log this event when performing any writes (insert into, insert overwrite, 
> load table, delete, update, merge, truncate) on a table/partition.
>  * If a single MERGE/UPDATE/INSERT/DELETE statement operates on multiple 
> partitions, then one event needs to be logged per partition.
>  * DbNotificationListener should log this type of event to a special 
> metastore table named "MTxnWriteNotificationLog".
>  * This table should maintain a map of txn ID to the list of 
> tables/partitions written by the given txn.
>  * The entry for a given txn should be removed by the cleaner thread that 
> removes the expired events from EventNotificationTable.
> h1. Replicate Commit Txn operation (with writes)
> Add a new EVENT_COMMIT_TXN event to log the metadata/data of all 
> tables/partitions modified within the txn.
> *Source warehouse:*
>  * This event should read the EVENT_WRITEs from the 
> "MTxnWriteNotificationLog" metastore table to consolidate the list of 
> tables/partitions modified within this txn scope.
>  * Based on the list of tables/partitions modified and the table write ID, 
> the list of delta files added by this txn needs to be computed.
>  * Repl dump should read this message and dump the metadata and the list of 
> delta files.
> *Target warehouse:*
>  * Ensure snapshot isolation at the target for on-going read txns, which 
> shouldn't see the data replicated from the committed txn (ensured with the 
> open and allocate write ID events).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19267) Create/Replicate ACID Write event

2018-05-07 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16466292#comment-16466292
 ] 

Vineet Garg commented on HIVE-19267:


Pushing this out of 3.0.0 and deferring it to the next release.

> Create/Replicate ACID Write event
> -
>
> Key: HIVE-19267
> URL: https://issues.apache.org/jira/browse/HIVE-19267
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl, Transactions
>Affects Versions: 3.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: ACID, DR, pull-request-available, replication
> Attachments: HIVE-19267.01.patch, HIVE-19267.02.patch, 
> HIVE-19267.03.patch, HIVE-19267.04.patch
>
>
>  
> h1. Replicate ACID write Events
>  * Create a new EVENT_WRITE event with a related message format to log the 
> write operations within a txn along with the associated data.
>  * Log this event when performing any writes (insert into, insert overwrite, 
> load table, delete, update, merge, truncate) on a table/partition.
>  * If a single MERGE/UPDATE/INSERT/DELETE statement operates on multiple 
> partitions, then one event needs to be logged per partition.
>  * DbNotificationListener should log this type of event to a special 
> metastore table named "MTxnWriteNotificationLog".
>  * This table should maintain a map of txn ID to the list of 
> tables/partitions written by the given txn.
>  * The entry for a given txn should be removed by the cleaner thread that 
> removes the expired events from EventNotificationTable.
> h1. Replicate Commit Txn operation (with writes)
> Add a new EVENT_COMMIT_TXN event to log the metadata/data of all 
> tables/partitions modified within the txn.
> *Source warehouse:*
>  * This event should read the EVENT_WRITEs from the 
> "MTxnWriteNotificationLog" metastore table to consolidate the list of 
> tables/partitions modified within this txn scope.
>  * Based on the list of tables/partitions modified and the table write ID, 
> the list of delta files added by this txn needs to be computed.
>  * Repl dump should read this message and dump the metadata and the list of 
> delta files.
> *Target warehouse:*
>  * Ensure snapshot isolation at the target for on-going read txns, which 
> shouldn't see the data replicated from the committed txn (ensured with the 
> open and allocate write ID events).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19418) add background stats updater similar to compactor

2018-05-07 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16466290#comment-16466290
 ] 

Sergey Shelukhin commented on HIVE-19418:
-

 HIVE-19442 for phase 4

> add background stats updater similar to compactor
> -
>
> Key: HIVE-19418
> URL: https://issues.apache.org/jira/browse/HIVE-19418
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
>
> There's a JIRA, HIVE-19416, to add a snapshot version to stats for MM/ACID 
> tables to make them usable in a transaction without breaking ACID (for the 
> metadata-only optimization). However, stats for ACID tables can still become 
> unusable if e.g. two parallel inserts run - neither sees the data written by 
> the other, so after both finish, the snapshots on either set of stats won't 
> match the current snapshot, making the stats unusable.
> Additionally, for ACID and non-ACID tables alike, a lot of the stats, with 
> some exceptions like numRows, cannot be aggregated (i.e. you cannot combine 
> ndvs from two inserts), and for ACID even fewer can be aggregated (you 
> cannot derive min/max if some rows are deleted but you don't scan the rest 
> of the dataset).
> Therefore, we will add background logic to the metastore (similar to, and 
> partially inside, the ACID compactor) to update stats.
> It will have 3 modes of operation.
> 1) Off.
> 2) Update only the stats that exist but are out of date (generating stats can 
> be expensive, so if the user is only analyzing a subset of tables it should 
> be able to only update that subset). We can simply look at existing stats and 
> only analyze for the relevant partitions and columns.
> 3) On: 2 + create stats for all tables and columns missing stats.
> There will also be a table parameter to skip stats update. 
> In phase 1, the process will operate outside of the compactor and run an 
> analyze command on the table. The analyze command will automatically save 
> the stats with ACID snapshot information if needed, based on HIVE-19416, so 
> we don't need to do any special state management, and this will work for all 
> table types. However, it's also more expensive (see the sketch below).
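> A hedged sketch of the phase-1 analyze commands (table name is 
> illustrative):
> {code}
> -- recompute basic and column stats; per HIVE-19416 the results are saved
> -- together with the ACID snapshot information when applicable
> ANALYZE TABLE acid_t COMPUTE STATISTICS;
> ANALYZE TABLE acid_t COMPUTE STATISTICS FOR COLUMNS;
> {code}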
> In phase 2, we can explore adding stats collection during MM compaction that 
> uses a temp table. If we don't have open writers during major compaction (so 
> we overwrite all of the data), the temp table stats can simply be copied over 
> to the main table with correct snapshot information, saving us a table scan.
> In phase 3, we can add custom stats collection logic to full ACID compactor 
> that is not query based, the same way as we'd do for (2). Alternatively we 
> can wait for ACID compactor to become query based and just reuse (2).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19340) Disable timeout of transactions opened by replication task at target cluster

2018-05-07 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19340:
---
Fix Version/s: (was: 3.0.0)
   3.1.0

> Disable timeout of transactions opened by replication task at target cluster
> 
>
> Key: HIVE-19340
> URL: https://issues.apache.org/jira/browse/HIVE-19340
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl, Transactions
>Affects Versions: 3.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: ACID, DR, pull-request-available, replication
> Attachments: HIVE-19340.01.patch
>
>
> The transactions opened by applying EVENT_OPEN_TXN should never be aborted 
> automatically due to a time-out. Aborting a transaction started by a 
> replication task may lead to an inconsistent state at the target, which 
> needs additional overhead to clean up. So, it is proposed to mark the 
> transactions opened by the replication task as special ones that shouldn't 
> be aborted if the heartbeat is lost. This ensures that all ABORT and COMMIT 
> events will always find the corresponding txn at the target to operate on.
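> A hedged sketch of the intended behavior (hive.txn.timeout is the standard 
> config; the replication exemption is what this proposal adds):
> {code}
> -- the metastore housekeeping thread aborts txns whose heartbeat has lapsed
> -- beyond hive.txn.timeout (default 300s); per this proposal, txns opened
> -- while applying EVENT_OPEN_TXN are flagged as replication txns and
> -- excluded from that abort sweep
> SHOW TRANSACTIONS;  -- repl-opened txns should remain OPEN without heartbeats
> {code}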



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19340) Disable timeout of transactions opened by replication task at target cluster

2018-05-07 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19340:
---
Target Version/s:   (was: 3.0.0)

> Disable timeout of transactions opened by replication task at target cluster
> 
>
> Key: HIVE-19340
> URL: https://issues.apache.org/jira/browse/HIVE-19340
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl, Transactions
>Affects Versions: 3.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: ACID, DR, pull-request-available, replication
> Attachments: HIVE-19340.01.patch
>
>
> The transactions opened by applying EVENT_OPEN_TXN should never be aborted 
> automatically due to a time-out. Aborting a transaction started by a 
> replication task may lead to an inconsistent state at the target, which 
> needs additional overhead to clean up. So, it is proposed to mark the 
> transactions opened by the replication task as special ones that shouldn't 
> be aborted if the heartbeat is lost. This ensures that all ABORT and COMMIT 
> events will always find the corresponding txn at the target to operate on.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19340) Disable timeout of transactions opened by replication task at target cluster

2018-05-07 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19340:
---
Fix Version/s: (was: 3.1.0)

> Disable timeout of transactions opened by replication task at target cluster
> 
>
> Key: HIVE-19340
> URL: https://issues.apache.org/jira/browse/HIVE-19340
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl, Transactions
>Affects Versions: 3.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: ACID, DR, pull-request-available, replication
> Attachments: HIVE-19340.01.patch
>
>
> The transactions opened by applying EVENT_OPEN_TXN should never be aborted 
> automatically due to a time-out. Aborting a transaction started by a 
> replication task may lead to an inconsistent state at the target, which 
> needs additional overhead to clean up. So, it is proposed to mark the 
> transactions opened by the replication task as special ones that shouldn't 
> be aborted if the heartbeat is lost. This ensures that all ABORT and COMMIT 
> events will always find the corresponding txn at the target to operate on.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

