[jira] [Updated] (HIVE-15880) Allow insert overwrite and truncate table query to use auto.purge table property

2017-04-01 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-15880:
--
Labels: TODOC2.3  (was: )

> Allow insert overwrite and truncate table query to use auto.purge table 
> property
> 
>
> Key: HIVE-15880
> URL: https://issues.apache.org/jira/browse/HIVE-15880
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>  Labels: TODOC2.3
> Fix For: 2.3.0, 3.0.0
>
> Attachments: HIVE-15880.01.patch, HIVE-15880.02.patch, 
> HIVE-15880.03.patch, HIVE-15880.04.patch, HIVE-15880.05.patch, 
> HIVE-15880.06.patch
>
>
> It seems inconsistent that the auto.purge property is not considered when we 
> do an INSERT OVERWRITE, while it is when we do a DROP TABLE.
> DROP TABLE doesn't move table data to Trash when auto.purge is set to true:
> {noformat}
> > create table temp(col1 string, col2 string);
> No rows affected (0.064 seconds)
> > alter table temp set tblproperties('auto.purge'='true');
> No rows affected (0.083 seconds)
> > insert into temp values ('test', 'test'), ('test2', 'test2');
> No rows affected (25.473 seconds)
> # hdfs dfs -ls /user/hive/warehouse/temp
> Found 1 items
> -rwxrwxrwt   3 hive hive 22 2017-02-09 13:03 
> /user/hive/warehouse/temp/000000_0
> #
> > drop table temp;
> No rows affected (0.242 seconds)
> # hdfs dfs -ls /user/hive/warehouse/temp
> ls: `/user/hive/warehouse/temp': No such file or directory
> #
> # sudo -u hive hdfs dfs -ls /user/hive/.Trash/Current/user/hive/warehouse
> #
> {noformat}
> An INSERT OVERWRITE query moves the table data to Trash even when auto.purge 
> is set to true:
> {noformat}
> > create table temp(col1 string, col2 string);
> > alter table temp set tblproperties('auto.purge'='true');
> > insert into temp values ('test', 'test'), ('test2', 'test2');
> # hdfs dfs -ls /user/hive/warehouse/temp
> Found 1 items
> -rwxrwxrwt   3 hive hive 22 2017-02-09 13:07 
> /user/hive/warehouse/temp/000000_0
> #
> > insert overwrite table temp select * from dummy;
> # hdfs dfs -ls /user/hive/warehouse/temp
> Found 1 items
> -rwxrwxrwt   3 hive hive 26 2017-02-09 13:08 
> /user/hive/warehouse/temp/000000_0
> # sudo -u hive hdfs dfs -ls /user/hive/.Trash/Current/user/hive/warehouse
> Found 1 items
> drwx--   - hive hive  0 2017-02-09 13:08 
> /user/hive/.Trash/Current/user/hive/warehouse/temp
> #
> {noformat}
> While move operations are not very costly on HDFS, they can be a significant 
> overhead on slow filesystems like S3. Skipping the move to Trash could improve 
> the performance of {{INSERT OVERWRITE TABLE}} queries, especially when there 
> are a large number of partitions on tables located on S3, should the user wish 
> to set the auto.purge property to true.
> Similarly, a {{TRUNCATE TABLE}} query on a table with the {{auto.purge}} 
> property set to true should not move the data to Trash.
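A minimal HiveQL sketch of the intended behavior (the table and source names are illustrative, not taken from the patch):

{code}
CREATE TABLE temp (col1 STRING, col2 STRING);
ALTER TABLE temp SET TBLPROPERTIES ('auto.purge'='true');

-- With auto.purge=true, the data replaced by the statements below should be
-- deleted directly instead of being moved to the Trash directory:
INSERT OVERWRITE TABLE temp SELECT * FROM dummy;
TRUNCATE TABLE temp;
{code}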



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-15923) Hive default partition causes errors in get partitions

2017-04-01 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15952523#comment-15952523
 ] 

Pengcheng Xiong commented on HIVE-15923:


LGTM +1. Just one follow-up question/problem. On the one hand, we are saying
{code}
DEFAULTPARTITIONNAME("hive.exec.default.partition.name", "__HIVE_DEFAULT_PARTITION__",
    "The default partition name in case the dynamic partition column value is null/empty string or any other values that cannot be escaped. \n" +
    "This value must not contain any special character used in HDFS URI (e.g., ':', '%', '/' etc). \n" +
    "The user has to be aware that the dynamic partition value should not contain this value to avoid confusions."),
{code}
On the other hand, a user can give a query like
{code}
alter table ptestfilter drop partition(c != '__HIVE_DEFAULT_PARTITION__');
{code}

I think this is inconsistent. Users should not be allowed to issue a query like 
that. They should instead give a query like
{code}
alter table ptestfilter drop partition(c is not null);
{code}
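For context, a hedged sketch of how rows land in the default partition in the first place (the {{src}} source table and its columns are assumed for illustration):

{code}
SET hive.exec.dynamic.partition.mode=nonstrict;
-- Rows whose value for the dynamic partition column c is NULL or empty are
-- written to the partition named by hive.exec.default.partition.name:
INSERT OVERWRITE TABLE ptestfilter PARTITION (c)
SELECT value, NULL FROM src;
-- SHOW PARTITIONS ptestfilter now lists c=__HIVE_DEFAULT_PARTITION__
{code}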

> Hive default partition causes errors in get partitions
> --
>
> Key: HIVE-15923
> URL: https://issues.apache.org/jira/browse/HIVE-15923
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Blocker
> Fix For: 2.3.0
>
> Attachments: HIVE-15923.01.patch, HIVE-15923.02.patch, 
> HIVE-15923.03.patch, HIVE-15923.patch
>
>
> This is the ORM error, direct SQL fails too before that, with a similar error.
> {noformat}
> 2017-02-14T17:45:11,158 ERROR [09fdd887-0164-4f55-97e9-4ba147d962be main] 
> metastore.ObjectStore:java.lang.ClassCastException: 
> org.apache.hadoop.hive.ql.plan.ExprNodeConstantDefaultDesc cannot be cast to 
> java.lang.Long
> at 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaLongObjectInspector.get(JavaLongObjectInspector.java:40)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorUtils.getDouble(PrimitiveObjectInspectorUtils.java:801)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorConverter$DoubleConverter.convert(PrimitiveObjectInspectorConverter.java:240) 
> ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPEqualOrGreaterThan.evaluate(GenericUDFOPEqualOrGreaterThan.java:145)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFBetween.evaluate(GenericUDFBetween.java:57)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator._evaluate(ExprNodeGenericFuncEvaluator.java:187)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:80)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator$DeferredExprObject.get(ExprNodeGenericFuncEvaluator.java:88)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPAnd.evaluate(GenericUDFOPAnd.java:63)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator._evaluate(ExprNodeGenericFuncEvaluator.java:187)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:80)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:68)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.optimizer.ppr.PartExprEvalUtils.evaluateExprOnPart(PartExprEvalUtils.java:126)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-15923) Hive default partition causes errors in get partitions

2017-04-01 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15952477#comment-15952477
 ] 

Pengcheng Xiong commented on HIVE-15923:


OK, I am taking a look right now.

> Hive default partition causes errors in get partitions
> --
>
> Key: HIVE-15923
> URL: https://issues.apache.org/jira/browse/HIVE-15923
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Blocker
> Fix For: 2.3.0
>
> Attachments: HIVE-15923.01.patch, HIVE-15923.02.patch, 
> HIVE-15923.03.patch, HIVE-15923.patch
>
>
> This is the ORM error, direct SQL fails too before that, with a similar error.
> {noformat}
> 2017-02-14T17:45:11,158 ERROR [09fdd887-0164-4f55-97e9-4ba147d962be main] 
> metastore.ObjectStore:java.lang.ClassCastException: 
> org.apache.hadoop.hive.ql.plan.ExprNodeConstantDefaultDesc cannot be cast to 
> java.lang.Long
> at 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaLongObjectInspector.get(JavaLongObjectInspector.java:40)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorUtils.getDouble(PrimitiveObjectInspectorUtils.java:801)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorConverter$DoubleConverter.convert(PrimitiveObjectInspectorConverter.java:240) 
> ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPEqualOrGreaterThan.evaluate(GenericUDFOPEqualOrGreaterThan.java:145)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFBetween.evaluate(GenericUDFBetween.java:57)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator._evaluate(ExprNodeGenericFuncEvaluator.java:187)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:80)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator$DeferredExprObject.get(ExprNodeGenericFuncEvaluator.java:88)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPAnd.evaluate(GenericUDFOPAnd.java:63)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator._evaluate(ExprNodeGenericFuncEvaluator.java:187)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:80)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:68)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.optimizer.ppr.PartExprEvalUtils.evaluateExprOnPart(PartExprEvalUtils.java:126)
>  ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HIVE-10161) LLAP: ORC file contains compression buffers larger than bufferSize (OR reader has a bug)

2017-04-01 Thread Harish (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15952474#comment-15952474
 ] 

Harish edited comment on HIVE-10161 at 4/2/17 12:21 AM:


[~sershe] I am having the same issue in Hive 1.2.1. Is this issue fixed in 1.2.1 
or a later version?
Scenario:
I have a partitioned Hive table (ORC) created in one cluster. I copied the ORC 
files from this cluster to Azure Data Lake using the Azure CLI. Once the copy 
was done, I created an external table using the SAME DDL from the source 
cluster/Hive. After repairing the table, when I query a few partitions I get 
the same error. Can you help me with this?

Hadoop version : 3.0 alpha 2



Caused by: java.lang.IllegalArgumentException: Buffer size too small. size = 
262144 needed = 7200075
at 
org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.readHeader(InStream.java:193)
at 
org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.read(InStream.java:238)
at java.io.InputStream.read(InputStream.java:101)
at com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:737)
at com.google.protobuf.CodedInputStream.isAtEnd(CodedInputStream.java:701)
at com.google.protobuf.CodedInputStream.readTag(CodedInputStream.java:99)
at 
org.apache.hadoop.hive.ql.io.orc.OrcProto$StripeFooter.<init>(OrcProto.java:10661)
at 
org.apache.hadoop.hive.ql.io.orc.OrcProto$StripeFooter.<init>(OrcProto.java:10625)
at 
org.apache.hadoop.hive.ql.io.orc.OrcProto$StripeFooter$1.parsePartialFrom(OrcProto.java:10730)
at 
org.apache.hadoop.hive.ql.io.orc.OrcProto$StripeFooter$1.parsePartialFrom(OrcProto.java:10725)
at com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:200)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:217)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:223)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
at 
org.apache.hadoop.hive.ql.io.orc.OrcProto$StripeFooter.parseFrom(OrcProto.java:10937)
at 
org.apache.hadoop.hive.ql.io.orc.MetadataReader.readStripeFooter(MetadataReader.java:113)
at 
org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.readStripeFooter(RecordReaderImpl.java:228)
at 
org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.beginReadStripe(RecordReaderImpl.java:805)
at 
org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.readStripe(RecordReaderImpl.java:776)
at 
org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.advanceStripe(RecordReaderImpl.java:986)
at 
org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.advanceToNextRow(RecordReaderImpl.java:1019)
at 
org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.next(RecordReaderImpl.java:1042)
at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:170)
at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:144)
at 
org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:350)




was (Author: harishk15):
[~sershe] I am having the same issue in Hive 1.2.1. Is this issue fixed in 1.2.1 
or a later version?
Scenario:
I have a partitioned Hive table (ORC) created in one cluster. I copied the ORC 
files from this cluster to Azure Data Lake using the Azure CLI. Once the copy 
was done, I created an external table using the SAME DDL from the source 
cluster/Hive. After repairing the table, when I query a few partitions I get 
the same error. Can you help me with this?

Hadoop version : 3.0 alpha 2



> LLAP: ORC file contains compression buffers larger than bufferSize (OR reader 
> has a bug)
> 
>
> Key: HIVE-10161
> URL: https://issues.apache.org/jira/browse/HIVE-10161
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: llap
>Reporter: Gopal V
>Assignee: Sergey Shelukhin
> Fix For: llap
>
>
> The EncodedReaderImpl will die when reading from the cache, when reading data 
> written by the regular ORC writer 
> {code}
> Caused by: java.io.IOException: java.lang.IllegalArgumentException: Buffer 
> size too small. size = 262144 needed = 3919246
> at 
> org.apache.hadoop.hive.llap.io.api.impl.LlapInputFormat$LlapRecordReader.rethrowErrorIfAny(LlapInputFormat.java:249)
> at 
> org.apache.hadoop.hive.llap.io.api.impl.LlapInputFormat$LlapRecordReader.nextCvb(LlapInputFormat.java:201)
> at 
> org.apache.hadoop.hive.llap.io.api.impl.LlapInputFormat$LlapRecordReader.next(LlapInputFormat.java:140)
> at 
> org.apache.hadoop.hive.llap.io.api.impl.LlapInputFormat$LlapRecordReader.next(LlapInputFormat.java:96)
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:350)
> ... 22 more
> Caused by: java.lang.IllegalArgumentException: Buffer size too small. size = 
> 262144 needed = 3919246

[jira] [Commented] (HIVE-10161) LLAP: ORC file contains compression buffers larger than bufferSize (OR reader has a bug)

2017-04-01 Thread Harish (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15952474#comment-15952474
 ] 

Harish commented on HIVE-10161:
---

[~sershe] I am having the same issue in Hive 1.2.1. Is this issue fixed in 1.2.1 
or a later version?
Scenario:
I have a partitioned Hive table (ORC) created in one cluster. I copied the ORC 
files from this cluster to Azure Data Lake using the Azure CLI. Once the copy 
was done, I created an external table using the SAME DDL from the source 
cluster/Hive. After repairing the table, when I query a few partitions I get 
the same error. Can you help me with this?

Hadoop version : 3.0 alpha 2
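A hedged HiveQL reconstruction of the reported scenario (the table, column, and path names are assumptions, not taken from the report):

{code}
-- Same DDL as on the source cluster, pointed at the copied ORC files:
CREATE EXTERNAL TABLE orc_tbl (id INT, val STRING)
PARTITIONED BY (dt STRING)
STORED AS ORC
LOCATION 'adl://example.azuredatalakestore.net/data/orc_tbl';

MSCK REPAIR TABLE orc_tbl;  -- "repairing the table"

-- Querying some partitions then fails with:
--   IllegalArgumentException: Buffer size too small. size = 262144 needed = 7200075
SELECT * FROM orc_tbl WHERE dt = '2017-01-01';
{code}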



> LLAP: ORC file contains compression buffers larger than bufferSize (OR reader 
> has a bug)
> 
>
> Key: HIVE-10161
> URL: https://issues.apache.org/jira/browse/HIVE-10161
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: llap
>Reporter: Gopal V
>Assignee: Sergey Shelukhin
> Fix For: llap
>
>
> The EncodedReaderImpl will die when reading from the cache, when reading data 
> written by the regular ORC writer 
> {code}
> Caused by: java.io.IOException: java.lang.IllegalArgumentException: Buffer 
> size too small. size = 262144 needed = 3919246
> at 
> org.apache.hadoop.hive.llap.io.api.impl.LlapInputFormat$LlapRecordReader.rethrowErrorIfAny(LlapInputFormat.java:249)
> at 
> org.apache.hadoop.hive.llap.io.api.impl.LlapInputFormat$LlapRecordReader.nextCvb(LlapInputFormat.java:201)
> at 
> org.apache.hadoop.hive.llap.io.api.impl.LlapInputFormat$LlapRecordReader.next(LlapInputFormat.java:140)
> at 
> org.apache.hadoop.hive.llap.io.api.impl.LlapInputFormat$LlapRecordReader.next(LlapInputFormat.java:96)
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:350)
> ... 22 more
> Caused by: java.lang.IllegalArgumentException: Buffer size too small. size = 
> 262144 needed = 3919246
> at 
> org.apache.hadoop.hive.ql.io.orc.InStream.addOneCompressionBuffer(InStream.java:780)
> at 
> org.apache.hadoop.hive.ql.io.orc.InStream.uncompressStream(InStream.java:628)
> at 
> org.apache.hadoop.hive.ql.io.orc.EncodedReaderImpl.readEncodedColumns(EncodedReaderImpl.java:309)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:278)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:48)
> at 
> org.apache.hadoop.hive.common.CallableWithNdc.call(CallableWithNdc.java:37)
> ... 4 more
> ]], Vertex failed as one or more tasks failed. failedTasks:1, Vertex 
> vertex_1424502260528_1945_1_00 [Map 1] killed/failed due to:null]
> {code}
> Turning off hive.llap.io.enabled makes the error go away.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-14550) HiveServer2: enable ThriftJDBCBinarySerde use by default

2017-04-01 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15952441#comment-15952441
 ] 

Pengcheng Xiong commented on HIVE-14550:


[~ziyangz], can you finish this by tomorrow? It is now blocking the 2.3 release.

> HiveServer2: enable ThriftJDBCBinarySerde use by default
> 
>
> Key: HIVE-14550
> URL: https://issues.apache.org/jira/browse/HIVE-14550
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, JDBC, ODBC
>Affects Versions: 2.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Ziyang Zhao
>Priority: Blocker
> Attachments: HIVE-14550.1.patch, HIVE-14550.2.patch
>
>
> We've covered all items in HIVE-12427 and created HIVE-14549 for part2 of the 
> effort. Before closing the umbrella jira, we should enable this feature by 
> default.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-14550) HiveServer2: enable ThriftJDBCBinarySerde use by default

2017-04-01 Thread Ziyang Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15952437#comment-15952437
 ] 

Ziyang Zhao commented on HIVE-14550:


[~pxiong] Hello, I am currently working on this issue.

> HiveServer2: enable ThriftJDBCBinarySerde use by default
> 
>
> Key: HIVE-14550
> URL: https://issues.apache.org/jira/browse/HIVE-14550
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, JDBC, ODBC
>Affects Versions: 2.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Ziyang Zhao
>Priority: Blocker
> Attachments: HIVE-14550.1.patch, HIVE-14550.2.patch
>
>
> We've covered all items in HIVE-12427 and created HIVE-14549 for part2 of the 
> effort. Before closing the umbrella jira, we should enable this feature by 
> default.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16346) inheritPerms should be conditional based on the target filesystem

2017-04-01 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15952433#comment-15952433
 ] 

Pengcheng Xiong commented on HIVE-16346:


Hello, I am deferring this to Hive 3.0 as we are going to cut the first RC and 
it is not marked as a blocker. Please feel free to commit to the branch if this 
can be resolved before the release.


> inheritPerms should be conditional based on the target filesystem
> -
>
> Key: HIVE-16346
> URL: https://issues.apache.org/jira/browse/HIVE-16346
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>
> Right now, a lot of the logic in {{Hive.java}} attempts to set permissions of 
> different files that have been moved / copied. This is only triggered if 
> {{hive.warehouse.subdir.inherit.perms}} is set to true.
> However, on blobstores such as S3 there is no concept of file permissions, so 
> these calls are unnecessary and could have a performance impact.
> One solution would be to set {{hive.warehouse.subdir.inherit.perms}} to 
> false, but this would be a global change that affects an entire HS2 instance, 
> so HDFS tables would no longer have permissions inheritance.
> A better solution would be to make the inheritance of permissions conditional 
> on the target filesystem.
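For illustration, the global workaround described above amounts to the following session-level setting; this sketch only restates the trade-off, it is not the proposed fix:

{code}
-- Disables permission inheritance everywhere, including on HDFS tables that
-- do want it; hence the need for a per-filesystem condition instead:
SET hive.warehouse.subdir.inherit.perms=false;
{code}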



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16340) Allow Kerberos + SSL connections to HMS

2017-04-01 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-16340:
---
Target Version/s: 3.0.0  (was: 2.3.0, 3.0.0)

> Allow Kerberos + SSL connections to HMS
> ---
>
> Key: HIVE-16340
> URL: https://issues.apache.org/jira/browse/HIVE-16340
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
> Attachments: HIVE-16340.1.patch
>
>
> It should be possible to connect to HMS with Kerberos authentication and SSL 
> enabled, at the same time.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16340) Allow Kerberos + SSL connections to HMS

2017-04-01 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15952434#comment-15952434
 ] 

Pengcheng Xiong commented on HIVE-16340:


Hello, I am deferring this to Hive 3.0 as we are going to cut the first RC and 
it is not marked as a blocker. Please feel free to commit to the branch if this 
can be resolved before the release.


> Allow Kerberos + SSL connections to HMS
> ---
>
> Key: HIVE-16340
> URL: https://issues.apache.org/jira/browse/HIVE-16340
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
> Attachments: HIVE-16340.1.patch
>
>
> It should be possible to connect to HMS with Kerberos authentication and SSL 
> enabled, at the same time.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16346) inheritPerms should be conditional based on the target filesystem

2017-04-01 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-16346:
---
Target Version/s: 3.0.0  (was: 2.3.0, 3.0.0)

> inheritPerms should be conditional based on the target filesystem
> -
>
> Key: HIVE-16346
> URL: https://issues.apache.org/jira/browse/HIVE-16346
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>
> Right now, a lot of the logic in {{Hive.java}} attempts to set permissions of 
> different files that have been moved / copied. This is only triggered if 
> {{hive.warehouse.subdir.inherit.perms}} is set to true.
> However, on blobstores such as S3 there is no concept of file permissions, so 
> these calls are unnecessary and could have a performance impact.
> One solution would be to set {{hive.warehouse.subdir.inherit.perms}} to 
> false, but this would be a global change that affects an entire HS2 instance, 
> so HDFS tables would no longer have permissions inheritance.
> A better solution would be to make the inheritance of permissions conditional 
> on the target filesystem.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-14550) HiveServer2: enable ThriftJDBCBinarySerde use by default

2017-04-01 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15952432#comment-15952432
 ] 

Pengcheng Xiong commented on HIVE-14550:


[~vgumashta], are you working on this? Could you finish it by tomorrow? Thanks.

> HiveServer2: enable ThriftJDBCBinarySerde use by default
> 
>
> Key: HIVE-14550
> URL: https://issues.apache.org/jira/browse/HIVE-14550
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, JDBC, ODBC
>Affects Versions: 2.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Ziyang Zhao
>Priority: Blocker
> Attachments: HIVE-14550.1.patch, HIVE-14550.2.patch
>
>
> We've covered all items in HIVE-12427 and created HIVE-14549 for part2 of the 
> effort. Before closing the umbrella jira, we should enable this feature by 
> default.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16349) Enable DDL statement for non-native tables

2017-04-01 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15952423#comment-15952423
 ] 

Thejas M Nair commented on HIVE-16349:
--

Can you also add details to the description about the kind of operations that 
should be allowed on non-native tables?
Clearly, some operations, like those on partitions, don't make sense on 
non-native tables.


> Enable DDL statement for non-native tables
> --
>
> Key: HIVE-16349
> URL: https://issues.apache.org/jira/browse/HIVE-16349
> Project: Hive
>  Issue Type: New Feature
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-16349.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16349) Enable DDL statement for non-native tables

2017-04-01 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15952421#comment-15952421
 ] 

Thejas M Nair commented on HIVE-16349:
--

Most of the alter statements don't make sense for non-native tables. I don't 
think we should allow them all; instead, we should selectively allow the set of 
alter statements that do make sense for non-native tables. Something along 
those lines is done for views in the same function.

> Enable DDL statement for non-native tables
> --
>
> Key: HIVE-16349
> URL: https://issues.apache.org/jira/browse/HIVE-16349
> Project: Hive
>  Issue Type: New Feature
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-16349.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-15396) Basic Stats are not collected when for managed tables with LOCATION specified

2017-04-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15952393#comment-15952393
 ] 

Hive QA commented on HIVE-15396:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12861605/HIVE-15396.7.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 10564 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr]
 (batchId=141)
org.apache.hadoop.hive.ql.io.orc.TestNewInputOutputFormat.testNewInputFormatPruning
 (batchId=255)
org.apache.hive.hcatalog.api.TestHCatClient.testTransportFailure (batchId=173)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4510/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4510/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4510/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12861605 - PreCommit-HIVE-Build

> Basic Stats are not collected when for managed tables with LOCATION specified
> -
>
> Key: HIVE-15396
> URL: https://issues.apache.org/jira/browse/HIVE-15396
> Project: Hive
>  Issue Type: Bug
>  Components: Statistics
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
> Attachments: HIVE-15396.1.patch, HIVE-15396.2.patch, 
> HIVE-15396.3.patch, HIVE-15396.4.patch, HIVE-15396.5.patch, 
> HIVE-15396.6.patch, HIVE-15396.7.patch
>
>
> Basic stats are not collected when a managed table is created with a 
> specified {{LOCATION}} clause.
> {code}
> 0: jdbc:hive2://localhost:10000> create table hdfs_1 (col int);
> 0: jdbc:hive2://localhost:10000> describe formatted hdfs_1;
> +-------------------------------+------------------------------------------------------------+-----------------------------+
> |           col_name            |                         data_type                          |           comment           |
> +-------------------------------+------------------------------------------------------------+-----------------------------+
> | # col_name                    | data_type                                                  | comment                     |
> |                               | NULL                                                       | NULL                        |
> | col                           | int                                                        |                             |
> |                               | NULL                                                       | NULL                        |
> | # Detailed Table Information  | NULL                                                       | NULL                        |
> | Database:                     | default                                                    | NULL                        |
> | Owner:                        | anonymous                                                  | NULL                        |
> | CreateTime:                   | Wed Mar 22 18:09:19 PDT 2017                               | NULL                        |
> | LastAccessTime:               | UNKNOWN                                                    | NULL                        |
> | Retention:                    | 0                                                          | NULL                        |
> | Location:                     | file:/warehouse/hdfs_1                                     | NULL                        |
> | Table Type:                   | MANAGED_TABLE                                              | NULL                        |
> | Table Parameters:             | NULL                                                       | NULL                        |
> |                               | COLUMN_STATS_ACCURATE                                      | {\"BASIC_STATS\":\"true\"}  |
> |                               | numFiles                                                   | 0                           |
> |                               | numRows                                                    | 0                           |
> |                               | rawDataSize                                                | 0                           |
> |                               | totalSize                                                  | 0                           |
> |                               | transient_lastDdlTime                                      | 1490231359                  |
> |                               | NULL                                                       | NULL                        |

[jira] [Updated] (HIVE-15396) Basic Stats are not collected when for managed tables with LOCATION specified

2017-04-01 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-15396:

Attachment: HIVE-15396.7.patch

> Basic Stats are not collected when for managed tables with LOCATION specified
> -
>
> Key: HIVE-15396
> URL: https://issues.apache.org/jira/browse/HIVE-15396
> Project: Hive
>  Issue Type: Bug
>  Components: Statistics
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
> Attachments: HIVE-15396.1.patch, HIVE-15396.2.patch, 
> HIVE-15396.3.patch, HIVE-15396.4.patch, HIVE-15396.5.patch, 
> HIVE-15396.6.patch, HIVE-15396.7.patch
>
>
> Basic stats are not collected when a managed table is created with a 
> specified {{LOCATION}} clause.
> {code}
> 0: jdbc:hive2://localhost:10000> create table hdfs_1 (col int);
> 0: jdbc:hive2://localhost:10000> describe formatted hdfs_1;
> +-------------------------------+------------------------------------------------------------+-----------------------------+
> |           col_name            |                         data_type                          |           comment           |
> +-------------------------------+------------------------------------------------------------+-----------------------------+
> | # col_name                    | data_type                                                  | comment                     |
> |                               | NULL                                                       | NULL                        |
> | col                           | int                                                        |                             |
> |                               | NULL                                                       | NULL                        |
> | # Detailed Table Information  | NULL                                                       | NULL                        |
> | Database:                     | default                                                    | NULL                        |
> | Owner:                        | anonymous                                                  | NULL                        |
> | CreateTime:                   | Wed Mar 22 18:09:19 PDT 2017                               | NULL                        |
> | LastAccessTime:               | UNKNOWN                                                    | NULL                        |
> | Retention:                    | 0                                                          | NULL                        |
> | Location:                     | file:/warehouse/hdfs_1                                     | NULL                        |
> | Table Type:                   | MANAGED_TABLE                                              | NULL                        |
> | Table Parameters:             | NULL                                                       | NULL                        |
> |                               | COLUMN_STATS_ACCURATE                                      | {\"BASIC_STATS\":\"true\"}  |
> |                               | numFiles                                                   | 0                           |
> |                               | numRows                                                    | 0                           |
> |                               | rawDataSize                                                | 0                           |
> |                               | totalSize                                                  | 0                           |
> |                               | transient_lastDdlTime                                      | 1490231359                  |
> |                               | NULL                                                       | NULL                        |
> | # Storage Information         | NULL                                                       | NULL                        |
> | SerDe Library:                | org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe         | NULL                        |
> | InputFormat:                  | org.apache.hadoop.mapred.TextInputFormat                   | NULL                        |
> | OutputFormat:                 | org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat | NULL                        |
> | Compressed:                   | No                                                         | NULL                        |
> | Num Buckets:                  | -1                                                         | NULL                        |
> | Bucket Columns:               | []                                                         | NULL                        |
> | Sort Columns:                 | []                                                         | NULL                        |
> | Storage Desc Params:          | NULL                                                       | NULL                        |
> |                               | 

[jira] [Updated] (HIVE-16339) QTestUtil pattern masking should only partially mask paths

2017-04-01 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-16339:

Status: Open  (was: Patch Available)

> QTestUtil pattern masking should only partially mask paths
> --
>
> Key: HIVE-16339
> URL: https://issues.apache.org/jira/browse/HIVE-16339
> Project: Hive
>  Issue Type: Improvement
>  Components: Test
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
> Attachments: HIVE-16339.1.patch
>
>
> QTestUtil will mask an entire line in .q.out files if it sees any of the 
> target mask patterns. This seems unnecessary for patterns such as "pfile:", 
> "file:", and "hdfs:", which are targeted at masking file paths.
> Just because a line in a .q.out file contains a path doesn't mean the entire 
> line should be masked. The line could contain useful information. It would be 
> better if just the file path were masked.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16277) Exchange Partition between filesystems throws "IllegalArgumentException Wrong FS"

2017-04-01 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15952341#comment-15952341
 ] 

Sahil Takiar commented on HIVE-16277:
-

[~vihangk1], [~spena], [~mohitsabharwal] any chance someone could take a look 
at this? Right now, I'm looking for some feedback on the approach. I still have 
some code cleanup + unit testing to do.

I'm trying to add support for exchanging partitions across filesystems. I 
introduced a new HiveMetaStore method called {{exchange_partitions_metadata}}, 
which only exchanges the partition metadata in HMS but doesn't actually move 
the data. This is different from the existing {{exchange_partitions}} method in 
HMS, which both renames the partition on the physical filesystem and switches 
the metadata.

I wanted to move the actual renaming of directories to {{Hive.java}}: in the 
case where a folder needs to be moved cross-filesystem, the partition data 
needs to be copied. Doing this in HMS doesn't seem like the right approach, as 
copying the data could take hours, depending on the size of the partition.
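Restating the target scenario as a hedged HiveQL sketch (the table names and partition come from the issue description):

{code}
-- s3_tbl lives on S3 and hdfs_tbl on HDFS; today this fails with "Wrong FS".
-- With the proposed split, HMS would swap only the metadata and Hive.java
-- would copy the partition data across filesystems.
ALTER TABLE s3_tbl EXCHANGE PARTITION (country='USA') WITH TABLE hdfs_tbl;
{code}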

> Exchange Partition between filesystems throws "IllegalArgumentException Wrong 
> FS"
> -
>
> Key: HIVE-16277
> URL: https://issues.apache.org/jira/browse/HIVE-16277
> Project: Hive
>  Issue Type: Bug
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
> Attachments: HIVE-16277.1.patch, HIVE-16277.2.patch, 
> HIVE-16277.3.patch, HIVE-16277.4.patch
>
>
> The following query: {{alter table s3_tbl exchange partition (country='USA') 
> with table hdfs_tbl}} fails with the following exception:
> {code}
> Error: org.apache.hive.service.cli.HiveSQLException: Error while processing 
> statement: FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Got exception: 
> java.lang.IllegalArgumentException Wrong FS: 
> s3a://[bucket]/table/country=USA, expected: file:///)
>   at 
> org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:379)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:256)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:347)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:361)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
> MetaException(message:Got exception: java.lang.IllegalArgumentException Wrong 
> FS: s3a://[bucket]/table/country=USA, expected: file:///)
>   at 
> org.apache.hadoop.hive.ql.metadata.Hive.exchangeTablePartitions(Hive.java:3553)
>   at 
> org.apache.hadoop.hive.ql.exec.DDLTask.exchangeTablePartition(DDLTask.java:4691)
>   at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:570)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2182)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1838)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1525)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1236)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1231)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:254)
>   ... 11 more
> Caused by: MetaException(message:Got exception: 
> java.lang.IllegalArgumentException Wrong FS: 
> s3a://[bucket]/table/country=USA, expected: file:///)
>   at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.logAndThrowMetaException(MetaStoreUtils.java:1387)
>   at 
> org.apache.hadoop.hive.metastore.Warehouse.renameDir(Warehouse.java:208)
>   at 
> org.apache.hadoop.hive.metastore.Warehouse.renameDir(Warehouse.java:200)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.exchange_partitions(HiveMetaStore.java:2967)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 

[jira] [Commented] (HIVE-15007) Hive 1.2.2 release planning

2017-04-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15952332#comment-15952332
 ] 

Hive QA commented on HIVE-15007:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12861603/HIVE-15007-branch-1.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 133 failed/errored test(s), 7897 tests 
executed
*Failed tests:*
{noformat}
TestAdminUser - did not produce a TEST-*.xml file (likely timed out) 
(batchId=339)
TestAuthorizationPreEventListener - did not produce a TEST-*.xml file (likely 
timed out) (batchId=370)
TestAuthzApiEmbedAuthorizerInEmbed - did not produce a TEST-*.xml file (likely 
timed out) (batchId=349)
TestAuthzApiEmbedAuthorizerInRemote - did not produce a TEST-*.xml file (likely 
timed out) (batchId=355)
TestBeeLineWithArgs - did not produce a TEST-*.xml file (likely timed out) 
(batchId=377)
TestCLIAuthzSessionContext - did not produce a TEST-*.xml file (likely timed 
out) (batchId=393)
TestClientSideAuthorizationProvider - did not produce a TEST-*.xml file (likely 
timed out) (batchId=369)
TestCompactor - did not produce a TEST-*.xml file (likely timed out) 
(batchId=359)
TestCreateUdfEntities - did not produce a TEST-*.xml file (likely timed out) 
(batchId=358)
TestCustomAuthentication - did not produce a TEST-*.xml file (likely timed out) 
(batchId=378)
TestDBTokenStore - did not produce a TEST-*.xml file (likely timed out) 
(batchId=324)
TestDDLWithRemoteMetastoreSecondNamenode - did not produce a TEST-*.xml file 
(likely timed out) (batchId=357)
TestDynamicSerDe - did not produce a TEST-*.xml file (likely timed out) 
(batchId=327)
TestEmbeddedHiveMetaStore - did not produce a TEST-*.xml file (likely timed 
out) (batchId=336)
TestEmbeddedThriftBinaryCLIService - did not produce a TEST-*.xml file (likely 
timed out) (batchId=381)
TestFilterHooks - did not produce a TEST-*.xml file (likely timed out) 
(batchId=331)
TestFolderPermissions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=364)
TestHS2AuthzContext - did not produce a TEST-*.xml file (likely timed out) 
(batchId=396)
TestHS2AuthzSessionContext - did not produce a TEST-*.xml file (likely timed 
out) (batchId=397)
TestHS2ImpersonationWithRemoteMS - did not produce a TEST-*.xml file (likely 
timed out) (batchId=385)
TestHiveAuthorizerCheckInvocation - did not produce a TEST-*.xml file (likely 
timed out) (batchId=373)
TestHiveAuthorizerShowFilters - did not produce a TEST-*.xml file (likely timed 
out) (batchId=372)
TestHiveHistory - did not produce a TEST-*.xml file (likely timed out) 
(batchId=375)
TestHiveMetaStoreTxns - did not produce a TEST-*.xml file (likely timed out) 
(batchId=351)
TestHiveMetaStoreWithEnvironmentContext - did not produce a TEST-*.xml file 
(likely timed out) (batchId=341)
TestHiveMetaTool - did not produce a TEST-*.xml file (likely timed out) 
(batchId=354)
TestHiveServer2 - did not produce a TEST-*.xml file (likely timed out) 
(batchId=399)
TestHiveServer2SessionTimeout - did not produce a TEST-*.xml file (likely timed 
out) (batchId=400)
TestHiveSessionImpl - did not produce a TEST-*.xml file (likely timed out) 
(batchId=382)
TestHs2Hooks - did not produce a TEST-*.xml file (likely timed out) 
(batchId=356)
TestHs2HooksWithMiniKdc - did not produce a TEST-*.xml file (likely timed out) 
(batchId=428)
TestJdbcDriver2 - did not produce a TEST-*.xml file (likely timed out) 
(batchId=387)
TestJdbcMetadataApiAuth - did not produce a TEST-*.xml file (likely timed out) 
(batchId=398)
TestJdbcWithLocalClusterSpark - did not produce a TEST-*.xml file (likely timed 
out) (batchId=392)
TestJdbcWithMiniHS2 - did not produce a TEST-*.xml file (likely timed out) 
(batchId=389)
TestJdbcWithMiniKdc - did not produce a TEST-*.xml file (likely timed out) 
(batchId=425)
TestJdbcWithMiniKdcCookie - did not produce a TEST-*.xml file (likely timed 
out) (batchId=424)
TestJdbcWithMiniKdcSQLAuthBinary - did not produce a TEST-*.xml file (likely 
timed out) (batchId=422)
TestJdbcWithMiniKdcSQLAuthHttp - did not produce a TEST-*.xml file (likely 
timed out) (batchId=427)
TestJdbcWithMiniMr - did not produce a TEST-*.xml file (likely timed out) 
(batchId=388)
TestJdbcWithSQLAuthUDFBlacklist - did not produce a TEST-*.xml file (likely 
timed out) (batchId=394)
TestJdbcWithSQLAuthorization - did not produce a TEST-*.xml file (likely timed 
out) (batchId=395)
TestLocationQueries - did not produce a TEST-*.xml file (likely timed out) 
(batchId=362)
TestMTQueries - did not produce a TEST-*.xml file (likely timed out) 
(batchId=360)
TestMarkPartition - did not produce a TEST-*.xml file (likely timed out) 
(batchId=348)
TestMarkPartitionRemote - did not produce a TEST-*.xml file (likely timed out) 
(batchId=352)
TestMetaStoreAuthorization - did not produce a TEST-*.xml file (likely timed 
out) 

[jira] [Updated] (HIVE-16336) Rename hive.spark.use.file.size.for.mapjoin to hive.spark.use.ts.stats.for.mapjoin

2017-04-01 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HIVE-16336:

   Resolution: Fixed
Fix Version/s: 2.3.0
   2.2.0
   2.1.2
   2.0.2
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks [~lirui] and [~leftylev] for reviewing.

> Rename hive.spark.use.file.size.for.mapjoin to 
> hive.spark.use.ts.stats.for.mapjoin
> --
>
> Key: HIVE-16336
> URL: https://issues.apache.org/jira/browse/HIVE-16336
> Project: Hive
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Chao Sun
>Assignee: Chao Sun
> Fix For: 2.0.2, 2.1.2, 2.2.0, 2.3.0
>
> Attachments: HIVE-16336.0.patch, HIVE-16336.1.patch
>
>
> The name {{hive.spark.use.file.size.for.mapjoin}} is confusing. It indicates 
> that HoS uses the file size for map join, but in fact it still uses the 
> (in-memory) data size. We should change it.
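For example, after the rename a user would set the following (a sketch; only the name changes, not the semantics):

{code}
SET hive.spark.use.ts.stats.for.mapjoin=true;
{code}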



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16348) HoS query is canceled but error message shows RPC is closed

2017-04-01 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HIVE-16348:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.2.0
   Status: Resolved  (was: Patch Available)

The test failures are not related. Thanks Xuefu for the review. Pushed to the 
master branch.

> HoS query is canceled but error message shows RPC is closed
> ---
>
> Key: HIVE-16348
> URL: https://issues.apache.org/jira/browse/HIVE-16348
> Project: Hive
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Minor
> Fix For: 2.2.0
>
> Attachments: HIVE-16348.1.patch
>
>
> When a HoS query is interrupted while getting the app ID, it keeps trying to 
> get the status until it times out, and then returns an "RPC is closed" error 
> message, which is misleading.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-15880) Allow insert overwrite and truncate table query to use auto.purge table property

2017-04-01 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-15880:
---
   Resolution: Fixed
Fix Version/s: 3.0.0
   2.3.0
   Status: Resolved  (was: Patch Available)

Committed to 2.3.0 & 3.0.0. Thanks [~vihangk1] for the patch.

> Allow insert overwrite and truncate table query to use auto.purge table 
> property
> 
>
> Key: HIVE-15880
> URL: https://issues.apache.org/jira/browse/HIVE-15880
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
> Fix For: 2.3.0, 3.0.0
>
> Attachments: HIVE-15880.01.patch, HIVE-15880.02.patch, 
> HIVE-15880.03.patch, HIVE-15880.04.patch, HIVE-15880.05.patch, 
> HIVE-15880.06.patch
>
>
> It seems inconsistent that the auto.purge property is not considered when we 
> do an INSERT OVERWRITE, while it is when we do a DROP TABLE.
> DROP TABLE doesn't move table data to Trash when auto.purge is set to true:
> {noformat}
> > create table temp(col1 string, col2 string);
> No rows affected (0.064 seconds)
> > alter table temp set tblproperties('auto.purge'='true');
> No rows affected (0.083 seconds)
> > insert into temp values ('test', 'test'), ('test2', 'test2');
> No rows affected (25.473 seconds)
> # hdfs dfs -ls /user/hive/warehouse/temp
> Found 1 items
> -rwxrwxrwt   3 hive hive 22 2017-02-09 13:03 
> /user/hive/warehouse/temp/000000_0
> #
> > drop table temp;
> No rows affected (0.242 seconds)
> # hdfs dfs -ls /user/hive/warehouse/temp
> ls: `/user/hive/warehouse/temp': No such file or directory
> #
> # sudo -u hive hdfs dfs -ls /user/hive/.Trash/Current/user/hive/warehouse
> #
> {noformat}
> An INSERT OVERWRITE query moves the table data to Trash even when auto.purge 
> is set to true:
> {noformat}
> > create table temp(col1 string, col2 string);
> > alter table temp set tblproperties('auto.purge'='true');
> > insert into temp values ('test', 'test'), ('test2', 'test2');
> # hdfs dfs -ls /user/hive/warehouse/temp
> Found 1 items
> -rwxrwxrwt   3 hive hive 22 2017-02-09 13:07 
> /user/hive/warehouse/temp/000000_0
> #
> > insert overwrite table temp select * from dummy;
> # hdfs dfs -ls /user/hive/warehouse/temp
> Found 1 items
> -rwxrwxrwt   3 hive hive 26 2017-02-09 13:08 
> /user/hive/warehouse/temp/000000_0
> # sudo -u hive hdfs dfs -ls /user/hive/.Trash/Current/user/hive/warehouse
> Found 1 items
> drwx--   - hive hive  0 2017-02-09 13:08 
> /user/hive/.Trash/Current/user/hive/warehouse/temp
> #
> {noformat}
> While move operations are not very costly on HDFS, they can be a significant 
> overhead on slow filesystems like S3. Skipping the move to Trash could improve 
> the performance of {{INSERT OVERWRITE TABLE}} queries, especially when there 
> are a large number of partitions on tables located on S3, should the user wish 
> to set the auto.purge property to true.
> Similarly, a {{TRUNCATE TABLE}} query on a table with the {{auto.purge}} 
> property set to true should not move the data to Trash.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16308) PreExecutePrinter and PostExecutePrinter should log to INFO level instead of ERROR

2017-04-01 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-16308:
---
   Resolution: Fixed
Fix Version/s: 3.0.0
   2.3.0
   Status: Resolved  (was: Patch Available)

Committed to 2.3.0 & 3.0.0. Thanks [~stakiar].

> PreExecutePrinter and PostExecutePrinter should log to INFO level instead of 
> ERROR
> --
>
> Key: HIVE-16308
> URL: https://issues.apache.org/jira/browse/HIVE-16308
> Project: Hive
>  Issue Type: Bug
>  Components: Test
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
> Fix For: 2.3.0, 3.0.0
>
> Attachments: HIVE-16308.1.patch
>
>
> Many of the pre and post hook printers log info at the ERROR level, which is 
> confusing since they aren't errors. They should log to the INFO level.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-15007) Hive 1.2.2 release planning

2017-04-01 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-15007:

Attachment: HIVE-15007-branch-1.2.patch

The last few runs had failed because I had updated poms with 1.2.2 release 
pointers.

> Hive 1.2.2 release planning
> ---
>
> Key: HIVE-15007
> URL: https://issues.apache.org/jira/browse/HIVE-15007
> Project: Hive
>  Issue Type: Task
>Affects Versions: 1.2.1
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-15007-branch-1.1.patch, 
> HIVE-15007-branch-1.2.patch, HIVE-15007-branch-1.2.patch, 
> HIVE-15007-branch-1.2.patch, HIVE-15007-branch-1.2.patch, 
> HIVE-15007-branch-1.2.patch, HIVE-15007-branch-1.2.patch, 
> HIVE-15007-branch-1.2.patch, HIVE-15007-branch-1.2.patch, 
> HIVE-15007-branch-1.2.patch, HIVE-15007-branch-1.patch
>
>
> Discussed with [~spena] about triggering unit test runs for the 1.2.2 release; 
> creating a patch that triggers precommit runs looks like a good way to do it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16233) llap: Query failed with AllocatorOutOfMemoryException

2017-04-01 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15952300#comment-15952300
 ] 

Sergey Shelukhin commented on HIVE-16233:
-

I randomly realized that it won't work for the case where no half-sized buffer 
is available. Luckily, the TODOs in the test already include one for exactly 
that case. We would need another level of recursion at the bottom if we fail to 
find anything and/or fail to lock.

> llap: Query failed with AllocatorOutOfMemoryException
> -
>
> Key: HIVE-16233
> URL: https://issues.apache.org/jira/browse/HIVE-16233
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Siddharth Seth
>Assignee: Sergey Shelukhin
> Attachments: HIVE-16233.WIP.patch
>
>
> {code}
> TaskAttempt 5 failed, info=[Error: Error while running task ( failure ) : 
> attempt_1488231257387_2288_25_05_56_5:java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: 
> java.io.IOException: 
> org.apache.hadoop.hive.common.io.Allocator$AllocatorOutOfMemoryException: 
> Failed to allocate 262144; at 0 out of 1
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:211)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:168)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at 
> org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:110)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.io.IOException: java.io.IOException: 
> org.apache.hadoop.hive.common.io.Allocator$AllocatorOutOfMemoryException: 
> Failed to allocate 262144; at 0 out of 1
> at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:74)
> at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:419)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:185)
> ... 15 more
> Caused by: java.io.IOException: java.io.IOException: 
> org.apache.hadoop.hive.common.io.Allocator$AllocatorOutOfMemoryException: 
> Failed to allocate 262144; at 0 out of 1
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:365)
> at 
> org.apache.hadoop.hive.ql.io.HiveRecordReader.doNext(HiveRecordReader.java:79)
> at 
> org.apache.hadoop.hive.ql.io.HiveRecordReader.doNext(HiveRecordReader.java:33)
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:116)
> at 
> org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:151)
> at 
> org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:116)
> at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:62)
> ... 17 more
> Caused by: java.io.IOException: 
> org.apache.hadoop.hive.common.io.Allocator$AllocatorOutOfMemoryException: 
> Failed to allocate 262144; at 0 out of 1
> at 
> org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.readEncodedColumns(EncodedReaderImpl.java:425)
> at 
> 

[jira] [Updated] (HIVE-16351) Hive confused by CR/LFs

2017-04-01 Thread Daniel Doubrovkine (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Doubrovkine updated HIVE-16351:
--
Description: 
From https://github.com/rcongiu/Hive-JSON-Serde/issues/65

This happens with both JSON and MongoDB connector Serde, so I don't believe 
this is a Serde bug.

Using 
http://www.congiu.net/hive-json-serde/1.3.6/cdh4/json-serde-1.3.6-jar-with-dependencies.jar
 placed into /usr/local/Cellar/apache-hive-1.2.1/lib

A dummy test.json with a CR/LF

{code}
$ cat /tmp/test.json
{"text":"foo\nbar","number":123}

$ hadoop fs -mkdir /user/data

$ hive
hive> CREATE DATABASE test;

hive> CREATE EXTERNAL TABLE test ( text string )
> ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
> LOCATION '/user/data';

hive> SELECT * FROM test;

foo
bar 123
NULL  NULL
{code}

You can see how that's totally wrong; there's only one row of data.

  was:
From https://github.com/rcongiu/Hive-JSON-Serde/issues/65

This happens with both the JSON and the MongoDB connector SerDe, so I don't 
believe this is a SerDe bug.

Using 
http://www.congiu.net/hive-json-serde/1.3.6/cdh4/json-serde-1.3.6-jar-with-dependencies.jar
 placed into /usr/local/Cellar/apache-hive-1.2.1/lib

A dummy test.json with a CR/LF

```
$ cat /tmp/test.json
{"text":"foo\nbar","number":123}

$ hadoop fs -mkdir /user/data

$ hive
hive> CREATE DATABASE test;

hive> CREATE EXTERNAL TABLE test ( text string )
> ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
> LOCATION '/user/data';

hive> SELECT * FROM test;

foo
bar 123
NULL  NULL
```

You can see how that's totally wrong; there's only one row of data.


> Hive confused by CR/LFs
> ---
>
> Key: HIVE-16351
> URL: https://issues.apache.org/jira/browse/HIVE-16351
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Serializers/Deserializers
>Affects Versions: 1.2.1
> Environment: Hadoop 2.7.3
>Reporter: Daniel Doubrovkine
>
> From https://github.com/rcongiu/Hive-JSON-Serde/issues/65
> This happens with both the JSON and the MongoDB connector SerDe, so I don't 
> believe this is a SerDe bug.
> Using 
> http://www.congiu.net/hive-json-serde/1.3.6/cdh4/json-serde-1.3.6-jar-with-dependencies.jar
>  placed into /usr/local/Cellar/apache-hive-1.2.1/lib
> A dummy test.json with a CR/LF
> {code}
> $ cat /tmp/test.json
> {"text":"foo\nbar","number":123}
> $ hadoop fs -mkdir /user/data
> $ hive
> hive> CREATE DATABASE test;
> hive> CREATE EXTERNAL TABLE test ( text string )
> > ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
> > LOCATION '/user/data';
> hive> SELECT * FROM test;
> foo
> bar   123
> NULL  NULL
> {code}
> You can see how that's totally wrong; there's only one row of data.
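A minimal Java sketch of the mechanism, assuming only that the input is read
line by line (this is not Hive code; readLine() stands in for a line-oriented
record reader): the single JSON document arrives as two fragments, and neither
fragment is valid JSON on its own.
{code}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class NewlineSplitSketch {
    public static void main(String[] args) throws IOException {
        // One JSON record whose "text" value contains a raw LF,
        // mirroring the /tmp/test.json repro above.
        String fileContents = "{\"text\":\"foo\nbar\",\"number\":123}";

        // Line-oriented input splits the record at the embedded newline.
        BufferedReader reader = new BufferedReader(new StringReader(fileContents));
        String line;
        while ((line = reader.readLine()) != null) {
            // Prints two fragments:
            //   {"text":"foo
            //   bar","number":123}
            System.out.println("record handed to the SerDe: " + line);
        }
    }
}
{code}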



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16316) Prepare master branch for 3.0.0 development.

2017-04-01 Thread Naveen Gangam (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15952228#comment-15952228
 ] 

Naveen Gangam commented on HIVE-16316:
--

Test failures are unrelated. [~pxiong] [~owen.omalley] could you take a quick 
look so I could commit this? Thanks

> Prepare master branch for 3.0.0 development.
> 
>
> Key: HIVE-16316
> URL: https://issues.apache.org/jira/browse/HIVE-16316
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-16316.patch
>
>
> master branch is now being used for 3.0.0 development. The build files will 
> need to reflect this change.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16350) No Exception when tried to alter table location with invalid uri

2017-04-01 Thread anubhav tarar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

anubhav tarar updated HIVE-16350:
-
Description: 
I tried to alter a table location with an invalid URI; it does not give me any 
exception.

Here are the logs:

hive> alter table hivetable set LOCATION 'loc:home/knoldus/Desktop'
> ;
OK
but at the time of the insert it gives me an exception:

hive> insert into hivetable values(1,2);
FAILED: IllegalStateException Error getting FileSystem for 
loc:/home/knoldus/Desktop: java.io.IOException: No FileSystem for scheme: loc

It should give an exception at the time of the ALTER TABLE command.

  was:
I tried to alter a table location with an invalid URI; it does not give me any 
exception.

Here are the logs:

hive> alter table hivetable set LOCATION 'loc:home/knoldus/Desktop'
> ;
OK
but at the time of the insert it gives me an exception:

hive> insert into hivetable values(1,2);
FAILED: IllegalStateException Error getting FileSystem for 
loc:/home/knoldus/Desktop: java.io.IOException: No FileSystem for scheme: loc


> No Exception when tried to alter table location with invalid uri
> 
>
> Key: HIVE-16350
> URL: https://issues.apache.org/jira/browse/HIVE-16350
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 1.2.1
>Reporter: anubhav tarar
>Assignee: anubhav tarar
>Priority: Trivial
>
> I tried to alter a table location with an invalid URI; it does not give me any 
> exception.
> Here are the logs:
> hive> alter table hivetable set LOCATION 'loc:home/knoldus/Desktop'
> > ;
> OK
> but at the time of the insert it gives me an exception:
> hive> insert into hivetable values(1,2);
> FAILED: IllegalStateException Error getting FileSystem for 
> loc:/home/knoldus/Desktop: java.io.IOException: No FileSystem for scheme: loc
> It should give an exception at the time of the ALTER TABLE command.
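A minimal Java sketch of the eager check being asked for, assuming only
hadoop-common on the classpath (the class and method names below are
illustrative, not Hive's actual fix): resolving the location's FileSystem at
ALTER TABLE time surfaces the same error the INSERT hits later.
{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class LocationUriCheck {
    // Hypothetical validation at ALTER TABLE ... SET LOCATION time.
    static void validateLocation(String location, Configuration conf) throws IOException {
        Path p = new Path(location);
        // For an unregistered scheme such as "loc:", this throws
        // java.io.IOException: No FileSystem for scheme: loc --
        // the same failure the INSERT runs into later.
        p.getFileSystem(conf);
    }

    public static void main(String[] args) throws IOException {
        validateLocation("loc:home/knoldus/Desktop", new Configuration());
    }
}
{code}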



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HIVE-16350) No Exception when tried to alter table location with invalid uri

2017-04-01 Thread anubhav tarar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

anubhav tarar reassigned HIVE-16350:



> No Exception when tried to alter table location with invalid uri
> 
>
> Key: HIVE-16350
> URL: https://issues.apache.org/jira/browse/HIVE-16350
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 1.2.1
>Reporter: anubhav tarar
>Assignee: anubhav tarar
>Priority: Trivial
>
> I tried to alter a table location with an invalid URI; it does not give me any 
> exception.
> Here are the logs:
> hive> alter table hivetable set LOCATION 'loc:home/knoldus/Desktop'
> > ;
> OK
> but at the time of the insert it gives me an exception:
> hive> insert into hivetable values(1,2);
> FAILED: IllegalStateException Error getting FileSystem for 
> loc:/home/knoldus/Desktop: java.io.IOException: No FileSystem for scheme: loc



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-12636) Ensure that all queries (with DbTxnManager) run in a transaction

2017-04-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15952136#comment-15952136
 ] 

Hive QA commented on HIVE-12636:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12861570/HIVE-12636.01.patch

{color:green}SUCCESS:{color} +1 due to 8 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 26 failed/errored test(s), 10539 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] 
(batchId=50)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dbtxnmgr_abort] 
(batchId=40)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dbtxnmgr_showlocks] 
(batchId=73)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=59)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=72)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[subquery_exists_explain_rewrite]
 (batchId=43)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[subquery_in_explain_rewrite]
 (batchId=4)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
 (batchId=135)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_3] 
(batchId=94)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=94)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[dbtxnmgr_nodblock]
 (batchId=86)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[dbtxnmgr_nodbunlock]
 (batchId=87)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[dbtxnmgr_notablelock]
 (batchId=86)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[dbtxnmgr_notableunlock]
 (batchId=87)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[update_no_such_table]
 (batchId=86)
org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveOperationType.checkHiveOperationTypeMatch
 (batchId=271)
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.majorCompactAfterAbort 
(batchId=208)
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.majorCompactWhileStreaming
 (batchId=208)
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.majorCompactWhileStreamingForSplitUpdate
 (batchId=208)
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.minorCompactAfterAbort 
(batchId=208)
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.minorCompactWhileStreaming
 (batchId=208)
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.minorCompactWhileStreamingWithSplitUpdate
 (batchId=208)
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.testMinorCompactionForSplitUpdateWithInsertsAndDeletes
 (batchId=208)
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.testMinorCompactionForSplitUpdateWithOnlyInserts
 (batchId=208)
org.apache.hive.hcatalog.api.TestHCatClient.testTransportFailure (batchId=172)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4508/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4508/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4508/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 26 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12861570 - PreCommit-HIVE-Build

> Ensure that all queries (with DbTxnManager) run in a transaction
> 
>
> Key: HIVE-12636
> URL: https://issues.apache.org/jira/browse/HIVE-12636
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-12636.01.patch
>
>
> Assuming Hive is using DbTxnManager
> Currently (as of this writing, only auto-commit mode is supported), only 
> queries that write to an Acid table start a transaction.
> Read-only queries don't open a txn but still acquire locks.
> This makes internal structures confusing/odd.
> There are always two code paths to deal with, which is inconvenient and error 
> prone.
> Also, a txn id is a convenient "handle" for all locks/resources within a txn.
> Doing this would mean the client no longer needs to track locks that it 
> acquired.  This enables further improvements to the metastore side of Acid.
> # add a metastore call that does openTxn() and acquireLocks() in a single call.  this 
> is to make sure perf doesn't degrade for read-only 
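A purely hypothetical Java sketch of the combined call proposed above (none of
these names exist in Hive's metastore API; it only illustrates the
one-round-trip shape, with the txn id as the single handle):
{code}
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative only -- not Hive code.
public class CombinedOpenTxnSketch {
    private final AtomicLong nextTxnId = new AtomicLong(1);

    // One round trip instead of openTxn() followed by acquireLocks().
    // The returned txn id is the handle for every lock acquired, so the
    // client no longer tracks individual locks itself.
    public long openTxnAndAcquireLocks(String user, List<String> resources) {
        long txnId = nextTxnId.getAndIncrement();
        for (String r : resources) {
            System.out.println("txn " + txnId + " locked " + r + " for " + user);
        }
        return txnId;
    }

    public static void main(String[] args) {
        CombinedOpenTxnSketch metastore = new CombinedOpenTxnSketch();
        long txn = metastore.openTxnAndAcquireLocks("hive", Arrays.asList("db.tbl"));
        System.out.println("read-only query runs inside txn " + txn);
    }
}
{code}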

[jira] [Commented] (HIVE-16336) Rename hive.spark.use.file.size.for.mapjoin to hive.spark.use.ts.stats.for.mapjoin

2017-04-01 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15952134#comment-15952134
 ] 

Lefty Leverenz commented on HIVE-16336:
---

+1 for the parameter descriptions.

Thanks for the formatting fix, [~csun].

> Rename hive.spark.use.file.size.for.mapjoin to 
> hive.spark.use.ts.stats.for.mapjoin
> --
>
> Key: HIVE-16336
> URL: https://issues.apache.org/jira/browse/HIVE-16336
> Project: Hive
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Chao Sun
>Assignee: Chao Sun
> Attachments: HIVE-16336.0.patch, HIVE-16336.1.patch
>
>
> The name {{hive.spark.use.file.size.for.mapjoin}} is misleading: it suggests 
> that HoS uses file size for mapjoin decisions, but in fact it still uses the 
> (in-memory) data size. We should change it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16349) Enable DDL statement for non-native tables

2017-04-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15952119#comment-15952119
 ] 

Hive QA commented on HIVE-16349:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12861563/HIVE-16349.01.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 10545 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[escape_comments] 
(batchId=232)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr]
 (batchId=142)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_non_native]
 (batchId=86)
org.apache.hive.hcatalog.api.TestHCatClient.testTransportFailure (batchId=173)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4507/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4507/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4507/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12861563 - PreCommit-HIVE-Build

> Enable DDL statement for non-native tables
> --
>
> Key: HIVE-16349
> URL: https://issues.apache.org/jira/browse/HIVE-16349
> Project: Hive
>  Issue Type: New Feature
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-16349.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-15007) Hive 1.2.2 release planning

2017-04-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15952096#comment-15952096
 ] 

Hive QA commented on HIVE-15007:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12861557/HIVE-15007-branch-1.2.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4506/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4506/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4506/

Messages:
{noformat}
 This message was trimmed, see log for full details 
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-7-openjdk-amd64/lib/ct.sym(META-INF/sym/rt.jar/java/util/Collection.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-7-openjdk-amd64/lib/ct.sym(META-INF/sym/rt.jar/java/util/Collections.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-7-openjdk-amd64/lib/ct.sym(META-INF/sym/rt.jar/java/util/Comparator.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-7-openjdk-amd64/lib/ct.sym(META-INF/sym/rt.jar/java/util/Iterator.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-7-openjdk-amd64/lib/ct.sym(META-INF/sym/rt.jar/java/util/List.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-7-openjdk-amd64/lib/ct.sym(META-INF/sym/rt.jar/java/util/Map.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-7-openjdk-amd64/lib/ct.sym(META-INF/sym/rt.jar/java/util/StringTokenizer.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.6.0/hadoop-common-2.6.0.jar(org/apache/hadoop/conf/Configuration.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.6.0/hadoop-common-2.6.0.jar(org/apache/hadoop/fs/Path.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.6.0/hadoop-common-2.6.0.jar(org/apache/hadoop/util/StringUtils.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.6.0/hadoop-common-2.6.0.jar(org/apache/hadoop/util/VersionInfo.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-7-openjdk-amd64/lib/ct.sym(META-INF/sym/rt.jar/java/lang/Iterable.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.6.0/hadoop-common-2.6.0.jar(org/apache/hadoop/io/Writable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-7-openjdk-amd64/lib/ct.sym(META-INF/sym/rt.jar/java/lang/String.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/eclipse/jetty/aggregate/jetty-all-server/7.6.0.v20120127/jetty-all-server-7.6.0.v20120127.jar(org/eclipse/jetty/http/HttpStatus.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-7-openjdk-amd64/lib/ct.sym(META-INF/sym/rt.jar/java/util/HashMap.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-core/1.14/jersey-core-1.14.jar(javax/ws/rs/core/MediaType.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-core/1.14/jersey-core-1.14.jar(javax/ws/rs/core/Response.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-branch-1.2-source/ql/target/hive-exec-1.2.2.jar(org/codehaus/jackson/map/ObjectMapper.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-7-openjdk-amd64/lib/ct.sym(META-INF/sym/rt.jar/java/lang/Exception.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-7-openjdk-amd64/lib/ct.sym(META-INF/sym/rt.jar/java/lang/Throwable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-7-openjdk-amd64/lib/ct.sym(META-INF/sym/rt.jar/java/io/Serializable.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-server/1.14/jersey-server-1.14.jar(com/sun/jersey/api/core/PackagesResourceConfig.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-servlet/1.14/jersey-servlet-1.14.jar(com/sun/jersey/spi/container/servlet/ServletContainer.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-branch-1.2-source/common/target/hive-common-1.2.2.jar(org/apache/hadoop/hive/common/classification/InterfaceStability.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-hdfs/2.6.0/hadoop-hdfs-2.6.0.jar(org/apache/hadoop/hdfs/web/AuthFilter.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.6.0/hadoop-common-2.6.0.jar(org/apache/hadoop/security/UserGroupInformation.class)]]
[loading 

[jira] [Commented] (HIVE-16277) Exchange Partition between filesystems throws "IllegalArgumentException Wrong FS"

2017-04-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15952095#comment-15952095
 ] 

Hive QA commented on HIVE-16277:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12861548/HIVE-16277.4.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 10545 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.api.TestHCatClient.testTransportFailure (batchId=173)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4505/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4505/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4505/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12861548 - PreCommit-HIVE-Build

> Exchange Partition between filesystems throws "IllegalArgumentException Wrong 
> FS"
> -
>
> Key: HIVE-16277
> URL: https://issues.apache.org/jira/browse/HIVE-16277
> Project: Hive
>  Issue Type: Bug
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
> Attachments: HIVE-16277.1.patch, HIVE-16277.2.patch, 
> HIVE-16277.3.patch, HIVE-16277.4.patch
>
>
> The following query: {{alter table s3_tbl exchange partition (country='USA') 
> with table hdfs_tbl}} fails with the following exception:
> {code}
> Error: org.apache.hive.service.cli.HiveSQLException: Error while processing 
> statement: FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Got exception: 
> java.lang.IllegalArgumentException Wrong FS: 
> s3a://[bucket]/table/country=USA, expected: file:///)
>   at 
> org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:379)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:256)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:347)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:361)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
> MetaException(message:Got exception: java.lang.IllegalArgumentException Wrong 
> FS: s3a://[bucket]/table/country=USA, expected: file:///)
>   at 
> org.apache.hadoop.hive.ql.metadata.Hive.exchangeTablePartitions(Hive.java:3553)
>   at 
> org.apache.hadoop.hive.ql.exec.DDLTask.exchangeTablePartition(DDLTask.java:4691)
>   at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:570)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2182)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1838)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1525)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1236)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1231)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:254)
>   ... 11 more
> Caused by: MetaException(message:Got exception: 
> java.lang.IllegalArgumentException Wrong FS: 
> s3a://[bucket]/table/country=USA, expected: file:///)
>   at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.logAndThrowMetaException(MetaStoreUtils.java:1387)
>   at 
> org.apache.hadoop.hive.metastore.Warehouse.renameDir(Warehouse.java:208)
>   at 
> org.apache.hadoop.hive.metastore.Warehouse.renameDir(Warehouse.java:200)
>   at 
> 
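A minimal Java sketch of the underlying Hadoop behavior (the bucket name is a
placeholder, as in the report above): handing a Path from one scheme to a
FileSystem of another scheme fails the FileSystem's path check with exactly
this IllegalArgumentException, which is why a cross-filesystem exchange needs a
copy rather than a rename.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WrongFsSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // With the default fs.defaultFS this is the local FS (file:///),
        // matching "expected: file:///" in the stack trace above.
        FileSystem defaultFs = FileSystem.get(conf);

        // Any operation that checks the path against this FS throws
        // IllegalArgumentException: Wrong FS: s3a://..., expected: file:///
        Path s3Path = new Path("s3a://bucket/table/country=USA");
        defaultFs.exists(s3Path);
    }
}
{code}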

[jira] [Commented] (HIVE-16225) Memory leak in Templeton service

2017-04-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15952065#comment-15952065
 ] 

Hive QA commented on HIVE-16225:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12861536/HIVE-16225.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 10544 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[escape_comments] 
(batchId=231)
org.apache.hive.hcatalog.api.TestHCatClient.testTransportFailure (batchId=172)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4504/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4504/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4504/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12861536 - PreCommit-HIVE-Build

> Memory leak in Templeton service
> 
>
> Key: HIVE-16225
> URL: https://issues.apache.org/jira/browse/HIVE-16225
> Project: Hive
>  Issue Type: Bug
>Reporter: Subramanyam Pattipaka
>Assignee: Daniel Dai
> Attachments: HIVE-16225.1.patch, HIVE-16225.2.patch, 
> HIVE-16225.3.patch, screenshot-1.png
>
>
> This is a known beast. Here are the details:
> The problem seems to be similar to the one discussed in HIVE-13749. If we 
> submit a very large number of jobs, like 1000 to 2000, then we can see an 
> increase in the Configuration object count.
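A minimal Java sketch of the retention pattern described above (illustrative
only, not Templeton's actual code): per-job Configuration objects held by a
long-lived reference never become collectable, so heap dumps show the instance
count growing with the number of submitted jobs.
{code}
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;

public class ConfigRetentionSketch {
    // Stand-in for a long-lived cache or listener inside the service.
    static final List<Configuration> RETAINED = new ArrayList<>();

    static void submitJob(int jobId) {
        Configuration jobConf = new Configuration(); // per-job copy
        jobConf.set("job.id", String.valueOf(jobId));
        RETAINED.add(jobConf); // the leak: nothing ever removes it
    }

    public static void main(String[] args) {
        for (int i = 0; i < 2000; i++) {
            submitJob(i);
        }
        System.out.println("retained Configuration objects: " + RETAINED.size());
    }
}
{code}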



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)