[jira] [Commented] (HIVE-13879) add HiveAuthzContext to grant/revoke methods in HiveAuthorizer api

2016-07-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387197#comment-15387197
 ] 

Hive QA commented on HIVE-13879:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12819034/HIVE-13879.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 10 failed/errored test(s), 10344 tests 
executed
*Failed tests:*
{noformat}
TestMsgBusConnection - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_join_nulls
org.apache.hadoop.hive.llap.daemon.impl.TestLlapTokenChecker.testCheckPermissions
org.apache.hadoop.hive.llap.daemon.impl.TestLlapTokenChecker.testGetToken
org.apache.hadoop.hive.metastore.TestMetaStoreMetrics.testConnections
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/587/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/587/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-587/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 10 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12819034 - PreCommit-HIVE-MASTER-Build

> add HiveAuthzContext to grant/revoke methods in HiveAuthorizer api
> --
>
> Key: HIVE-13879
> URL: https://issues.apache.org/jira/browse/HIVE-13879
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Attachments: HIVE-13879.1.patch
>
>
> HiveAuthzContext provides useful information about the context of the 
> commands, such as the command string and the client IP address. However, 
> it is currently available only to the checkPrivileges and filterListCmdObjects 
> API calls. It should also be made available to other API calls, such as the 
> grant/revoke and role-management methods.
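The proposed change can be sketched as follows. This is a hypothetical illustration, not the actual HiveAuthorizer interface: the Context class and grantPrivileges method below are stand-ins showing how a grant call could receive the same command-string and IP information that checkPrivileges already gets.

```java
// Hypothetical sketch (not the actual Hive API): a grant-style call that
// carries the same context (command string, client IP) that checkPrivileges
// already receives via HiveAuthzContext. All names here are illustrative.
import java.util.Arrays;
import java.util.List;

public class AuthzContextSketch {

    // Stand-in for HiveAuthzContext: carries the command string and client IP.
    static final class Context {
        final String commandString;
        final String ipAddress;

        Context(String commandString, String ipAddress) {
            this.commandString = commandString;
            this.ipAddress = ipAddress;
        }
    }

    // A grant method that takes the context alongside its usual arguments, so
    // an authorizer implementation can audit who ran what command from where.
    static String grantPrivileges(List<String> principals, List<String> privileges,
                                  Context ctx) {
        return "GRANT " + String.join(",", privileges)
                + " TO " + String.join(",", principals)
                + " [cmd=" + ctx.commandString + ", ip=" + ctx.ipAddress + "]";
    }

    public static void main(String[] args) {
        Context ctx = new Context("GRANT SELECT ON t TO user1", "10.0.0.5");
        System.out.println(grantPrivileges(
                Arrays.asList("user1"), Arrays.asList("SELECT"), ctx));
    }
}
```

With the context threaded through, audit logging in grant/revoke becomes possible without changing how privileges are actually evaluated.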



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14291) count(*) on a table written by hcatstorer returns incorrect result

2016-07-20 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387140#comment-15387140
 ] 

Ashutosh Chauhan commented on HIVE-14291:
-

+1

> count(*) on a table written by hcatstorer returns incorrect result
> --
>
> Key: HIVE-14291
> URL: https://issues.apache.org/jira/browse/HIVE-14291
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-14291.01.patch, HIVE-14291.02.patch
>
>
> count(*) on a table written by HCatStorer returns an incorrect result.
> Steps to reproduce the issue:
> 1) Create the Hive table:
> {noformat}
> create  table ${DEST_TABLE}(name string, age int, gpa float)
> row format delimited
> fields terminated by '\t'
> stored as textfile;
> {noformat}
> 2) Load data into the table using HCatStorer:
> {noformat}
> A = LOAD '$DATA_1' USING PigStorage() AS (name:chararray, age:int, gpa:float);
> B = LOAD '$DATA_2' USING PigStorage() AS (name:chararray, age:int, gpa:float);
> C = UNION A, B;
> STORE C INTO '$HIVE_TABLE'  USING org.apache.hive.hcatalog.pig.HCatStorer();
> {noformat}





[jira] [Commented] (HIVE-14290) Refactor HIVE-14054 to use Collections#newSetFromMap

2016-07-20 Thread Peter Slawski (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387122#comment-15387122
 ] 

Peter Slawski commented on HIVE-14290:
--

Thank you [~prasanth_j] for the review. It looks like an unrelated error caused 
the build to fail. I have attached the same patch again to this JIRA to 
hopefully trigger the QA build.

{code}
Could not transfer artifact 
org.pentaho:pentaho-aggdesigner-algorithm:pom:5.1.5-jhyde from/to datanucleus
{code}

> Refactor HIVE-14054 to use Collections#newSetFromMap
> 
>
> Key: HIVE-14290
> URL: https://issues.apache.org/jira/browse/HIVE-14290
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 2.1.0
>Reporter: Peter Slawski
>Assignee: Peter Slawski
>Priority: Trivial
> Attachments: HIVE-14290.1.patch, HIVE-14290.1.patch
>
>
> There is a minor refactor that can be made to HiveMetaStoreChecker so that it 
> cleanly creates and uses a set that is backed by a Map implementation. In 
> this case, the underlying Map implementation is ConcurrentHashMap. This 
> refactor will help prevent issues such as the one reported in HIVE-14054.
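The refactor described above can be sketched as follows. The set contents here are illustrative (not Hive's actual fields); the point is the JDK idiom: Collections#newSetFromMap turns a ConcurrentHashMap into a thread-safe Set, hiding the dummy map values.

```java
// Minimal sketch of the described refactor: a concurrent Set view backed by
// a ConcurrentHashMap via Collections.newSetFromMap, as documented in the JDK.
// The element values below are illustrative, not Hive's actual data.
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class NewSetFromMapSketch {
    public static void main(String[] args) {
        // Before: something like a ConcurrentHashMap<String, Boolean> with
        // put(path, true). After: a Set that hides the dummy Boolean values.
        Set<String> partitionPaths =
                Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());

        partitionPaths.add("/warehouse/t/p=1");
        partitionPaths.add("/warehouse/t/p=1"); // duplicate ignored, as with any Set
        partitionPaths.add("/warehouse/t/p=2");

        System.out.println(partitionPaths.size()); // 2
    }
}
```

The resulting Set inherits the concurrency properties of the backing map, so callers get ordinary Set semantics without hand-rolling map-based membership checks.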





[jira] [Updated] (HIVE-14290) Refactor HIVE-14054 to use Collections#newSetFromMap

2016-07-20 Thread Peter Slawski (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Slawski updated HIVE-14290:
-
Attachment: HIVE-14290.1.patch

> Refactor HIVE-14054 to use Collections#newSetFromMap
> 
>
> Key: HIVE-14290
> URL: https://issues.apache.org/jira/browse/HIVE-14290
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 2.1.0
>Reporter: Peter Slawski
>Assignee: Peter Slawski
>Priority: Trivial
> Attachments: HIVE-14290.1.patch, HIVE-14290.1.patch
>
>
> There is a minor refactor that can be made to HiveMetaStoreChecker so that it 
> cleanly creates and uses a set that is backed by a Map implementation. In 
> this case, the underlying Map implementation is ConcurrentHashMap. This 
> refactor will help prevent issues such as the one reported in HIVE-14054.





[jira] [Commented] (HIVE-14205) Hive doesn't support union type with AVRO file format

2016-07-20 Thread Yibing Shi (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387114#comment-15387114
 ] 

Yibing Shi commented on HIVE-14205:
---

Still failed. I will work on a new patch.

> Hive doesn't support union type with AVRO file format
> -
>
> Key: HIVE-14205
> URL: https://issues.apache.org/jira/browse/HIVE-14205
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Reporter: Yibing Shi
>Assignee: Yibing Shi
> Attachments: HIVE-14205.1.patch, HIVE-14205.2.patch, 
> HIVE-14205.3.patch, HIVE-14205.4.patch, HIVE-14205.5.patch
>
>
> Reproduce steps:
> {noformat}
> hive> CREATE TABLE avro_union_test
> > PARTITIONED BY (p int)
> > ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
> > STORED AS INPUTFORMAT 
> 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
> > OUTPUTFORMAT 
> 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
> > TBLPROPERTIES ('avro.schema.literal'='{
> >"type":"record",
> >"name":"nullUnionTest",
> >"fields":[
> >   {
> >  "name":"value",
> >  "type":[
> > "null",
> > "int",
> > "long"
> >  ],
> >  "default":null
> >   }
> >]
> > }');
> OK
> Time taken: 0.105 seconds
> hive> alter table avro_union_test add partition (p=1);
> OK
> Time taken: 0.093 seconds
> hive> select * from avro_union_test;
> FAILED: RuntimeException org.apache.hadoop.hive.ql.metadata.HiveException: 
> Failed with exception Hive internal error inside 
> isAssignableFromSettablePrimitiveOI void not supported 
> yet.java.lang.RuntimeException: Hive internal error inside 
> isAssignableFromSettablePrimitiveOI void not supported yet.
>   at 
> org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.isInstanceOfSettablePrimitiveOI(ObjectInspectorUtils.java:1140)
>   at 
> org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.isInstanceOfSettableOI(ObjectInspectorUtils.java:1149)
>   at 
> org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.hasAllFieldsSettable(ObjectInspectorUtils.java:1187)
>   at 
> org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.hasAllFieldsSettable(ObjectInspectorUtils.java:1220)
>   at 
> org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.hasAllFieldsSettable(ObjectInspectorUtils.java:1200)
>   at 
> org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorConverters.getConvertedOI(ObjectInspectorConverters.java:219)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.setupOutputObjectInspector(FetchOperator.java:581)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.initialize(FetchOperator.java:172)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.<init>(FetchOperator.java:140)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchTask.initialize(FetchTask.java:79)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:482)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:311)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1194)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1289)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1120)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1108)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:218)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:170)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:381)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:773)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:691)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:626)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {noformat}
> Another test case to show this problem is:
> {noformat}
> hive> create table avro_union_test2 (value uniontype) stored as 
> avro;
> OK
> Time taken: 0.053 seconds
> hive> show create table avro_union_test2;
> OK
> CREATE TABLE `avro_union_test2`(
>   `value` uniontype COMMENT '')
> ROW FORMAT SERDE
>   'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
> STORED AS INPUTFORMAT
>   

[jira] [Commented] (HIVE-14214) ORC Schema Evolution and Predicate Push Down do not work together (no rows returned)

2016-07-20 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387086#comment-15387086
 ] 

Matt McCline commented on HIVE-14214:
-

Agreed, except I don't have time to rework it right now. Currently the 
RecordReader holds the SchemaEvolution, not the Reader. I'll file another JIRA 
to rework this later.

> ORC Schema Evolution and Predicate Push Down do not work together (no rows 
> returned)
> 
>
> Key: HIVE-14214
> URL: https://issues.apache.org/jira/browse/HIVE-14214
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-14214.01.patch, HIVE-14214.02.patch, 
> HIVE-14214.03.patch, HIVE-14214.04.patch, HIVE-14214.05.patch, 
> HIVE-14214.06.patch, HIVE-14214.WIP.patch
>
>
> In Schema Evolution, the reader schema is different from the file schema, 
> which is the one used to evaluate predicate push down.





[jira] [Updated] (HIVE-14214) ORC Schema Evolution and Predicate Push Down do not work together (no rows returned)

2016-07-20 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-14214:

Attachment: HIVE-14214.06.patch

> ORC Schema Evolution and Predicate Push Down do not work together (no rows 
> returned)
> 
>
> Key: HIVE-14214
> URL: https://issues.apache.org/jira/browse/HIVE-14214
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-14214.01.patch, HIVE-14214.02.patch, 
> HIVE-14214.03.patch, HIVE-14214.04.patch, HIVE-14214.05.patch, 
> HIVE-14214.06.patch, HIVE-14214.WIP.patch
>
>
> In Schema Evolution, the reader schema is different from the file schema, 
> which is the one used to evaluate predicate push down.





[jira] [Updated] (HIVE-14304) Beeline command will fail when entireLineAsCommand set to true

2016-07-20 Thread niklaus xiao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

niklaus xiao updated HIVE-14304:

Fix Version/s: 2.2.0
   Status: Patch Available  (was: Open)

> Beeline command will fail when entireLineAsCommand set to true
> --
>
> Key: HIVE-14304
> URL: https://issues.apache.org/jira/browse/HIVE-14304
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 1.3.0, 2.2.0
>Reporter: niklaus xiao
>Assignee: niklaus xiao
> Fix For: 2.2.0
>
> Attachments: HIVE-14304.1.patch
>
>
> Use beeline
> {code}
> beeline --entireLineAsCommand=true
> {code}
> show tables fails:
> {code}
> 0: jdbc:hive2://189.39.151.44:21066/> show tables;
> Error: Error while compiling statement: FAILED: ParseException line 1:11 
> extraneous input ';' expecting EOF near '' (state=42000,code=4)
> {code}
> We should remove the trailing semicolon.
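The fix described above can be sketched as follows. This is an illustrative helper, not Beeline's actual code: before dispatching the line in entireLineAsCommand mode, trim any trailing semicolon so the parser never sees the extraneous ';'.

```java
// Hypothetical sketch of the described fix (not Beeline's actual code):
// strip trailing semicolons from a command line before handing it to the
// compiler, so "show tables;" parses as "show tables".
public class TrailingSemicolonSketch {

    static String stripTrailingSemicolon(String line) {
        String trimmed = line.trim();
        // Handle repeated semicolons and trailing whitespace, e.g. "cmd;; "
        while (trimmed.endsWith(";")) {
            trimmed = trimmed.substring(0, trimmed.length() - 1).trim();
        }
        return trimmed;
    }

    public static void main(String[] args) {
        System.out.println(stripTrailingSemicolon("show tables;")); // show tables
    }
}
```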





[jira] [Updated] (HIVE-14303) CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to avoid NPE if ExecReducer.close is called twice.

2016-07-20 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated HIVE-14303:
-
Status: Patch Available  (was: Open)

> CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to 
> avoid NPE if ExecReducer.close is called twice.
> -
>
> Key: HIVE-14303
> URL: https://issues.apache.org/jira/browse/HIVE-14303
> Project: Hive
>  Issue Type: Bug
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.1.0
>
> Attachments: HIVE-14303.000.patch
>
>
> CommonJoinOperator.checkAndGenObject should return directly in the CLOSE state 
> to avoid an NPE if ExecReducer.close is called twice. ExecReducer.close 
> implements the Closeable interface, so it can be called multiple times. We saw 
> the following NPE, which hid the real exception, due to this bug.
> {code}
> Error: java.lang.RuntimeException: Hive Runtime Error while closing 
> operators: null
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:296)
> at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
> at 
> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
> at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:718)
> at 
> org.apache.hadoop.hive.ql.exec.JoinOperator.endGroup(JoinOperator.java:256)
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:284)
> ... 8 more
> {code}
> The code from ReduceTask.runOldReducer:
> {code}
>   reducer.close(); //line 453
>   reducer = null;
>   
>   out.close(reporter);
>   out = null;
> } finally {
>   IOUtils.cleanup(LOG, reducer);// line 459
>   closeQuietly(out, reporter);
> }
> {code}
> Based on the above stack trace and code, reducer.close() is called twice: the 
> exception happened when reducer.close() was called for the first time at line 
> 453, so the code exited before reducer was set to null. The 
> NullPointerException is then triggered when reducer.close() is called a second 
> time in IOUtils.cleanup, and it hides the real exception from the first call 
> at line 453.
> The reason for the NPE is that the first reducer.close calls 
> CommonJoinOperator.closeOp, which clears {{storage}}:
> {code}
> Arrays.fill(storage, null);
> {code}
> The second reducer.close then hits an NPE because {{storage[alias]}} was set 
> to null by the first close.
> The following reducer log can give more proof:
> {code}
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: 0 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: 0 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: SKEWJOINFOLLOWUPJOBS:0
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 1 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 2 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 3 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: 4 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: FS[4]: records written - 
> 53466
> 2016-07-14 22:25:11,555 ERROR [main] ExecReducer: Hit error while closing 
> operators - failing tree
> 2016-07-14 22:25:11,649 WARN [main] org.apache.hadoop.mapred.YarnChild: 
> Exception running child : java.lang.RuntimeException: Hive Runtime Error 
> while closing operators: null
>   at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:296)
>   at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
>   at 
> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
>   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
>   at java.security.AccessController.doPrivileged(Native 

[jira] [Updated] (HIVE-14304) Beeline command will fail when entireLineAsCommand set to true

2016-07-20 Thread niklaus xiao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

niklaus xiao updated HIVE-14304:

Attachment: HIVE-14304.1.patch

> Beeline command will fail when entireLineAsCommand set to true
> --
>
> Key: HIVE-14304
> URL: https://issues.apache.org/jira/browse/HIVE-14304
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 1.3.0, 2.2.0
>Reporter: niklaus xiao
>Assignee: niklaus xiao
> Attachments: HIVE-14304.1.patch
>
>
> Use beeline
> {code}
> beeline --entireLineAsCommand=true
> {code}
> show tables fails:
> {code}
> 0: jdbc:hive2://189.39.151.44:21066/> show tables;
> Error: Error while compiling statement: FAILED: ParseException line 1:11 
> extraneous input ';' expecting EOF near '' (state=42000,code=4)
> {code}
> We should remove the trailing semicolon.





[jira] [Updated] (HIVE-14303) CommonJoinOperator.checkAndGenObject should return directly at CLOSE state to avoid NPE if ExecReducer.close is called twice.

2016-07-20 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated HIVE-14303:
-
Description: 
CommonJoinOperator.checkAndGenObject should return directly in the CLOSE state 
to avoid an NPE if ExecReducer.close is called twice. ExecReducer.close 
implements the Closeable interface, so it can be called multiple times. We saw 
the following NPE, which hid the real exception, due to this bug.
{code}
Error: java.lang.RuntimeException: Hive Runtime Error while closing operators: 
null

at 
org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:296)

at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)

at 
org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)

at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)

at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:415)

at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)

at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

Caused by: java.lang.NullPointerException

at 
org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:718)

at 
org.apache.hadoop.hive.ql.exec.JoinOperator.endGroup(JoinOperator.java:256)

at 
org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:284)

... 8 more
{code}
The code from ReduceTask.runOldReducer:
{code}
  reducer.close(); //line 453
  reducer = null;
  
  out.close(reporter);
  out = null;
} finally {
  IOUtils.cleanup(LOG, reducer);// line 459
  closeQuietly(out, reporter);
}
{code}
Based on the above stack trace and code, reducer.close() is called twice: the 
exception happened when reducer.close() was called for the first time at line 
453, so the code exited before reducer was set to null. The 
NullPointerException is then triggered when reducer.close() is called a second 
time in IOUtils.cleanup, and it hides the real exception from the first call 
at line 453.
The reason for the NPE is that the first reducer.close calls 
CommonJoinOperator.closeOp, which clears {{storage}}:
{code}
Arrays.fill(storage, null);
{code}
The second reducer.close then hits an NPE because {{storage[alias]}} was set 
to null by the first close.
The following reducer log can give more proof:
{code}
2016-07-14 22:24:51,016 INFO [main] 
org.apache.hadoop.hive.ql.exec.JoinOperator: 0 finished. closing... 
2016-07-14 22:24:51,016 INFO [main] 
org.apache.hadoop.hive.ql.exec.JoinOperator: 0 finished. closing... 
2016-07-14 22:24:51,016 INFO [main] 
org.apache.hadoop.hive.ql.exec.JoinOperator: SKEWJOINFOLLOWUPJOBS:0
2016-07-14 22:24:51,016 INFO [main] 
org.apache.hadoop.hive.ql.exec.SelectOperator: 1 finished. closing... 
2016-07-14 22:24:51,016 INFO [main] 
org.apache.hadoop.hive.ql.exec.SelectOperator: 2 finished. closing... 
2016-07-14 22:24:51,016 INFO [main] 
org.apache.hadoop.hive.ql.exec.SelectOperator: 3 finished. closing... 
2016-07-14 22:24:51,016 INFO [main] 
org.apache.hadoop.hive.ql.exec.FileSinkOperator: 4 finished. closing... 
2016-07-14 22:24:51,016 INFO [main] 
org.apache.hadoop.hive.ql.exec.FileSinkOperator: FS[4]: records written - 53466
2016-07-14 22:25:11,555 ERROR [main] ExecReducer: Hit error while closing 
operators - failing tree
2016-07-14 22:25:11,649 WARN [main] org.apache.hadoop.mapred.YarnChild: 
Exception running child : java.lang.RuntimeException: Hive Runtime Error while 
closing operators: null
at 
org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:296)
at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
at 
org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:718)
at 
org.apache.hadoop.hive.ql.exec.JoinOperator.endGroup(JoinOperator.java:256)
at 
org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:284)
... 8 more
{code}

  was:
CommonJoinOperator.checkAndGenObject should return directly in CLOSE state to 
avoid NPE if ExecReducer.close is called twice. 

[jira] [Commented] (HIVE-14303) CommonJoinOperator.checkAndGenObject should return directly in CLOSE state to avoid NPE if ExecReducer.close is called twice.

2016-07-20 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387076#comment-15387076
 ] 

zhihai xu commented on HIVE-14303:
--

I attached patch HIVE-14303.000.patch, which makes checkAndGenObject return 
directly in the CLOSE state when ExecReducer.close is called a second time, so 
the contract of 
https://docs.oracle.com/javase/7/docs/api/java/io/Closeable.html#close() 
(close may be called more than once) is honored.
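The guard can be sketched as follows. This is an illustrative model, not the actual Hive operator code: a state flag makes the second close a no-op, so the already-cleared storage array is never dereferenced.

```java
// Sketch of the described guard (illustrative, not Hive's exact code):
// checkAndGenObject returns immediately once the operator is in the CLOSE
// state, so a second close() call is a harmless no-op instead of touching
// the already-cleared storage array and throwing NullPointerException.
public class IdempotentCloseSketch {
    enum State { INIT, CLOSE }

    private State state = State.INIT;
    private final Object[] storage = new Object[] { "alias0" };
    int generated = 0;

    void checkAndGenObject() {
        if (state == State.CLOSE) {
            return; // already closed; do nothing (the proposed fix)
        }
        // Without the guard above, this would NPE on a second close,
        // because closeOp has already nulled out storage.
        generated += storage[0].toString().length();
    }

    void close() {
        checkAndGenObject();                  // endGroup-style final flush
        java.util.Arrays.fill(storage, null); // closeOp clears storage
        state = State.CLOSE;
    }

    public static void main(String[] args) {
        IdempotentCloseSketch op = new IdempotentCloseSketch();
        op.close();
        op.close(); // second close (e.g. from IOUtils.cleanup) no longer throws
        System.out.println("closed twice without NPE");
    }
}
```

The second close then cannot mask the original exception from the first close, which is the failure mode described in the stack traces above.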

> CommonJoinOperator.checkAndGenObject should return directly in CLOSE state to 
> avoid NPE if ExecReducer.close is called twice.
> -
>
> Key: HIVE-14303
> URL: https://issues.apache.org/jira/browse/HIVE-14303
> Project: Hive
>  Issue Type: Bug
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.1.0
>
> Attachments: HIVE-14303.000.patch
>
>
> CommonJoinOperator.checkAndGenObject should return directly in the CLOSE state 
> to avoid an NPE if ExecReducer.close is called twice. ExecReducer.close 
> implements the Closeable interface, so it can be called multiple times. We saw 
> the following NPE, which hid the real exception, due to this bug.
> {code}
> Error: java.lang.RuntimeException: Hive Runtime Error while closing 
> operators: null
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:296)
> at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
> at 
> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
> at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:718)
> at 
> org.apache.hadoop.hive.ql.exec.JoinOperator.endGroup(JoinOperator.java:256)
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:284)
> ... 8 more
> {code}
> The code from ReduceTask.runOldReducer:
> {code}
>   reducer.close(); //line 453
>   reducer = null;
>   
>   out.close(reporter);
>   out = null;
> } finally {
>   IOUtils.cleanup(LOG, reducer);// line 459
>   closeQuietly(out, reporter);
> }
> {code}
> Based on the above stack trace and code, reducer.close() is called twice: the 
> exception happened when reducer.close() was called for the first time at line 
> 453, so the code exited before reducer was set to null. The 
> NullPointerException is then triggered when reducer.close() is called a second 
> time in IOUtils.cleanup, and it hides the real exception from the first call 
> at line 453.
> The reason for the NPE is that the first reducer.close calls 
> CommonJoinOperator.closeOp, which clears {{storage}}:
> {code}
> Arrays.fill(storage, null);
> {code}
> The second reducer.close then hits an NPE because {{storage[alias]}} was set 
> to null by the first close.
> The following reducer log can give more proof:
> {code}
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: 0 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: 0 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: SKEWJOINFOLLOWUPJOBS:0
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 1 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 2 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 3 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: 4 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: FS[4]: records written - 
> 53466
> 2016-07-14 22:25:11,555 ERROR [main] ExecReducer: Hit error while closing 
> operators - failing tree
> 2016-07-14 22:25:11,649 WARN [main] org.apache.hadoop.mapred.YarnChild: 
> Exception running child : java.lang.RuntimeException: Hive Runtime Error 
> while closing operators: null
>   at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:296)
>   at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
>   at 
> 

[jira] [Updated] (HIVE-14303) CommonJoinOperator.checkAndGenObject should return directly in CLOSE state to avoid NPE if ExecReducer.close is called twice.

2016-07-20 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated HIVE-14303:
-
Attachment: HIVE-14303.000.patch

> CommonJoinOperator.checkAndGenObject should return directly in CLOSE state to 
> avoid NPE if ExecReducer.close is called twice.
> -
>
> Key: HIVE-14303
> URL: https://issues.apache.org/jira/browse/HIVE-14303
> Project: Hive
>  Issue Type: Bug
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.1.0
>
> Attachments: HIVE-14303.000.patch
>
>
> CommonJoinOperator.checkAndGenObject should return directly in the CLOSE state 
> to avoid an NPE if ExecReducer.close is called twice. ExecReducer.close 
> implements the Closeable interface, so it can be called multiple times. We saw 
> the following NPE, which hid the real exception, due to this bug.
> {code}
> Error: java.lang.RuntimeException: Hive Runtime Error while closing 
> operators: null
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:296)
> at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
> at 
> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
> at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:718)
> at 
> org.apache.hadoop.hive.ql.exec.JoinOperator.endGroup(JoinOperator.java:256)
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:284)
> ... 8 more
> {code}
> The code from ReduceTask.runOldReducer:
> {code}
>   reducer.close(); //line 453
>   reducer = null;
>   
>   out.close(reporter);
>   out = null;
> } finally {
>   IOUtils.cleanup(LOG, reducer);// line 459
>   closeQuietly(out, reporter);
> }
> {code}
> Based on the above stack trace and code, reducer.close() is called twice: 
> because an exception was thrown during the first reducer.close() at line 453, 
> the code exited before reducer was set to null. A NullPointerException is then 
> triggered when reducer.close() is called a second time from IOUtils.cleanup 
> (line 459), and that NullPointerException hides the real exception thrown by 
> the first call at line 453.
> The reason for the NPE is that the first reducer.close calls 
> CommonJoinOperator.closeOp, which clears {{storage}}:
> {code}
> Arrays.fill(storage, null);
> {code}
> The second reducer.close then throws the NPE because {{storage[alias]}}, 
> which it dereferences, was set to null by the first reducer.close.
> The following reducer log can give more proof:
> {code}
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: 0 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: 0 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.JoinOperator: SKEWJOINFOLLOWUPJOBS:0
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 1 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 2 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.SelectOperator: 3 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: 4 finished. closing... 
> 2016-07-14 22:24:51,016 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: FS[4]: records written - 
> 53466
> 2016-07-14 22:25:11,555 ERROR [main] ExecReducer: Hit error while closing 
> operators - failing tree
> 2016-07-14 22:25:11,649 WARN [main] org.apache.hadoop.mapred.YarnChild: 
> Exception running child : java.lang.RuntimeException: Hive Runtime Error 
> while closing operators: null
>   at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:296)
>   at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
>   at 
> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
>   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
>   at java.security.AccessController.doPrivileged(Native 
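The double-close failure described in this issue, and the proposed early-return guard, can be reproduced in miniature. The classes below are illustrative stand-ins written for this sketch, not Hive's real operators:

```java
import java.util.Arrays;

// Illustrative sketch, not Hive's real classes: an operator whose close()
// clears its buffers, plus the proposed guard that makes a second close()
// a harmless no-op instead of an NPE.
public class CloseTwiceSketch {
    enum State { INIT, CLOSE }

    static class SimpleJoinOperator {
        State state = State.INIT;
        Object[] storage = { new Object() };

        void checkAndGenObject() {
            if (state == State.CLOSE) {
                return; // proposed fix: bail out once already closed
            }
            // without the guard above, this dereference would throw an NPE on
            // the second close, because the first close nulled out storage
            storage[0].hashCode();
        }

        void close() {
            checkAndGenObject();
            Arrays.fill(storage, null); // mirrors CommonJoinOperator.closeOp
            state = State.CLOSE;
        }
    }

    static String demo() {
        SimpleJoinOperator op = new SimpleJoinOperator();
        op.close();
        op.close(); // e.g. the second call coming from IOUtils.cleanup
        return "double close OK";
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

With the guard removed, the second close() dereferences the nulled-out storage slot and reproduces the NullPointerException from the stack trace above.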

[jira] [Commented] (HIVE-14205) Hive doesn't support union type with AVRO file format

2016-07-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387048#comment-15387048
 ] 

Hive QA commented on HIVE-14205:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12819020/HIVE-14205.5.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/586/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/586/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-586/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.8.0_25 ]]
+ export JAVA_HOME=/usr/java/jdk1.8.0_25
+ JAVA_HOME=/usr/java/jdk1.8.0_25
+ export 
PATH=/usr/java/jdk1.8.0_25/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/java/jdk1.8.0_25/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-MASTER-Build-586/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 3390f5d HIVE-14279 : fix mvn test 
TestHiveMetaStore.testTransactionalValidation  (Zoltan Haindrich via Ashutosh 
Chauhan)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 3390f5d HIVE-14279 : fix mvn test 
TestHiveMetaStore.testTransactionalValidation  (Zoltan Haindrich via Ashutosh 
Chauhan)
+ git merge --ff-only origin/master
Already up-to-date.
+ git gc
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12819020 - PreCommit-HIVE-MASTER-Build

> Hive doesn't support union type with AVRO file format
> -
>
> Key: HIVE-14205
> URL: https://issues.apache.org/jira/browse/HIVE-14205
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Reporter: Yibing Shi
>Assignee: Yibing Shi
> Attachments: HIVE-14205.1.patch, HIVE-14205.2.patch, 
> HIVE-14205.3.patch, HIVE-14205.4.patch, HIVE-14205.5.patch
>
>
> Reproduce steps:
> {noformat}
> hive> CREATE TABLE avro_union_test
> > PARTITIONED BY (p int)
> > ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
> > STORED AS INPUTFORMAT 
> 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
> > OUTPUTFORMAT 
> 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
> > TBLPROPERTIES ('avro.schema.literal'='{
> >"type":"record",
> >"name":"nullUnionTest",
> >"fields":[
> >   {
> >  "name":"value",
> >  "type":[
> > "null",
> > "int",
> > "long"
> >  ],
> >  "default":null
> >   }
> >]
> > }');
> OK
> Time taken: 0.105 seconds
> hive> alter table avro_union_test add partition (p=1);
> OK
> Time taken: 0.093 seconds
> hive> select * from avro_union_test;
> FAILED: RuntimeException org.apache.hadoop.hive.ql.metadata.HiveException: 
> Failed with exception Hive internal error inside 
> isAssignableFromSettablePrimitiveOI void not supported 
> yet.java.lang.RuntimeException: Hive internal error inside 
> isAssignableFromSettablePrimitiveOI void not supported yet.
>   at 
> org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.isInstanceOfSettablePrimitiveOI(ObjectInspectorUtils.java:1140)
>   at 
> 

[jira] [Commented] (HIVE-14290) Refactor HIVE-14054 to use Collections#newSetFromMap

2016-07-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387039#comment-15387039
 ] 

Hive QA commented on HIVE-14290:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12818954/HIVE-14290.1.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/585/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/585/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-585/

Messages:
{noformat}
 This message was trimmed, see log for full details 
main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hive-llap-tez 
---
[INFO] Compiling 11 source files to 
/data/hive-ptest/working/apache-github-source-source/llap-tez/target/classes
[WARNING] 
/data/hive-ptest/working/apache-github-source-source/llap-tez/src/java/org/apache/hadoop/hive/llap/tezplugins/LlapTaskSchedulerService.java:
 
/data/hive-ptest/working/apache-github-source-source/llap-tez/src/java/org/apache/hadoop/hive/llap/tezplugins/LlapTaskSchedulerService.java
 uses unchecked or unsafe operations.
[WARNING] 
/data/hive-ptest/working/apache-github-source-source/llap-tez/src/java/org/apache/hadoop/hive/llap/tezplugins/LlapTaskSchedulerService.java:
 Recompile with -Xlint:unchecked for details.
[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hive-llap-tez ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
/data/hive-ptest/working/apache-github-source-source/llap-tez/src/test/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (setup-test-dirs) @ hive-llap-tez ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/llap-tez/target/tmp
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/llap-tez/target/warehouse
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/llap-tez/target/tmp/conf
 [copy] Copying 15 files to 
/data/hive-ptest/working/apache-github-source-source/llap-tez/target/tmp/conf
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hive-llap-tez ---
[INFO] Compiling 2 source files to 
/data/hive-ptest/working/apache-github-source-source/llap-tez/target/test-classes
[WARNING] 
/data/hive-ptest/working/apache-github-source-source/llap-tez/src/test/org/apache/hadoop/hive/llap/tezplugins/TestLlapTaskCommunicator.java:
 
/data/hive-ptest/working/apache-github-source-source/llap-tez/src/test/org/apache/hadoop/hive/llap/tezplugins/TestLlapTaskCommunicator.java
 uses or overrides a deprecated API.
[WARNING] 
/data/hive-ptest/working/apache-github-source-source/llap-tez/src/test/org/apache/hadoop/hive/llap/tezplugins/TestLlapTaskCommunicator.java:
 Recompile with -Xlint:deprecation for details.
[WARNING] 
/data/hive-ptest/working/apache-github-source-source/llap-tez/src/test/org/apache/hadoop/hive/llap/tezplugins/TestLlapTaskCommunicator.java:
 
/data/hive-ptest/working/apache-github-source-source/llap-tez/src/test/org/apache/hadoop/hive/llap/tezplugins/TestLlapTaskCommunicator.java
 uses unchecked or unsafe operations.
[WARNING] 
/data/hive-ptest/working/apache-github-source-source/llap-tez/src/test/org/apache/hadoop/hive/llap/tezplugins/TestLlapTaskCommunicator.java:
 Recompile with -Xlint:unchecked for details.
[INFO] 
[INFO] --- maven-surefire-plugin:2.16:test (default-test) @ hive-llap-tez ---
[INFO] Tests are skipped.
[INFO] 
[INFO] --- maven-jar-plugin:2.2:jar (default-jar) @ hive-llap-tez ---
[INFO] Building jar: 
/data/hive-ptest/working/apache-github-source-source/llap-tez/target/hive-llap-tez-2.2.0-SNAPSHOT.jar
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ 
hive-llap-tez ---
[INFO] 
[INFO] --- maven-install-plugin:2.4:install (default-install) @ hive-llap-tez 
---
[INFO] Installing 
/data/hive-ptest/working/apache-github-source-source/llap-tez/target/hive-llap-tez-2.2.0-SNAPSHOT.jar
 to 
/data/hive-ptest/working/maven/org/apache/hive/hive-llap-tez/2.2.0-SNAPSHOT/hive-llap-tez-2.2.0-SNAPSHOT.jar
[INFO] Installing 
/data/hive-ptest/working/apache-github-source-source/llap-tez/pom.xml to 
/data/hive-ptest/working/maven/org/apache/hive/hive-llap-tez/2.2.0-SNAPSHOT/hive-llap-tez-2.2.0-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] Building Spark Remote Client 2.2.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean 

[jira] [Commented] (HIVE-14251) Union All of different types resolves to incorrect data

2016-07-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387009#comment-15387009
 ] 

Hive QA commented on HIVE-14251:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12818862/HIVE-14251.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 17 failed/errored test(s), 10342 tests 
executed
*Failed tests:*
{noformat}
TestMsgBusConnection - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_1_23
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_skew_1_23
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union32
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_unionDistinct_1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_unionDistinct_1
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby_sort_1_23
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby_sort_skew_1_23
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_union32
org.apache.hadoop.hive.llap.daemon.impl.TestLlapTokenChecker.testCheckPermissions
org.apache.hadoop.hive.llap.daemon.impl.TestLlapTokenChecker.testGetToken
org.apache.hadoop.hive.metastore.TestMetaStoreMetrics.testConnections
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/584/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/584/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-584/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 17 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12818862 - PreCommit-HIVE-MASTER-Build

> Union All of different types resolves to incorrect data
> ---
>
> Key: HIVE-14251
> URL: https://issues.apache.org/jira/browse/HIVE-14251
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 2.0.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-14251.1.patch
>
>
> create table src(c1 date, c2 int, c3 double);
> insert into src values ('2016-01-01',5,1.25);
> select * from 
> (select c1 from src union all
> select c2 from src union all
> select c3 from src) t;
> It returns NULL for the c1 values. It seems the common data type is resolved 
> to that of the last column, c3, which is double.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14291) count(*) on a table written by hcatstorer returns incorrect result

2016-07-20 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-14291:
---
Status: Open  (was: Patch Available)

> count(*) on a table written by hcatstorer returns incorrect result
> --
>
> Key: HIVE-14291
> URL: https://issues.apache.org/jira/browse/HIVE-14291
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-14291.01.patch, HIVE-14291.02.patch
>
>
> {code}
> count(*) on a table written by hcatstorer returns wrong result. 
> {code}
> steps to repro the issue:
> 1) create hive table
> {noformat}
> create  table ${DEST_TABLE}(name string, age int, gpa float)
> row format delimited
> fields terminated by '\t'
> stored as textfile;
> {noformat}
> 2) load data into table using hcatstorer
> {noformat}
> A = LOAD '$DATA_1' USING PigStorage() AS (name:chararray, age:int, gpa:float);
> B = LOAD '$DATA_2' USING PigStorage() AS (name:chararray, age:int, gpa:float);
> C = UNION A, B;
> STORE C INTO '$HIVE_TABLE'  USING org.apache.hive.hcatalog.pig.HCatStorer();
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14291) count(*) on a table written by hcatstorer returns incorrect result

2016-07-20 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-14291:
---
Attachment: HIVE-14291.02.patch

> count(*) on a table written by hcatstorer returns incorrect result
> --
>
> Key: HIVE-14291
> URL: https://issues.apache.org/jira/browse/HIVE-14291
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-14291.01.patch, HIVE-14291.02.patch
>
>
> {code}
> count(*) on a table written by hcatstorer returns wrong result. 
> {code}
> steps to repro the issue:
> 1) create hive table
> {noformat}
> create  table ${DEST_TABLE}(name string, age int, gpa float)
> row format delimited
> fields terminated by '\t'
> stored as textfile;
> {noformat}
> 2) load data into table using hcatstorer
> {noformat}
> A = LOAD '$DATA_1' USING PigStorage() AS (name:chararray, age:int, gpa:float);
> B = LOAD '$DATA_2' USING PigStorage() AS (name:chararray, age:int, gpa:float);
> C = UNION A, B;
> STORE C INTO '$HIVE_TABLE'  USING org.apache.hive.hcatalog.pig.HCatStorer();
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14291) count(*) on a table written by hcatstorer returns incorrect result

2016-07-20 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-14291:
---
Status: Patch Available  (was: Open)

Addressed [~ashutoshc]'s comments.

> count(*) on a table written by hcatstorer returns incorrect result
> --
>
> Key: HIVE-14291
> URL: https://issues.apache.org/jira/browse/HIVE-14291
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-14291.01.patch, HIVE-14291.02.patch
>
>
> {code}
> count(*) on a table written by hcatstorer returns wrong result. 
> {code}
> steps to repro the issue:
> 1) create hive table
> {noformat}
> create  table ${DEST_TABLE}(name string, age int, gpa float)
> row format delimited
> fields terminated by '\t'
> stored as textfile;
> {noformat}
> 2) load data into table using hcatstorer
> {noformat}
> A = LOAD '$DATA_1' USING PigStorage() AS (name:chararray, age:int, gpa:float);
> B = LOAD '$DATA_2' USING PigStorage() AS (name:chararray, age:int, gpa:float);
> C = UNION A, B;
> STORE C INTO '$HIVE_TABLE'  USING org.apache.hive.hcatalog.pig.HCatStorer();
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14279) fix mvn test TestHiveMetaStore.testTransactionalValidation

2016-07-20 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-14279:

   Resolution: Fixed
Fix Version/s: 2.2.0
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks, Zoltan!

> fix mvn test TestHiveMetaStore.testTransactionalValidation 
> ---
>
> Key: HIVE-14279
> URL: https://issues.apache.org/jira/browse/HIVE-14279
> Project: Hive
>  Issue Type: Improvement
>  Components: Tests
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Minor
> Fix For: 2.2.0
>
> Attachments: HIVE-14279.1.patch
>
>
> This test doesn't drop its table, and because there are a few subclasses of 
> it, the second one will fail because the table already exists. For example:
> {code}
> mvn clean package  -Pitests 
> -Dtest=TestSetUGIOnBothClientServer,TestSetUGIOnOnlyClient
> {code}
> will cause:
> {code}
> org.apache.hadoop.hive.metastore.api.AlreadyExistsException: Table acidTable 
> already exists
> {code}
> for the second test.
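The fix pattern the issue implies, making the fixture safe to run more than once, can be sketched with a stub metastore client. MetaStoreStub below is purely illustrative and is not the real client API:

```java
import java.util.HashSet;
import java.util.Set;

// Generic sketch of the failure mode (stub types, not the real metastore
// client): a fixture that creates a table without dropping it first fails
// on the second run, while a drop-if-exists guard keeps it re-runnable.
public class IdempotentFixtureSketch {
    static class MetaStoreStub {
        final Set<String> tables = new HashSet<>();
        void createTable(String name) {
            if (!tables.add(name)) {
                throw new IllegalStateException("Table " + name + " already exists");
            }
        }
        void dropTableIfExists(String name) { tables.remove(name); }
    }

    // what a re-runnable setUp does before creating its table
    static void setUp(MetaStoreStub ms) {
        ms.dropTableIfExists("acidTable");
        ms.createTable("acidTable");
    }

    public static void main(String[] args) {
        MetaStoreStub ms = new MetaStoreStub();
        setUp(ms); // first subclass run
        setUp(ms); // second subclass run: no "already exists" failure
        System.out.println(ms.tables.contains("acidTable"));
    }
}
```

Without the dropTableIfExists call, the second setUp throws the "already exists" error, mirroring the AlreadyExistsException the second test subclass hits.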



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-14302) Tez: Optimized Hashtable can support DECIMAL keys of same precision

2016-07-20 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V reassigned HIVE-14302:
--

Assignee: Gopal V

> Tez: Optimized Hashtable can support DECIMAL keys of same precision
> ---
>
> Key: HIVE-14302
> URL: https://issues.apache.org/jira/browse/HIVE-14302
> Project: Hive
>  Issue Type: Improvement
>  Components: Tez
>Affects Versions: 2.2.0
>Reporter: Gopal V
>Assignee: Gopal V
>
> Decimal support in the optimized hashtable was decided on the basis of the 
> fact that Decimal(10,1) == Decimal(10,2) when both contain "1.0" and "1.00".
> However, the joins now don't have any issues with decimal precision, because 
> both sides are cast to a common type.
> {code}
> create temporary table x (a decimal(10,2), b decimal(10,1)) stored as orc;
> insert into x values (1.0, 1.0);
> > explain logical select count(1) from x, x x1 where x.a = x1.b;
> OK  
> LOGICAL PLAN:
> $hdt$_0:$hdt$_0:x
>   TableScan (TS_0)
> alias: x
> filterExpr: (a is not null and true) (type: boolean)
> Filter Operator (FIL_18)
>   predicate: (a is not null and true) (type: boolean)
>   Select Operator (SEL_2)
> expressions: a (type: decimal(10,2))
> outputColumnNames: _col0
> Reduce Output Operator (RS_6)
>   key expressions: _col0 (type: decimal(11,2))
>   sort order: +
>   Map-reduce partition columns: _col0 (type: decimal(11,2))
>   Join Operator (JOIN_8)
> condition map:
>  Inner Join 0 to 1
> keys:
>   0 _col0 (type: decimal(11,2))
>   1 _col0 (type: decimal(11,2))
> Group By Operator (GBY_11)
>   aggregations: count(1)
>   mode: hash
>   outputColumnNames: _col0
> {code}
> Note the cast up to decimal(11,2) in the plan, which normalizes both sides of 
> the join so that HiveDecimal keys can be compared as-is.
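The scale sensitivity behind this can be seen with plain java.math.BigDecimal (which HiveDecimal wraps): values of different scale compare equal numerically but are not equals()/hashCode()-equal, so a hashtable keyed on raw decimals needs both sides normalized to one scale. A minimal sketch:

```java
import java.math.BigDecimal;

// Why same-scale keys matter for a hashtable: 1.0 and 1.00 are equal
// numerically (compareTo) but not by equals()/hashCode(), so unnormalized
// decimal keys of different scale would land in different buckets.
public class DecimalKeySketch {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("1.0");   // scale 1, as in decimal(10,1)
        BigDecimal b = new BigDecimal("1.00");  // scale 2, as in decimal(10,2)

        System.out.println(a.compareTo(b) == 0);     // true: numerically equal
        System.out.println(a.equals(b));             // false: different scale
        System.out.println(a.setScale(2).equals(b)); // true: after normalizing
    }
}
```

The plan's cast of both join keys to decimal(11,2) plays the role of the setScale call here: once both sides share one precision/scale, byte-wise or hash-based comparison is safe.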



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14301) insert overwrite fails for nonpartitioned tables in s3

2016-07-20 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386932#comment-15386932
 ] 

Ashutosh Chauhan commented on HIVE-14301:
-

sorry.. my bad.. i read it wrong. +1

> insert overwrite fails for nonpartitioned tables in s3
> --
>
> Key: HIVE-14301
> URL: https://issues.apache.org/jira/browse/HIVE-14301
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-14301.1.patch
>
>
> {noformat}
> hive> insert overwrite table s3_2 select * from default.test2;
> Query ID = hrt_qa_20160719164737_90fb1f30-0ade-4a64-ab65-a6a7550be25a
> Total jobs = 1
> Launching Job 1 out of 1
> Status: Running (Executing on YARN cluster with App id 
> application_1468941549982_0010)
> 
> VERTICES  STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  
> KILLED
> 
> Map 1 ..   SUCCEEDED  1  100   0  
>  0
> 
> VERTICES: 01/01  [==>>] 100%  ELAPSED TIME: 11.90 s   
>  
> 
> Loading data to table default.s3_2
> Failed with exception java.io.IOException: rename for src path: 
> s3a://test-ks/test2/.hive-staging_hive_2016-07-19_16-47-37_787_4725676452829013403-1/-ext-1/00_0.deflate
>  to dest path:s3a://test-ks/test2/00_0.deflate returned false
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.MoveTask
> 2016-07-19 16:43:46,244 ERROR [main]: exec.Task 
> (SessionState.java:printError(948)) - Failed with exception 
> java.io.IOException: rename for src path: 
> s3a://test-ks/testing/.hive-staging_hive_2016-07-19_16-42-20_739_1716954454570249450-1/-ext-1/00_0.deflate
>  to dest path:s3a://test-ks/testing/00_0.deflate returned false
> org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: rename 
> for src path: 
> s3a://test-ks/testing/.hive-staging_hive_2016-07-19_16-42-20_739_1716954454570249450-1/-ext-1/00_0.deflate
>  to dest path:s3a://test-ks/testing/00_0.deflate returned false
>   at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2856)
>   at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:3113)
>   at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:1700)
>   at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:328)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1726)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1472)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1271)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1138)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1128)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:216)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:168)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:379)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:739)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:624)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: rename for src path: 
> s3a://test-ks/testing/.hive-staging_hive_2016-07-19_16-42-20_739_1716954454570249450-1/-ext-1/00_0.deflate
>  to dest path:s3a://test-ks/testing/00_0.deflate returned false
>   at org.apache.hadoop.hive.ql.metadata.Hive$3.call(Hive.java:2836)
>   at org.apache.hadoop.hive.ql.metadata.Hive$3.call(Hive.java:2825)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> 
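One common mitigation for this rename-returned-false failure mode is to attempt the rename and fall back to copy-plus-delete. The sketch below is generic and uses java.nio for the sake of a self-contained example; Hive's MoveTask actually works against Hadoop's FileSystem API, and the file names are placeholders:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch of the defensive pattern only: attempt a rename first and fall back
// to copy-then-delete when the store cannot rename, as object stores such as
// s3a often cannot.
public class MoveWithFallback {
    static void move(Path src, Path dst) throws IOException {
        try {
            Files.move(src, dst, StandardCopyOption.ATOMIC_MOVE);
        } catch (IOException renameFailed) {
            // non-atomic fallback: copy the bytes, then remove the source
            Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
            Files.delete(src);
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("move-sketch");
        Path src = Files.write(dir.resolve("part-0.deflate"), new byte[] {1, 2});
        Path dst = dir.resolve("final.deflate");
        move(src, dst);
        System.out.println(Files.exists(dst) && !Files.exists(src)); // true
    }
}
```

Either path leaves the destination populated and the staging file gone, which is the invariant the MoveTask load step needs.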

[jira] [Commented] (HIVE-14301) insert overwrite fails for nonpartitioned tables in s3

2016-07-20 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386930#comment-15386930
 ] 

Ashutosh Chauhan commented on HIVE-14301:
-

Both destPath and destFile serve the same purpose. You may want to get rid of 
one of them to simplify this.

> insert overwrite fails for nonpartitioned tables in s3
> --
>
> Key: HIVE-14301
> URL: https://issues.apache.org/jira/browse/HIVE-14301
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-14301.1.patch
>
>
> {noformat}
> hive> insert overwrite table s3_2 select * from default.test2;
> Query ID = hrt_qa_20160719164737_90fb1f30-0ade-4a64-ab65-a6a7550be25a
> Total jobs = 1
> Launching Job 1 out of 1
> Status: Running (Executing on YARN cluster with App id 
> application_1468941549982_0010)
> 
> VERTICES  STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  
> KILLED
> 
> Map 1 ..   SUCCEEDED  1  100   0  
>  0
> 
> VERTICES: 01/01  [==>>] 100%  ELAPSED TIME: 11.90 s   
>  
> 
> Loading data to table default.s3_2
> Failed with exception java.io.IOException: rename for src path: 
> s3a://test-ks/test2/.hive-staging_hive_2016-07-19_16-47-37_787_4725676452829013403-1/-ext-1/00_0.deflate
>  to dest path:s3a://test-ks/test2/00_0.deflate returned false
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.MoveTask
> 2016-07-19 16:43:46,244 ERROR [main]: exec.Task 
> (SessionState.java:printError(948)) - Failed with exception 
> java.io.IOException: rename for src path: 
> s3a://test-ks/testing/.hive-staging_hive_2016-07-19_16-42-20_739_1716954454570249450-1/-ext-1/00_0.deflate
>  to dest path:s3a://test-ks/testing/00_0.deflate returned false
> org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: rename 
> for src path: 
> s3a://test-ks/testing/.hive-staging_hive_2016-07-19_16-42-20_739_1716954454570249450-1/-ext-1/00_0.deflate
>  to dest path:s3a://test-ks/testing/00_0.deflate returned false
>   at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2856)
>   at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:3113)
>   at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:1700)
>   at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:328)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1726)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1472)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1271)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1138)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1128)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:216)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:168)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:379)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:739)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:624)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: rename for src path: 
> s3a://test-ks/testing/.hive-staging_hive_2016-07-19_16-42-20_739_1716954454570249450-1/-ext-1/00_0.deflate
>  to dest path:s3a://test-ks/testing/00_0.deflate returned false
>   at org.apache.hadoop.hive.ql.metadata.Hive$3.call(Hive.java:2836)
>   at org.apache.hadoop.hive.ql.metadata.Hive$3.call(Hive.java:2825)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> 

[jira] [Commented] (HIVE-14295) Some metastore event listeners always initialize deleteData as false

2016-07-20 Thread niklaus xiao (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386908#comment-15386908
 ] 

niklaus xiao commented on HIVE-14295:
-

Should be 2.2

> Some metastore event listeners always initialize deleteData as false
> 
>
> Key: HIVE-14295
> URL: https://issues.apache.org/jira/browse/HIVE-14295
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 1.3.0, 2.1.0
>Reporter: niklaus xiao
>Assignee: niklaus xiao
>Priority: Minor
> Fix For: 2.2.0
>
> Attachments: HIVE-14295.1.patch
>
>
> DropTableEvent:
> {code}
>   public DropTableEvent(Table table, boolean status, boolean deleteData, HMSHandler handler) {
>     super(status, handler);
>     this.table = table;
>     // In HiveMetaStore, the deleteData flag indicates whether DFS data should be
>     // removed on a drop.
>     this.deleteData = false;
>   }
> {code}
> Same as PreDropPartitionEvent and PreDropTableEvent
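The fix is presumably to assign the constructor parameter instead of the hard-coded false. A minimal self-contained sketch of the before/after behavior (hypothetical stand-in class, not Hive's actual DropTableEvent):

```java
public class Main {
    // Hypothetical stand-in for DropTableEvent, reduced to the field in question.
    static class DropTableEventSketch {
        final boolean deleteData;

        DropTableEventSketch(boolean deleteData) {
            // Buggy version ignored the parameter: this.deleteData = false;
            this.deleteData = deleteData; // fixed: honor the caller's flag
        }
    }

    public static void main(String[] args) {
        // With the fix, a drop requested with deleteData=true keeps that intent.
        System.out.println(new DropTableEventSketch(true).deleteData); // prints true
    }
}
```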



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14295) Some metastore event listeners always initialize deleteData as false

2016-07-20 Thread niklaus xiao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

niklaus xiao updated HIVE-14295:

Fix Version/s: (was: 1.3.0)
   2.2.0

> Some metastore event listeners always initialize deleteData as false
> 
>
> Key: HIVE-14295
> URL: https://issues.apache.org/jira/browse/HIVE-14295
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 1.3.0, 2.1.0
>Reporter: niklaus xiao
>Assignee: niklaus xiao
>Priority: Minor
> Fix For: 2.2.0
>
> Attachments: HIVE-14295.1.patch
>
>
> DropTableEvent:
> {code}
>   public DropTableEvent(Table table, boolean status, boolean deleteData, HMSHandler handler) {
>     super(status, handler);
>     this.table = table;
>     // In HiveMetaStore, the deleteData flag indicates whether DFS data should be
>     // removed on a drop.
>     this.deleteData = false;
>   }
> {code}
> Same as PreDropPartitionEvent and PreDropTableEvent





[jira] [Updated] (HIVE-14301) insert overwrite fails for nonpartitioned tables in s3

2016-07-20 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HIVE-14301:

Attachment: HIVE-14301.1.patch

> insert overwrite fails for nonpartitioned tables in s3
> --
>
> Key: HIVE-14301
> URL: https://issues.apache.org/jira/browse/HIVE-14301
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-14301.1.patch
>
>
> {noformat}
> hive> insert overwrite table s3_2 select * from default.test2;
> Query ID = hrt_qa_20160719164737_90fb1f30-0ade-4a64-ab65-a6a7550be25a
> Total jobs = 1
> Launching Job 1 out of 1
> Status: Running (Executing on YARN cluster with App id 
> application_1468941549982_0010)
> 
> VERTICES  STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  
> KILLED
> 
> Map 1 ..   SUCCEEDED  1  100   0  
>  0
> 
> VERTICES: 01/01  [==>>] 100%  ELAPSED TIME: 11.90 s   
>  
> 
> Loading data to table default.s3_2
> Failed with exception java.io.IOException: rename for src path: 
> s3a://test-ks/test2/.hive-staging_hive_2016-07-19_16-47-37_787_4725676452829013403-1/-ext-1/00_0.deflate
>  to dest path:s3a://test-ks/test2/00_0.deflate returned false
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.MoveTask
> 2016-07-19 16:43:46,244 ERROR [main]: exec.Task 
> (SessionState.java:printError(948)) - Failed with exception 
> java.io.IOException: rename for src path: 
> s3a://test-ks/testing/.hive-staging_hive_2016-07-19_16-42-20_739_1716954454570249450-1/-ext-1/00_0.deflate
>  to dest path:s3a://test-ks/testing/00_0.deflate returned false
> org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: rename 
> for src path: 
> s3a://test-ks/testing/.hive-staging_hive_2016-07-19_16-42-20_739_1716954454570249450-1/-ext-1/00_0.deflate
>  to dest path:s3a://test-ks/testing/00_0.deflate returned false
>   at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2856)
>   at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:3113)
>   at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:1700)
>   at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:328)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1726)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1472)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1271)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1138)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1128)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:216)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:168)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:379)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:739)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:624)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: rename for src path: 
> s3a://test-ks/testing/.hive-staging_hive_2016-07-19_16-42-20_739_1716954454570249450-1/-ext-1/00_0.deflate
>  to dest path:s3a://test-ks/testing/00_0.deflate returned false
>   at org.apache.hadoop.hive.ql.metadata.Hive$3.call(Hive.java:2836)
>   at org.apache.hadoop.hive.ql.metadata.Hive$3.call(Hive.java:2825)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at 

[jira] [Updated] (HIVE-14301) insert overwrite fails for nonpartitioned tables in s3

2016-07-20 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HIVE-14301:

Status: Patch Available  (was: Open)

> insert overwrite fails for nonpartitioned tables in s3
> --
>
> Key: HIVE-14301
> URL: https://issues.apache.org/jira/browse/HIVE-14301
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-14301.1.patch
>
>
> {noformat}
> hive> insert overwrite table s3_2 select * from default.test2;
> Query ID = hrt_qa_20160719164737_90fb1f30-0ade-4a64-ab65-a6a7550be25a
> Total jobs = 1
> Launching Job 1 out of 1
> Status: Running (Executing on YARN cluster with App id 
> application_1468941549982_0010)
> 
> VERTICES  STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  
> KILLED
> 
> Map 1 ..   SUCCEEDED  1  100   0  
>  0
> 
> VERTICES: 01/01  [==>>] 100%  ELAPSED TIME: 11.90 s   
>  
> 
> Loading data to table default.s3_2
> Failed with exception java.io.IOException: rename for src path: 
> s3a://test-ks/test2/.hive-staging_hive_2016-07-19_16-47-37_787_4725676452829013403-1/-ext-1/00_0.deflate
>  to dest path:s3a://test-ks/test2/00_0.deflate returned false
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.MoveTask
> 2016-07-19 16:43:46,244 ERROR [main]: exec.Task 
> (SessionState.java:printError(948)) - Failed with exception 
> java.io.IOException: rename for src path: 
> s3a://test-ks/testing/.hive-staging_hive_2016-07-19_16-42-20_739_1716954454570249450-1/-ext-1/00_0.deflate
>  to dest path:s3a://test-ks/testing/00_0.deflate returned false
> org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: rename 
> for src path: 
> s3a://test-ks/testing/.hive-staging_hive_2016-07-19_16-42-20_739_1716954454570249450-1/-ext-1/00_0.deflate
>  to dest path:s3a://test-ks/testing/00_0.deflate returned false
>   at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2856)
>   at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:3113)
>   at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:1700)
>   at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:328)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1726)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1472)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1271)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1138)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1128)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:216)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:168)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:379)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:739)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:624)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: rename for src path: 
> s3a://test-ks/testing/.hive-staging_hive_2016-07-19_16-42-20_739_1716954454570249450-1/-ext-1/00_0.deflate
>  to dest path:s3a://test-ks/testing/00_0.deflate returned false
>   at org.apache.hadoop.hive.ql.metadata.Hive$3.call(Hive.java:2836)
>   at org.apache.hadoop.hive.ql.metadata.Hive$3.call(Hive.java:2825)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at 

[jira] [Updated] (HIVE-14301) insert overwrite fails for nonpartitioned tables in s3

2016-07-20 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HIVE-14301:

Component/s: Metastore

> insert overwrite fails for nonpartitioned tables in s3
> --
>
> Key: HIVE-14301
> URL: https://issues.apache.org/jira/browse/HIVE-14301
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
>
> {noformat}
> hive> insert overwrite table s3_2 select * from default.test2;
> Query ID = hrt_qa_20160719164737_90fb1f30-0ade-4a64-ab65-a6a7550be25a
> Total jobs = 1
> Launching Job 1 out of 1
> Status: Running (Executing on YARN cluster with App id 
> application_1468941549982_0010)
> 
> VERTICES  STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  
> KILLED
> 
> Map 1 ..   SUCCEEDED  1  100   0  
>  0
> 
> VERTICES: 01/01  [==>>] 100%  ELAPSED TIME: 11.90 s   
>  
> 
> Loading data to table default.s3_2
> Failed with exception java.io.IOException: rename for src path: 
> s3a://test-ks/test2/.hive-staging_hive_2016-07-19_16-47-37_787_4725676452829013403-1/-ext-1/00_0.deflate
>  to dest path:s3a://test-ks/test2/00_0.deflate returned false
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.MoveTask
> 2016-07-19 16:43:46,244 ERROR [main]: exec.Task 
> (SessionState.java:printError(948)) - Failed with exception 
> java.io.IOException: rename for src path: 
> s3a://test-ks/testing/.hive-staging_hive_2016-07-19_16-42-20_739_1716954454570249450-1/-ext-1/00_0.deflate
>  to dest path:s3a://test-ks/testing/00_0.deflate returned false
> org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: rename 
> for src path: 
> s3a://test-ks/testing/.hive-staging_hive_2016-07-19_16-42-20_739_1716954454570249450-1/-ext-1/00_0.deflate
>  to dest path:s3a://test-ks/testing/00_0.deflate returned false
>   at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2856)
>   at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:3113)
>   at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:1700)
>   at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:328)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1726)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1472)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1271)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1138)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1128)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:216)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:168)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:379)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:739)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:624)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: rename for src path: 
> s3a://test-ks/testing/.hive-staging_hive_2016-07-19_16-42-20_739_1716954454570249450-1/-ext-1/00_0.deflate
>  to dest path:s3a://test-ks/testing/00_0.deflate returned false
>   at org.apache.hadoop.hive.ql.metadata.Hive$3.call(Hive.java:2836)
>   at org.apache.hadoop.hive.ql.metadata.Hive$3.call(Hive.java:2825)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
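The failure mode in the log above is that the filesystem's rename reports failure by returning false rather than by throwing, and Hive's moveFile wraps that false return into the IOException shown. A minimal sketch of that check, using a hypothetical Renamer interface rather than Hadoop's actual FileSystem API:

```java
import java.io.IOException;

public class Main {
    // Hypothetical stand-in for FileSystem.rename, which signals failure
    // with a false return value instead of an exception.
    interface Renamer {
        boolean rename(String src, String dst);
    }

    // Convert the boolean failure into an exception, as Hive.moveFile does.
    static void moveOrThrow(Renamer fs, String src, String dst) throws IOException {
        if (!fs.rename(src, dst)) {
            throw new IOException("rename for src path: " + src
                    + " to dest path: " + dst + " returned false");
        }
    }

    public static void main(String[] args) {
        try {
            moveOrThrow((s, d) -> false, "s3a://bucket/src", "s3a://bucket/dst");
        } catch (IOException e) {
            System.out.println("failed: " + e.getMessage().contains("returned false"));
        }
    }
}
```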




[jira] [Updated] (HIVE-14224) LLAP rename query specific log files once a query is complete

2016-07-20 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-14224:
--
   Resolution: Fixed
Fix Version/s: 2.2.0
   Status: Resolved  (was: Patch Available)

Created HIVE-14300 to track the race mentioned in the comments.

> LLAP rename query specific log files once a query is complete
> -
>
> Key: HIVE-14224
> URL: https://issues.apache.org/jira/browse/HIVE-14224
> Project: Hive
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Fix For: 2.2.0
>
> Attachments: HIVE-14224.02.patch, HIVE-14224.03.patch, 
> HIVE-14224.04.patch, HIVE-14224.05.patch, HIVE-14224.wip.01.patch
>
>
> Once a query is complete, rename the query specific log file so that YARN can 
> aggregate the logs (once it's configured to do so).





[jira] [Commented] (HIVE-14224) LLAP rename query specific log files once a query is complete

2016-07-20 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386854#comment-15386854
 ] 

Siddharth Seth commented on HIVE-14224:
---

Thanks for the reviews. Committing. The test failures are not related.

> LLAP rename query specific log files once a query is complete
> -
>
> Key: HIVE-14224
> URL: https://issues.apache.org/jira/browse/HIVE-14224
> Project: Hive
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-14224.02.patch, HIVE-14224.03.patch, 
> HIVE-14224.04.patch, HIVE-14224.05.patch, HIVE-14224.wip.01.patch
>
>
> Once a query is complete, rename the query specific log file so that YARN can 
> aggregate the logs (once it's configured to do so).





[jira] [Updated] (HIVE-14299) Log serialized plan size

2016-07-20 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-14299:
-
Status: Patch Available  (was: Open)

> Log serialized plan size 
> -
>
> Key: HIVE-14299
> URL: https://issues.apache.org/jira/browse/HIVE-14299
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Minor
> Attachments: HIVE-14299.1.patch
>
>
> It will be good to log the size of the serialized plan. This can help 
> identifying cases where large objects are accidentally serialized. 





[jira] [Commented] (HIVE-14290) Refactor HIVE-14054 to use Collections#newSetFromMap

2016-07-20 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386850#comment-15386850
 ] 

Prasanth Jayachandran commented on HIVE-14290:
--

+1

> Refactor HIVE-14054 to use Collections#newSetFromMap
> 
>
> Key: HIVE-14290
> URL: https://issues.apache.org/jira/browse/HIVE-14290
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 2.1.0
>Reporter: Peter Slawski
>Assignee: Peter Slawski
>Priority: Trivial
> Attachments: HIVE-14290.1.patch
>
>
> There is a minor refactor that can be made to HiveMetaStoreChecker so that it 
> cleanly creates and uses a set that is backed by a Map implementation. In 
> this case, the underlying Map implementation is ConcurrentHashMap. This 
> refactor will help prevent issues such as the one reported in HIVE-14054.
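For reference, the pattern the refactor proposes is the standard way to obtain a concurrent Set in the JDK (a generic sketch, not the actual HiveMetaStoreChecker code):

```java
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class Main {
    public static void main(String[] args) {
        // A thread-safe Set view backed by a ConcurrentHashMap, as proposed.
        Set<String> partitionPaths =
                Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());
        partitionPaths.add("p=1");
        partitionPaths.add("p=1"); // duplicate add is a no-op, as with any Set
        System.out.println(partitionPaths.size()); // prints 1
    }
}
```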





[jira] [Updated] (HIVE-14299) Log serialized plan size

2016-07-20 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-14299:
-
Attachment: HIVE-14299.1.patch

> Log serialized plan size 
> -
>
> Key: HIVE-14299
> URL: https://issues.apache.org/jira/browse/HIVE-14299
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Minor
> Attachments: HIVE-14299.1.patch
>
>
> It will be good to log the size of the serialized plan. This can help 
> identifying cases where large objects are accidentally serialized. 





[jira] [Commented] (HIVE-14224) LLAP rename query specific log files once a query is complete

2016-07-20 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386844#comment-15386844
 ] 

Prasanth Jayachandran commented on HIVE-14224:
--

changes lgtm, +1

> LLAP rename query specific log files once a query is complete
> -
>
> Key: HIVE-14224
> URL: https://issues.apache.org/jira/browse/HIVE-14224
> Project: Hive
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-14224.02.patch, HIVE-14224.03.patch, 
> HIVE-14224.04.patch, HIVE-14224.05.patch, HIVE-14224.wip.01.patch
>
>
> Once a query is complete, rename the query specific log file so that YARN can 
> aggregate the logs (once it's configured to do so).





[jira] [Commented] (HIVE-14225) Llap slider package should support configuring YARN rolling log aggregation

2016-07-20 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386838#comment-15386838
 ] 

Siddharth Seth commented on HIVE-14225:
---

I think the query-routing name still makes sense, since this is query-based 
routing.

> Llap slider package should support configuring YARN rolling log aggregation
> ---
>
> Key: HIVE-14225
> URL: https://issues.apache.org/jira/browse/HIVE-14225
> Project: Hive
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-14225.01.patch
>
>






[jira] [Updated] (HIVE-14225) Llap slider package should support configuring YARN rolling log aggregation

2016-07-20 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-14225:
--
Status: Patch Available  (was: Open)

> Llap slider package should support configuring YARN rolling log aggregation
> ---
>
> Key: HIVE-14225
> URL: https://issues.apache.org/jira/browse/HIVE-14225
> Project: Hive
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-14225.01.patch
>
>






[jira] [Updated] (HIVE-14224) LLAP rename query specific log files once a query is complete

2016-07-20 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-14224:
--
Attachment: HIVE-14224.05.patch

Updated patch with a log message, and some null checks. The exception handler 
can be a separate jira.

> LLAP rename query specific log files once a query is complete
> -
>
> Key: HIVE-14224
> URL: https://issues.apache.org/jira/browse/HIVE-14224
> Project: Hive
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-14224.02.patch, HIVE-14224.03.patch, 
> HIVE-14224.04.patch, HIVE-14224.05.patch, HIVE-14224.wip.01.patch
>
>
> Once a query is complete, rename the query specific log file so that YARN can 
> aggregate the logs (once it's configured to do so).





[jira] [Updated] (HIVE-13560) Adding Omid as connection manager for HBase Metastore

2016-07-20 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-13560:
--
Attachment: HIVE-13560.9.patch

> Adding Omid as connection manager for HBase Metastore
> -
>
> Key: HIVE-13560
> URL: https://issues.apache.org/jira/browse/HIVE-13560
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Metastore
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Attachments: HIVE-13560.1.patch, HIVE-13560.2.patch, 
> HIVE-13560.3.patch, HIVE-13560.4.patch, HIVE-13560.5.patch, 
> HIVE-13560.6.patch, HIVE-13560.7.patch, HIVE-13560.8.patch, HIVE-13560.9.patch
>
>
> Adding Omid as a transaction manager to HBase Metastore. 





[jira] [Comment Edited] (HIVE-14035) Enable predicate pushdown to delta files created by ACID Transactions

2016-07-20 Thread Saket Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386761#comment-15386761
 ] 

Saket Saurabh edited comment on HIVE-14035 at 7/20/16 10:52 PM:


Updated the patch by rebasing with master. No additional code changes. Patch (#10).


was (Author: saketj):
Updated the patch by rebasing with master. No additional code changes.

> Enable predicate pushdown to delta files created by ACID Transactions
> -
>
> Key: HIVE-14035
> URL: https://issues.apache.org/jira/browse/HIVE-14035
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Saket Saurabh
>Assignee: Saket Saurabh
> Attachments: HIVE-14035.02.patch, HIVE-14035.03.patch, 
> HIVE-14035.04.patch, HIVE-14035.05.patch, HIVE-14035.06.patch, 
> HIVE-14035.07.patch, HIVE-14035.08.patch, HIVE-14035.09.patch, 
> HIVE-14035.10.patch, HIVE-14035.patch
>
>
> In current Hive version, delta files created by ACID transactions do not 
> allow predicate pushdown if they contain any update/delete events. This is 
> done to preserve correctness when following a multi-version approach during 
> event collapsing, where an update event overwrites an existing insert event. 
> This JIRA proposes to split an update event into a combination of a delete 
> event followed by a new insert event, that can enable predicate push down to 
> all delta files without breaking correctness. To support backward 
> compatibility for this feature, this JIRA also proposes to add some sort of 
> versioning to ACID that can allow different versions of ACID transactions to 
> co-exist together.
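As a toy illustration of the proposed event split (hypothetical types, not Hive's ACID internals): one update event is rewritten as a delete event followed by a new insert event, so delta files carry only inserts and deletes and predicate pushdown no longer has to collapse updates over earlier inserts.

```java
import java.util.Arrays;
import java.util.List;

public class Main {
    // Rewrite an update of a given row into delete-then-insert events.
    static List<String> splitUpdate(long rowId) {
        return Arrays.asList("delete:" + rowId, "insert:" + rowId);
    }

    public static void main(String[] args) {
        System.out.println(splitUpdate(42L)); // prints [delete:42, insert:42]
    }
}
```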





[jira] [Commented] (HIVE-13708) Create table should verify datatypes supported by the serde

2016-07-20 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386762#comment-15386762
 ] 

Hari Sankar Sivarama Subramaniyan commented on HIVE-13708:
--

[~ashutoshc] My .1 patch does not support non-string column types with 
OpenCSVSerde; rather, it throws an error when non-string columns are used. The 
change for HIVE-13709 might be to replace the code below with inspectors 
corresponding to each field's type, and to make the corresponding changes 
everywhere else affected:
{code}
for (int i = 0; i < numCols; i++) {
  columnOIs.add(PrimitiveObjectInspectorFactory.javaStringObjectInspector);
}
{code}

Thanks
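A rough illustration of the direction described, with plain strings standing in for the PrimitiveObjectInspectorFactory inspectors (entirely hypothetical, not Hive code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class Main {
    // Choose an inspector per declared column type instead of always using
    // the string inspector; strings stand in for real ObjectInspectors.
    static List<String> chooseInspectors(List<String> columnTypes) {
        List<String> inspectors = new ArrayList<>();
        for (String t : columnTypes) {
            inspectors.add(t.startsWith("decimal") ? "decimalOI"
                    : t.equals("int") ? "intOI" : "stringOI");
        }
        return inspectors;
    }

    public static void main(String[] args) {
        System.out.println(chooseInspectors(Arrays.asList("string", "int", "decimal(38,10)")));
        // prints [stringOI, intOI, decimalOI]
    }
}
```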

> Create table should verify datatypes supported by the serde
> ---
>
> Key: HIVE-13708
> URL: https://issues.apache.org/jira/browse/HIVE-13708
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Thejas M Nair
>Assignee: Hari Sankar Sivarama Subramaniyan
>Priority: Critical
> Attachments: HIVE-13708.1.patch, HIVE-13708.2.patch, 
> HIVE-13708.3.patch, HIVE-13708.4.patch
>
>
> As [~Goldshuv] mentioned in HIVE-.
> Create table with serde such as OpenCSVSerde allows for creation of table 
> with columns of arbitrary types. But 'describe table' would still return 
> string datatypes, and so does selects on the table.
> This is misleading and would result in users not getting intended results.
> The create table ideally should disallow the creation of such tables with 
> unsupported types.
> Example posted by [~Goldshuv] in HIVE- -
> {noformat}
> CREATE EXTERNAL TABLE test (totalprice DECIMAL(38,10)) 
> ROW FORMAT SERDE 'com.bizo.hive.serde.csv.CSVSerde' with 
> serdeproperties ("separatorChar" = ",","quoteChar"= "'","escapeChar"= "\\") 
> STORED AS TEXTFILE 
> LOCATION '' 
> tblproperties ("skip.header.line.count"="1");
> {noformat}
> Now consider this sql:
> hive> select min(totalprice) from test;
> in this case given my data, the result should have been 874.89, but the 
> actual result became 11.57 (as it is first according to byte ordering of 
> a string type). this is a wrong result.
> hive> desc extended test;
> OK
> o_totalprice  string  from deserializer
> ...





[jira] [Updated] (HIVE-14291) count(*) on a table written by hcatstorer returns incorrect result

2016-07-20 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-14291:
---
Status: Patch Available  (was: Open)

> count(*) on a table written by hcatstorer returns incorrect result
> --
>
> Key: HIVE-14291
> URL: https://issues.apache.org/jira/browse/HIVE-14291
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-14291.01.patch
>
>
> {code}
> count(*) on a table written by hcatstorer returns wrong result. 
> {code}
> steps to repro the issue:
> 1) create hive table
> {noformat}
> create  table ${DEST_TABLE}(name string, age int, gpa float)
> row format delimited
> fields terminated by '\t'
> stored as textfile;
> {noformat}
> 2) load data into table using hcatstorer
> {noformat}
> A = LOAD '$DATA_1' USING PigStorage() AS (name:chararray, age:int, gpa:float);
> B = LOAD '$DATA_2' USING PigStorage() AS (name:chararray, age:int, gpa:float);
> C = UNION A, B;
> STORE C INTO '$HIVE_TABLE'  USING org.apache.hive.hcatalog.pig.HCatStorer();
> {noformat}





[jira] [Commented] (HIVE-14282) HCatLoader ToDate() exception with hive partition table ,partitioned by column of DATE datatype

2016-07-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386700#comment-15386700
 ] 

Hive QA commented on HIVE-14282:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12818869/HIVE-14282.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 10341 tests 
executed
*Failed tests:*
{noformat}
TestMsgBusConnection - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.llap.daemon.impl.TestLlapTokenChecker.testCheckPermissions
org.apache.hadoop.hive.llap.daemon.impl.TestLlapTokenChecker.testGetToken
org.apache.hadoop.hive.metastore.TestMetaStoreMetrics.testConnections
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/583/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/583/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-583/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 9 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12818869 - PreCommit-HIVE-MASTER-Build

> HCatLoader ToDate() exception with hive partition table ,partitioned by 
> column of DATE datatype
> ---
>
> Key: HIVE-14282
> URL: https://issues.apache.org/jira/browse/HIVE-14282
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 1.2.1
> Environment: PIG Version : (0.15.0) 
> HIVE : 1.2.1
> OS Version : CentOS release 6.7 (Final)
> OS Kernel : 2.6.32-573.18.1.el6.x86_64
>Reporter: Raghavender Rao Guruvannagari
>Assignee: Daniel Dai
> Fix For: 1.3.0, 2.2.0, 2.1.1
>
> Attachments: HIVE-14282.1.patch
>
>
> The ToDate() function doesn't work with a table partitioned by a column of 
> the DATE datatype.
> Below are the steps I followed to recreate the problem.
> -->Sample input file to hive table :
> hdfs@testhost ~$ cat test.log 
> 2012-06-13,16:11:17,574,140.134.127.109,SearchPage,Google.com,Win8,5,HTC
> 2012-06-13,16:11:17,466,43.176.108.158,Electronics,Google.com,Win8,3,iPhone
> 2012-06-13,16:11:17,501,97.73.102.79,Appliances,Google.com,Android,4,iPhone
> 2012-06-13,16:11:17,469,166.98.157.122,Recommendations,Google.com,Win8,5,HTC
> 2012-06-13,16:11:17,557,36.159.147.50,Sporting,Google.com,Win8,3,Samsung
> 2012-06-13,16:11:17,449,128.215.122.234,ShoppingCart,Google.com,Win8,5,HTC
> 2012-06-13,16:11:17,502,46.81.131.92,Electronics,Google.com,Android,5,Samsung
> 2012-06-13,16:11:17,554,120.187.105.127,Automotive,Google.com,Win8,5,HTC
> 2012-06-13,16:11:17,447,127.94.64.59,DetailPage,Google.com,Win8,3,Samsung
> 2012-06-13,16:11:17,490,132.54.25.75,ShoppingCart,Google.com,Win8,3,iPhone
> 2012-06-13,16:11:17,578,79.201.53.179,Automotive,Google.com,Win8,5,Samsung
> 2012-06-13,16:11:17,435,158.106.164.38,HomePage,Google.com,Web,5,Chrome
> 2012-06-13,16:11:17,523,17.131.82.171,Recommendations,Google.com,Web,3,IE9
> 2012-06-13,16:11:17,575,178.95.126.105,Appliances,Google.com,iOS,3,iPhone
> 2012-06-13,16:11:17,468,225.143.39.176,SearchPage,Google.com,iOS,5,HTC
> 2012-06-13,16:11:17,511,43.103.102.147,ShoppingCart,Google.com,iOS,5,Samsung
> --> Copied to hdfs directory:
> hdfs@testhost ~$ hdfs dfs -put -f test.log /user/hdfs/
> -->Create a partitioned table (partitioned by a DATE-type column) in Hive:
> 0: jdbc:hive2://hdp2.raghav.com:1/default> create table mytable(Dt 
> DATE,Time STRING,Number INT,IPAddr STRING,Type STRING,Site STRING,OSType 
> STRING,Visit INT,PhModel STRING) row format delimited fields terminated by 
> ',' stored as textfile;
> 0: jdbc:hive2://testhost.com:1/default> load data inpath 
> '/user/hdfs/test.log' overwrite into table mytable;
> 0: jdbc:hive2://testhost.com:1/default> SET hive.exec.dynamic.partition 
> = true;
> 0: jdbc:hive2://testhost.com:1/default> SET 
> hive.exec.dynamic.partition.mode = nonstrict;
> 0: jdbc:hive2://testhost.com:1/default> create table partmytable(Number 
> INT,IPAddr STRING,Type STRING,Site STRING,OSType STRING,Visit INT,PhModel 

[jira] [Updated] (HIVE-14286) ExplainTask#outputMap usage: not desired call

2016-07-20 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-14286:

Status: Patch Available  (was: Open)

> ExplainTask#outputMap usage: not desired call
> -
>
> Key: HIVE-14286
> URL: https://issues.apache.org/jira/browse/HIVE-14286
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
> Attachments: HIVE-14286.1.patch
>
>
> I've noticed some odd behaviour while fabricating a test:
> in {{ExplainTask#getJSONLogicalPlan}} there is a call to {{#outputMap}} which 
> exchanges the outputJson and the extended boolean values.
> for extended json explain queries there is no difference; but for 
> non-extended json queries there is no output at all.
> i'm separating this small change because it might need qtest updates
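The pitfall described above — two adjacent boolean parameters exchanged at a call site — compiles silently. A minimal, hypothetical sketch (names and the signature are illustrative, not the real ExplainTask#outputMap):

```java
// Hypothetical sketch of the argument-swap pitfall; the real
// ExplainTask#outputMap signature differs.
public class OutputMapSketch {
    // Two adjacent boolean parameters: the compiler cannot catch a swapped call.
    static String render(boolean jsonOutput, boolean extended) {
        if (!jsonOutput) {
            return "text";                       // non-JSON path
        }
        return extended ? "json-extended" : "json";
    }

    public static void main(String[] args) {
        boolean jsonOutput = true, extended = false;
        System.out.println(render(jsonOutput, extended)); // intended call
        System.out.println(render(extended, jsonOutput)); // swapped: compiles, silently wrong
    }
}
```

With the arguments swapped, a non-extended JSON request takes the non-JSON branch entirely, which matches the "no output at all" symptom described in the issue.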



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14286) ExplainTask#outputMap usage: not desired call

2016-07-20 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-14286:

Attachment: HIVE-14286.1.patch

this is a small patch...i think it may break some itests ; but maybe i'm lucky 
;)

> ExplainTask#outputMap usage: not desired call
> -
>
> Key: HIVE-14286
> URL: https://issues.apache.org/jira/browse/HIVE-14286
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
> Attachments: HIVE-14286.1.patch
>
>
> I've noticed some odd behaviour while fabricating a test:
> in {{ExplainTask#getJSONLogicalPlan}} there is a call to {{#outputMap}} which 
> exchanges the outputJson and the extended boolean values.
> for extended json explain queries there is no difference; but for 
> non-extended json queries there is no output at all.
> i'm separating this small change because it might need qtest updates





[jira] [Comment Edited] (HIVE-14289) can't reliably specify hadoop.version for maven build

2016-07-20 Thread Zoltan Haindrich (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386687#comment-15386687
 ] 

Zoltan Haindrich edited comment on HIVE-14289 at 7/20/16 9:46 PM:
--

i've narrowed it down to the {{maven-remote-resources-plugin}} ... not sure why 
it's needed...but with this patch the problem goes away...and i was able to 
install the root project; but it reappeared inside itests...and i was not able 
to skip that...i think this anomaly may be related to the ant plugin...

i've created an ugly patch with this skip..hoping that it may help in any 
further investigations

I can't really see any solutions...if someone would give it a try - maybe a 
fresh mindset can help ;)


was (Author: kgyrtkirk):
i've narrowed it down to the {{maven-remote-resources-plugin}} ... not sure why 
it's needed...but with this patch the problem goes away...and i was able to 
install the root project; but it reappeared inside itests...and i was not able 
to skip that...i think this anomaly maybe related to the ant plugin...

I can't really see any solutions...if someone would give it a try - maybe a 
fresh mindset can help ;)

> can't reliably specify hadoop.version for maven build
> -
>
> Key: HIVE-14289
> URL: https://issues.apache.org/jira/browse/HIVE-14289
> Project: Hive
>  Issue Type: Bug
> Environment: maven 3.3.9
>Reporter: Zoltan Haindrich
> Attachments: experimental.patch
>
>
> if someone would like to build against a different hadoop.version; it looks 
> straightforward to use {{-Dhadoop.version=...}}. however this doesn't "fully" 
> override the default value of the {{hadoop.version}} maven property.
> steps to reproduce:
>   * change hadoop.version to some nonsense:
> {code}
> sed -i 
> "/<hadoop.version>.*<\/hadoop.version>/s|.*|<hadoop.version>nonexistentt</hadoop.version>|"
>  pom.xml
> {code}
>  * specify a valid {{hadoop.version}} from the commandline:
> {code}
> mvn clean package -DskipTests -Dhadoop.version=2.6.1
> {code}
> i'm not sure..but from {{-X}} output i've seen:
> {code}
> [DEBUG] Ant property 'hadoop.version=2.6.1' clashs with an existing Maven 
> property, SKIPPING this Ant property propagation.
> {code}
> the build will fail..or at least it fails for me..
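As a diagnostic (not part of the original report), the maven-help-plugin can show which value of the property is actually in effect before any antrun propagation happens:

```shell
# Ask Maven which value of hadoop.version is in effect
# (the resolved value appears in the plugin's log output):
mvn help:evaluate -Dexpression=hadoop.version

# And with the command-line override that the report says gets dropped:
mvn -Dhadoop.version=2.6.1 help:evaluate -Dexpression=hadoop.version
```

Command-line `-D` properties normally take precedence over pom defaults at property-resolution time; per the report, it is the antrun propagation step where the override is skipped.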





[jira] [Updated] (HIVE-14289) can't reliably specify hadoop.version for maven build

2016-07-20 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-14289:

Attachment: experimental.patch

i've narrowed it down to the {{maven-remote-resources-plugin}} ... not sure why 
it's needed...but with this patch the problem goes away...and i was able to 
install the root project; but it reappeared inside itests...and i was not able 
to skip that...i think this anomaly may be related to the ant plugin...

I can't really see any solutions...if someone would give it a try - maybe a 
fresh mindset can help ;)

> can't reliably specify hadoop.version for maven build
> -
>
> Key: HIVE-14289
> URL: https://issues.apache.org/jira/browse/HIVE-14289
> Project: Hive
>  Issue Type: Bug
> Environment: maven 3.3.9
>Reporter: Zoltan Haindrich
> Attachments: experimental.patch
>
>
> if someone would like to build against a different hadoop.version; it looks 
> straightforward to use {{-Dhadoop.version=...}}. however this doesn't "fully" 
> override the default value of the {{hadoop.version}} maven property.
> steps to reproduce:
>   * change hadoop.version to some nonsense:
> {code}
> sed -i 
> "/<hadoop.version>.*<\/hadoop.version>/s|.*|<hadoop.version>nonexistentt</hadoop.version>|"
>  pom.xml
> {code}
>  * specify a valid {{hadoop.version}} from the commandline:
> {code}
> mvn clean package -DskipTests -Dhadoop.version=2.6.1
> {code}
> i'm not sure..but from {{-X}} output i've seen:
> {code}
> [DEBUG] Ant property 'hadoop.version=2.6.1' clashs with an existing Maven 
> property, SKIPPING this Ant property propagation.
> {code}
> the build will fail..or at least it fails for me..





[jira] [Commented] (HIVE-14224) LLAP rename query specific log files once a query is complete

2016-07-20 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386682#comment-15386682
 ] 

Prasanth Jayachandran commented on HIVE-14224:
--

Unchecked cast to RandomAccessFileAppender. I think for the other file appender 
types we should at least log an error that renaming is unsupported. With the 
async logger these exceptions (any exceptions) will go unnoticed. Alternatively 
we can set up an AsyncLoggerConfig.ExceptionHandler.
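The guarded-cast pattern suggested above can be sketched as follows; the appender types here are stand-ins defined inline for illustration, not the actual Log4j2 classes:

```java
public class AppenderGuardSketch {
    interface Appender {}

    static class RandomAccessFileAppender implements Appender {
        String fileName = "query.log";
        void rename(String newName) { fileName = newName; }
    }

    static class ConsoleAppender implements Appender {}

    // Only rename when the appender actually supports it; report the
    // unsupported case instead of throwing ClassCastException.
    static String tryRename(Appender a, String newName) {
        if (a instanceof RandomAccessFileAppender) {
            ((RandomAccessFileAppender) a).rename(newName);
            return "renamed";
        }
        return "rename unsupported for " + a.getClass().getSimpleName();
    }

    public static void main(String[] args) {
        System.out.println(tryRename(new RandomAccessFileAppender(), "query.log.done"));
        System.out.println(tryRename(new ConsoleAppender(), "query.log.done"));
    }
}
```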



> LLAP rename query specific log files once a query is complete
> -
>
> Key: HIVE-14224
> URL: https://issues.apache.org/jira/browse/HIVE-14224
> Project: Hive
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-14224.02.patch, HIVE-14224.03.patch, 
> HIVE-14224.04.patch, HIVE-14224.wip.01.patch
>
>
> Once a query is complete, rename the query specific log file so that YARN can 
> aggregate the logs (once it's configured to do so).





[jira] [Commented] (HIVE-14214) ORC Schema Evolution and Predicate Push Down do not work together (no rows returned)

2016-07-20 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386665#comment-15386665
 ] 

Prasanth Jayachandran commented on HIVE-14214:
--

Left some comments in RB.

I think many of the complications in the patch can be avoided if we just 
provide a Reader API that returns true when there is conversion: 
Reader.hasConversion(). Behind the scenes we should do all the magic of 
determining whether conversion is required based on the reader schema, the file 
schema and the included columns. In OrcInputFormat the only place we need to 
disable PPD is in the ETL strategy, which creates the ORC reader. If that 
reader's hasConversion() returns true, we should disable PPD. Similarly for the 
task side.
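The proposed contract could be sketched like this; all names are hypothetical and schema types are simplified to strings, so this is not the actual ORC Reader interface:

```java
import java.util.List;

// Hypothetical sketch of the proposed Reader.hasConversion() contract.
public class ConversionSketch {
    // Returns true when the reader schema differs from the file schema
    // for any included column, i.e. schema evolution/conversion is in play.
    static boolean hasConversion(List<String> fileSchema,
                                 List<String> readerSchema,
                                 boolean[] included) {
        for (int i = 0; i < readerSchema.size(); i++) {
            if (included != null && !included[i]) {
                continue; // column not read, so conversion there is irrelevant
            }
            if (i >= fileSchema.size()
                    || !fileSchema.get(i).equals(readerSchema.get(i))) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> file = List.of("int", "string");
        List<String> reader = List.of("bigint", "string");
        // PPD would be disabled here: column 0 converts int -> bigint.
        System.out.println(hasConversion(file, reader, new boolean[]{true, true}));
    }
}
```

Both the ETL split strategy and the task side could then consult this single flag before applying PPD, instead of each re-deriving the conversion check.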

> ORC Schema Evolution and Predicate Push Down do not work together (no rows 
> returned)
> 
>
> Key: HIVE-14214
> URL: https://issues.apache.org/jira/browse/HIVE-14214
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-14214.01.patch, HIVE-14214.02.patch, 
> HIVE-14214.03.patch, HIVE-14214.04.patch, HIVE-14214.05.patch, 
> HIVE-14214.WIP.patch
>
>
> In Schema Evolution, the reader schema is different than the file schema 
> which is used to evaluate predicate push down.





[jira] [Resolved] (HIVE-14275) LineageState#clear throws NullPointerException on branch-1

2016-07-20 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta resolved HIVE-14275.
-
  Resolution: Fixed
Hadoop Flags: Reviewed

Committed to branch-1. Thanks for the review [~thejas].

> LineageState#clear throws NullPointerException on branch-1
> --
>
> Key: HIVE-14275
> URL: https://issues.apache.org/jira/browse/HIVE-14275
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0, 1.2.1
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-14275.1.patch
>
>
> We'll need to add a null check.





[jira] [Updated] (HIVE-14275) LineageState#clear throws NullPointerException on branch-1

2016-07-20 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-14275:

Fix Version/s: 1.3.0

> LineageState#clear throws NullPointerException on branch-1
> --
>
> Key: HIVE-14275
> URL: https://issues.apache.org/jira/browse/HIVE-14275
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0, 1.2.1
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 1.3.0
>
> Attachments: HIVE-14275.1.patch
>
>
> We'll need to add a null check.





[jira] [Updated] (HIVE-14268) INSERT-OVERWRITE is not generating an INSERT event during hive replication

2016-07-20 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-14268:

Attachment: HIVE-14268.4.patch

Re-uploading the .1/.3 patch as .4.patch because the builds.apache.org job 
broke and is not picking it up again.

> INSERT-OVERWRITE is not generating an INSERT event during hive replication
> --
>
> Key: HIVE-14268
> URL: https://issues.apache.org/jira/browse/HIVE-14268
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.2.0
>Reporter: Murali Ramasami
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-14268.2.patch, HIVE-14268.3.patch, 
> HIVE-14268.4.patch, HIVE-14268.patch
>
>
> During Hive replication invoked from falcon, the source cluster did not 
> generate appropriate INSERT events associated with the INSERT OVERWRITE, 
> generating only an ALTER PARTITION event. However, an ALTER PARTITION is a 
> metadata-only event, and thus, only metadata changes were replicated across, 
> modifying the metadata of the destination, while not updating the data. 





[jira] [Commented] (HIVE-14275) LineageState#clear throws NullPointerException on branch-1

2016-07-20 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386595#comment-15386595
 ] 

Vaibhav Gumashta commented on HIVE-14275:
-

Ran precommits locally and I see no issues. Will commit this shortly since we 
won't get a QA run on branch-1.

> LineageState#clear throws NullPointerException on branch-1
> --
>
> Key: HIVE-14275
> URL: https://issues.apache.org/jira/browse/HIVE-14275
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0, 1.2.1
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-14275.1.patch
>
>
> We'll need to add a null check.





[jira] [Commented] (HIVE-14296) Session count is not decremented when HS2 clients do not shutdown cleanly.

2016-07-20 Thread Mohit Sabharwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386585#comment-15386585
 ] 

Mohit Sabharwal commented on HIVE-14296:


Yeah, good to get rid of sessionCount

I think the current code is making no distinction between a connection and a 
session. MetricsConstant.OPEN_CONNECTIONS is getting incremented/decremented
at the connection level. But we are also closing the session when the connection
is detected to be closed/dropped (in deleteContext), which implies that
connection and session are the same thing from the point of view of the metrics, 
which seems fine.

Separately, looks like MetricsConstant.OPEN_CONNECTIONS is used in both
HS2 and HMS, which means this count includes both HS2 and HMS connections when
HMS is embedded in HS2. [~szehon] looks like we need to have a separate metric 
for HMS connections ?

> Session count is not decremented when HS2 clients do not shutdown cleanly.
> --
>
> Key: HIVE-14296
> URL: https://issues.apache.org/jira/browse/HIVE-14296
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-14296.patch
>
>
> When a JDBC client like beeline abruptly disconnects from HS2, the session 
> gets closed on the serverside but the session count reported in the logs is 
> incorrect. It never gets decremented.
> For example, I created 6 connections from the same instance of beeline to HS2.
> {code}
> 2016-07-20T15:05:17,987  INFO [HiveServer2-Handler-Pool: Thread-40] 
> thrift.ThriftCLIService: Opened a session SessionHandle 
> [28b225ee-204f-4b3e-b4fd-0039ef8e276e], current sessions: 1
> .
> 2016-07-20T15:05:24,239  INFO [HiveServer2-Handler-Pool: Thread-45] 
> thrift.ThriftCLIService: Opened a session SessionHandle 
> [1d267de8-ff9a-4e76-ac5c-e82c871588e7], current sessions: 2
> .
> 2016-07-20T15:05:25,710  INFO [HiveServer2-Handler-Pool: Thread-50] 
> thrift.ThriftCLIService: Opened a session SessionHandle 
> [04d53deb-8965-464b-aa3f-7042304cfb54], current sessions: 3
> .
> 2016-07-20T15:05:26,795  INFO [HiveServer2-Handler-Pool: Thread-55] 
> thrift.ThriftCLIService: Opened a session SessionHandle 
> [b4bb8b86-74e1-4e3c-babb-674d34ad1caf], current sessions: 4
> 2016-07-20T15:05:28,160  INFO [HiveServer2-Handler-Pool: Thread-60] 
> thrift.ThriftCLIService: Opened a session SessionHandle 
> [6d3c3ed9-fadb-4673-8c15-3315b7e2995d], current sessions: 5
> .
> 2016-07-20T15:05:29,136  INFO [HiveServer2-Handler-Pool: Thread-65] 
> thrift.ThriftCLIService: Opened a session SessionHandle 
> [88b630c0-f272-427d-8263-febfef8d], current sessions: 6
> {code}
> When I CNTRL-C the beeline process, in the HS2 logs I see
> {code}
> 2016-07-20T15:11:37,858  INFO [HiveServer2-Handler-Pool: Thread-55] 
> thrift.ThriftCLIService: Session disconnected without closing properly. 
> 2016-07-20T15:11:37,858  INFO [HiveServer2-Handler-Pool: Thread-40] 
> thrift.ThriftCLIService: Session disconnected without closing properly. 
> 2016-07-20T15:11:37,858  INFO [HiveServer2-Handler-Pool: Thread-65] 
> thrift.ThriftCLIService: Session disconnected without closing properly. 
> 2016-07-20T15:11:37,858  INFO [HiveServer2-Handler-Pool: Thread-60] 
> thrift.ThriftCLIService: Session disconnected without closing properly. 
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-50] 
> thrift.ThriftCLIService: Session disconnected without closing properly. 
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-45] 
> thrift.ThriftCLIService: Session disconnected without closing properly. 
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-55] 
> thrift.ThriftCLIService: Closing the session: SessionHandle 
> [b4bb8b86-74e1-4e3c-babb-674d34ad1caf]
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-40] 
> thrift.ThriftCLIService: Closing the session: SessionHandle 
> [28b225ee-204f-4b3e-b4fd-0039ef8e276e]
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-65] 
> thrift.ThriftCLIService: Closing the session: SessionHandle 
> [88b630c0-f272-427d-8263-febfef8d]
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-60] 
> thrift.ThriftCLIService: Closing the session: SessionHandle 
> [6d3c3ed9-fadb-4673-8c15-3315b7e2995d]
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-45] 
> thrift.ThriftCLIService: Closing the session: SessionHandle 
> [1d267de8-ff9a-4e76-ac5c-e82c871588e7]
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-50] 
> thrift.ThriftCLIService: Closing the session: SessionHandle 
> [04d53deb-8965-464b-aa3f-7042304cfb54]
> {code}
> The next time I connect to HS2 via beeline, I see
> {code}
> 2016-07-20T15:14:33,679  

[jira] [Updated] (HIVE-14263) Log message when HS2 query is waiting on compile lock

2016-07-20 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-14263:
-
Assignee: Tao Li  (was: Thejas M Nair)

> Log message when HS2 query is waiting on compile lock
> -
>
> Key: HIVE-14263
> URL: https://issues.apache.org/jira/browse/HIVE-14263
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Thejas M Nair
>Assignee: Tao Li
>






[jira] [Commented] (HIVE-14296) Session count is not decremented when HS2 clients do not shutdown cleanly.

2016-07-20 Thread Naveen Gangam (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386536#comment-15386536
 ] 

Naveen Gangam commented on HIVE-14296:
--

[~mohitsabharwal] Good point. It does seem redundant in terms of a count. One 
seems a bit more explicit than the other. The sessionCount currently is just 
being used for this log message. We can just as easily retrieve it from the 
SessionManager.

Also I think there is some value in publishing this metric to the metrics 
system, just like open_connections. What do you think? Thanks

> Session count is not decremented when HS2 clients do not shutdown cleanly.
> --
>
> Key: HIVE-14296
> URL: https://issues.apache.org/jira/browse/HIVE-14296
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-14296.patch
>
>
> When a JDBC client like beeline abruptly disconnects from HS2, the session 
> gets closed on the serverside but the session count reported in the logs is 
> incorrect. It never gets decremented.
> For example, I created 6 connections from the same instance of beeline to HS2.
> {code}
> 2016-07-20T15:05:17,987  INFO [HiveServer2-Handler-Pool: Thread-40] 
> thrift.ThriftCLIService: Opened a session SessionHandle 
> [28b225ee-204f-4b3e-b4fd-0039ef8e276e], current sessions: 1
> .
> 2016-07-20T15:05:24,239  INFO [HiveServer2-Handler-Pool: Thread-45] 
> thrift.ThriftCLIService: Opened a session SessionHandle 
> [1d267de8-ff9a-4e76-ac5c-e82c871588e7], current sessions: 2
> .
> 2016-07-20T15:05:25,710  INFO [HiveServer2-Handler-Pool: Thread-50] 
> thrift.ThriftCLIService: Opened a session SessionHandle 
> [04d53deb-8965-464b-aa3f-7042304cfb54], current sessions: 3
> .
> 2016-07-20T15:05:26,795  INFO [HiveServer2-Handler-Pool: Thread-55] 
> thrift.ThriftCLIService: Opened a session SessionHandle 
> [b4bb8b86-74e1-4e3c-babb-674d34ad1caf], current sessions: 4
> 2016-07-20T15:05:28,160  INFO [HiveServer2-Handler-Pool: Thread-60] 
> thrift.ThriftCLIService: Opened a session SessionHandle 
> [6d3c3ed9-fadb-4673-8c15-3315b7e2995d], current sessions: 5
> .
> 2016-07-20T15:05:29,136  INFO [HiveServer2-Handler-Pool: Thread-65] 
> thrift.ThriftCLIService: Opened a session SessionHandle 
> [88b630c0-f272-427d-8263-febfef8d], current sessions: 6
> {code}
> When I CNTRL-C the beeline process, in the HS2 logs I see
> {code}
> 2016-07-20T15:11:37,858  INFO [HiveServer2-Handler-Pool: Thread-55] 
> thrift.ThriftCLIService: Session disconnected without closing properly. 
> 2016-07-20T15:11:37,858  INFO [HiveServer2-Handler-Pool: Thread-40] 
> thrift.ThriftCLIService: Session disconnected without closing properly. 
> 2016-07-20T15:11:37,858  INFO [HiveServer2-Handler-Pool: Thread-65] 
> thrift.ThriftCLIService: Session disconnected without closing properly. 
> 2016-07-20T15:11:37,858  INFO [HiveServer2-Handler-Pool: Thread-60] 
> thrift.ThriftCLIService: Session disconnected without closing properly. 
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-50] 
> thrift.ThriftCLIService: Session disconnected without closing properly. 
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-45] 
> thrift.ThriftCLIService: Session disconnected without closing properly. 
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-55] 
> thrift.ThriftCLIService: Closing the session: SessionHandle 
> [b4bb8b86-74e1-4e3c-babb-674d34ad1caf]
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-40] 
> thrift.ThriftCLIService: Closing the session: SessionHandle 
> [28b225ee-204f-4b3e-b4fd-0039ef8e276e]
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-65] 
> thrift.ThriftCLIService: Closing the session: SessionHandle 
> [88b630c0-f272-427d-8263-febfef8d]
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-60] 
> thrift.ThriftCLIService: Closing the session: SessionHandle 
> [6d3c3ed9-fadb-4673-8c15-3315b7e2995d]
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-45] 
> thrift.ThriftCLIService: Closing the session: SessionHandle 
> [1d267de8-ff9a-4e76-ac5c-e82c871588e7]
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-50] 
> thrift.ThriftCLIService: Closing the session: SessionHandle 
> [04d53deb-8965-464b-aa3f-7042304cfb54]
> {code}
> The next time I connect to HS2 via beeline, I see
> {code}
> 2016-07-20T15:14:33,679  INFO [HiveServer2-Handler-Pool: Thread-50] 
> thrift.ThriftCLIService: Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V8
> 2016-07-20T15:14:33,710  INFO [HiveServer2-Handler-Pool: Thread-50] 
> session.SessionState: Created HDFS directory: 
> /tmp/hive/hive/d47759e8-df3a-4504-9f28-99ff5247352c
> 

[jira] [Commented] (HIVE-14296) Session count is not decremented when HS2 clients do not shutdown cleanly.

2016-07-20 Thread Mohit Sabharwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386506#comment-15386506
 ] 

Mohit Sabharwal commented on HIVE-14296:


I'm wondering if use of ThriftCLIService#sessionCount is redundant.

Shouldn't we be using SessionManager#getOpenSessionCount() instead?

ThriftBinaryCLIService#deleteContext is already closing the session, which will 
remove the value from SessionManager#handleToSession.

So, it seems to me that ThriftCLIService#sessionCount is not telling us anything
that SessionManager#getOpenSessionCount() isn't already.
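The redundancy argument can be illustrated with a small sketch (names are simplified stand-ins, not the actual HS2 classes): a count derived from the handle map cannot drift, while an independently maintained counter silently can when a decrement is missed on an abrupt disconnect.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SessionCountSketch {
    private final Map<String, Object> handleToSession = new ConcurrentHashMap<>();
    int sessionCount; // separately maintained -- can drift

    void openSession(String handle) {
        handleToSession.put(handle, new Object());
        sessionCount++;
    }

    // Abrupt-disconnect path: the session is removed from the map,
    // but suppose the counter decrement is missed (the reported bug).
    void abruptDisconnect(String handle) {
        handleToSession.remove(handle);
    }

    // Deriving the count from the map is always consistent with reality.
    int getOpenSessionCount() { return handleToSession.size(); }

    public static void main(String[] args) {
        SessionCountSketch s = new SessionCountSketch();
        s.openSession("a");
        s.openSession("b");
        s.abruptDisconnect("b");
        // Counter says 2, map says 1 -- the counter has drifted.
        System.out.println(s.sessionCount + " vs " + s.getOpenSessionCount());
    }
}
```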

> Session count is not decremented when HS2 clients do not shutdown cleanly.
> --
>
> Key: HIVE-14296
> URL: https://issues.apache.org/jira/browse/HIVE-14296
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-14296.patch
>
>
> When a JDBC client like beeline abruptly disconnects from HS2, the session 
> gets closed on the serverside but the session count reported in the logs is 
> incorrect. It never gets decremented.
> For example, I created 6 connections from the same instance of beeline to HS2.
> {code}
> 2016-07-20T15:05:17,987  INFO [HiveServer2-Handler-Pool: Thread-40] 
> thrift.ThriftCLIService: Opened a session SessionHandle 
> [28b225ee-204f-4b3e-b4fd-0039ef8e276e], current sessions: 1
> .
> 2016-07-20T15:05:24,239  INFO [HiveServer2-Handler-Pool: Thread-45] 
> thrift.ThriftCLIService: Opened a session SessionHandle 
> [1d267de8-ff9a-4e76-ac5c-e82c871588e7], current sessions: 2
> .
> 2016-07-20T15:05:25,710  INFO [HiveServer2-Handler-Pool: Thread-50] 
> thrift.ThriftCLIService: Opened a session SessionHandle 
> [04d53deb-8965-464b-aa3f-7042304cfb54], current sessions: 3
> .
> 2016-07-20T15:05:26,795  INFO [HiveServer2-Handler-Pool: Thread-55] 
> thrift.ThriftCLIService: Opened a session SessionHandle 
> [b4bb8b86-74e1-4e3c-babb-674d34ad1caf], current sessions: 4
> 2016-07-20T15:05:28,160  INFO [HiveServer2-Handler-Pool: Thread-60] 
> thrift.ThriftCLIService: Opened a session SessionHandle 
> [6d3c3ed9-fadb-4673-8c15-3315b7e2995d], current sessions: 5
> .
> 2016-07-20T15:05:29,136  INFO [HiveServer2-Handler-Pool: Thread-65] 
> thrift.ThriftCLIService: Opened a session SessionHandle 
> [88b630c0-f272-427d-8263-febfef8d], current sessions: 6
> {code}
> When I CNTRL-C the beeline process, in the HS2 logs I see
> {code}
> 2016-07-20T15:11:37,858  INFO [HiveServer2-Handler-Pool: Thread-55] 
> thrift.ThriftCLIService: Session disconnected without closing properly. 
> 2016-07-20T15:11:37,858  INFO [HiveServer2-Handler-Pool: Thread-40] 
> thrift.ThriftCLIService: Session disconnected without closing properly. 
> 2016-07-20T15:11:37,858  INFO [HiveServer2-Handler-Pool: Thread-65] 
> thrift.ThriftCLIService: Session disconnected without closing properly. 
> 2016-07-20T15:11:37,858  INFO [HiveServer2-Handler-Pool: Thread-60] 
> thrift.ThriftCLIService: Session disconnected without closing properly. 
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-50] 
> thrift.ThriftCLIService: Session disconnected without closing properly. 
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-45] 
> thrift.ThriftCLIService: Session disconnected without closing properly. 
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-55] 
> thrift.ThriftCLIService: Closing the session: SessionHandle 
> [b4bb8b86-74e1-4e3c-babb-674d34ad1caf]
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-40] 
> thrift.ThriftCLIService: Closing the session: SessionHandle 
> [28b225ee-204f-4b3e-b4fd-0039ef8e276e]
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-65] 
> thrift.ThriftCLIService: Closing the session: SessionHandle 
> [88b630c0-f272-427d-8263-febfef8d]
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-60] 
> thrift.ThriftCLIService: Closing the session: SessionHandle 
> [6d3c3ed9-fadb-4673-8c15-3315b7e2995d]
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-45] 
> thrift.ThriftCLIService: Closing the session: SessionHandle 
> [1d267de8-ff9a-4e76-ac5c-e82c871588e7]
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-50] 
> thrift.ThriftCLIService: Closing the session: SessionHandle 
> [04d53deb-8965-464b-aa3f-7042304cfb54]
> {code}
> The next time I connect to HS2 via beeline, I see
> {code}
> 2016-07-20T15:14:33,679  INFO [HiveServer2-Handler-Pool: Thread-50] 
> thrift.ThriftCLIService: Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V8
> 2016-07-20T15:14:33,710  INFO [HiveServer2-Handler-Pool: Thread-50] 
> session.SessionState: Created HDFS directory: 
> 

[jira] [Comment Edited] (HIVE-11516) Fix JDBC compliance issues

2016-07-20 Thread Tao Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386494#comment-15386494
 ] 

Tao Li edited comment on HIVE-11516 at 7/20/16 7:42 PM:


We probably want to implement the methods below at a higher priority than the 
other methods, since they were reported in both the Mondrian discussion 
and [https://issues.apache.org/jira/browse/HIVE-4806].

HiveDatabaseMetaData.isReadOnly()
HiveDatabaseMetaData.supportsResultSetConcurrency()



was (Author: taoli-hwx):
We probably want to try to implement these methods below as a higher priority 
compared with other methods, since it was reported in both Mondrian discussion 
and [#4806].

HiveDatabaseMetaData.isReadOnly()
HiveDatabaseMetaData.supportsResultSetConcurrency()


> Fix JDBC compliance issues
> --
>
> Key: HIVE-11516
> URL: https://issues.apache.org/jira/browse/HIVE-11516
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Reporter: Thejas M Nair
>Assignee: Tao Li
>
> There are several methods in JDBC driver implementation that still throw 
> UnSupportedException. This and other jdbc spec non compliant behavior causes 
> issues when JDBC driver is used with external tools and libraries.
> For example, Jmeter calls HiveStatement.setQueryTimeout and this was 
> resulting in an exception. HIVE-10726 makes it possible to have a workaround 
> for this.
> Creating this jira for ease of tracking such issues. Please mark new jiras as 
> blocking this one.





[jira] [Commented] (HIVE-11516) Fix JDBC compliance issues

2016-07-20 Thread Tao Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386494#comment-15386494
 ] 

Tao Li commented on HIVE-11516:
---

We should probably prioritize implementing the methods below over the other 
methods, since they were reported in both the Mondrian discussion and 
[#4806].

HiveDatabaseMetaData.isReadOnly()
HiveDatabaseMetaData.supportsResultSetConcurrency()


> Fix JDBC compliance issues
> --
>
> Key: HIVE-11516
> URL: https://issues.apache.org/jira/browse/HIVE-11516
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Reporter: Thejas M Nair
>Assignee: Tao Li
>
> There are several methods in the JDBC driver implementation that still throw 
> UnSupportedException. This and other JDBC-spec non-compliant behavior causes 
> issues when the JDBC driver is used with external tools and libraries.
> For example, JMeter calls HiveStatement.setQueryTimeout, and this used to 
> result in an exception; HIVE-10726 makes a workaround possible.
> Creating this jira for ease of tracking such issues. Please mark new jiras as 
> blocking this one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14297) OrcRecordUpdater floods logs trying to create _orc_acid_version file

2016-07-20 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-14297:
--
Status: Patch Available  (was: Open)

> OrcRecordUpdater floods logs trying to create _orc_acid_version file
> 
>
> Key: HIVE-14297
> URL: https://issues.apache.org/jira/browse/HIVE-14297
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.3.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-14297.patch
>
>
> {noformat}
> try {
>   FSDataOutputStream strm = fs.create(new Path(path, ACID_FORMAT), false);
>   strm.writeInt(ORC_ACID_VERSION);
>   strm.close();
> } catch (IOException ioe) {
>   if (LOG.isDebugEnabled()) {
> LOG.debug("Failed to create " + path + "/" + ACID_FORMAT + " with " +
> ioe);
>   }
> }
> {noformat}
> This file is created in the table/partition dir, so in streaming ingest 
> cases the create is attempted repeatedly and HDFS prints a long stack trace with a WARN:
> {noformat}
> 2016-07-18 09:22:13.051 o.a.h.i.r.RetryInvocationHandler [WARN] Exception 
> while invoking ClientNamenodeProtocolTranslatorPB.create over null. Not 
> retrying because try once and fail.
> org.apache.hadoop.ipc.RemoteException: 
> /apps/hive/warehouse/stormdb.db/store_sales/dt=2016%2F07%2F18/_orc_acid_version
>  for client 172.22.111.42 already exists
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2639)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2526)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2410)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:729)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:405)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552) 
> ~[stormjar.jar:?]
>   at org.apache.hadoop.ipc.Client.call(Client.java:1496) ~[stormjar.jar:?]
>   at org.apache.hadoop.ipc.Client.call(Client.java:1396) ~[stormjar.jar:?]
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
>  ~[stormjar.jar:?]
>   at com.sun.proxy.$Proxy44.create(Unknown Source) ~[?:?]
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:311)
>  ~[stormjar.jar:?]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_77]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_77]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_77]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_77]
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
>  [stormjar.jar:?]
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
>  [stormjar.jar:?]
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
>  [stormjar.jar:?]
>   at com.sun.proxy.$Proxy45.create(Unknown Source) [?:?]
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1719)
>  [stormjar.jar:?]
>   at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1699) 
> [stormjar.jar:?]
>   at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1634) 
> [stormjar.jar:?]
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:478)
>  [stormjar.jar:?]
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:474)
>  [stormjar.jar:?]
>   at 
> 
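The flood comes from calling fs.create(path, false) on a path that already exists: every attempt after the first throws, and the HDFS client logs the WARN above before the debug-level catch ever runs. One way to avoid it is to pre-check for the file and treat "already exists" as success. A rough sketch of that pattern using plain java.nio, with the local filesystem standing in for HDFS (names are illustrative, not the actual OrcRecordUpdater fix):

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

public class VersionFileSketch {

    // Create a marker file once; treat "already exists" as success
    // instead of triggering an exception (and a logged stack trace)
    // on every subsequent attempt.
    static boolean ensureVersionFile(Path marker) throws IOException {
        if (Files.exists(marker)) {        // cheap pre-check avoids the exception
            return false;                  // nothing created this time
        }
        try {
            Files.createFile(marker);      // atomic create-if-absent
            return true;
        } catch (FileAlreadyExistsException race) {
            return false;                  // another writer won the race; fine
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("acid");
        Path marker = dir.resolve("_orc_acid_version");
        boolean first = ensureVersionFile(marker);
        boolean second = ensureVersionFile(marker);
        if (!first || second) throw new AssertionError("expected create-once semantics");
        System.out.println("ok");
    }
}
```

On HDFS the pre-check costs one extra NameNode round trip, but only the first writer per table/partition ever attempts the create, so the repeated RetryInvocationHandler warnings disappear.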

[jira] [Updated] (HIVE-14297) OrcRecordUpdater floods logs trying to create _orc_acid_version file

2016-07-20 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-14297:
--
Attachment: HIVE-14297.patch

> OrcRecordUpdater floods logs trying to create _orc_acid_version file
> 
>
> Key: HIVE-14297
> URL: https://issues.apache.org/jira/browse/HIVE-14297
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.3.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-14297.patch
>
>

[jira] [Updated] (HIVE-14296) Session count is not decremented when HS2 clients do not shutdown cleanly.

2016-07-20 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-14296:
-
Status: Patch Available  (was: Open)

> Session count is not decremented when HS2 clients do not shutdown cleanly.
> --
>
> Key: HIVE-14296
> URL: https://issues.apache.org/jira/browse/HIVE-14296
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-14296.patch
>
>
> When a JDBC client like beeline abruptly disconnects from HS2, the session 
> gets closed on the server side, but the session count reported in the logs is 
> incorrect: it never gets decremented.
> For example, I created 6 connections from the same instance of beeline to HS2.
> {code}
> 2016-07-20T15:05:17,987  INFO [HiveServer2-Handler-Pool: Thread-40] 
> thrift.ThriftCLIService: Opened a session SessionHandle 
> [28b225ee-204f-4b3e-b4fd-0039ef8e276e], current sessions: 1
> .
> 2016-07-20T15:05:24,239  INFO [HiveServer2-Handler-Pool: Thread-45] 
> thrift.ThriftCLIService: Opened a session SessionHandle 
> [1d267de8-ff9a-4e76-ac5c-e82c871588e7], current sessions: 2
> .
> 2016-07-20T15:05:25,710  INFO [HiveServer2-Handler-Pool: Thread-50] 
> thrift.ThriftCLIService: Opened a session SessionHandle 
> [04d53deb-8965-464b-aa3f-7042304cfb54], current sessions: 3
> .
> 2016-07-20T15:05:26,795  INFO [HiveServer2-Handler-Pool: Thread-55] 
> thrift.ThriftCLIService: Opened a session SessionHandle 
> [b4bb8b86-74e1-4e3c-babb-674d34ad1caf], current sessions: 4
> 2016-07-20T15:05:28,160  INFO [HiveServer2-Handler-Pool: Thread-60] 
> thrift.ThriftCLIService: Opened a session SessionHandle 
> [6d3c3ed9-fadb-4673-8c15-3315b7e2995d], current sessions: 5
> .
> 2016-07-20T15:05:29,136  INFO [HiveServer2-Handler-Pool: Thread-65] 
> thrift.ThriftCLIService: Opened a session SessionHandle 
> [88b630c0-f272-427d-8263-febfef8d], current sessions: 6
> {code}
> When I Ctrl-C the beeline process, I see the following in the HS2 logs:
> {code}
> 2016-07-20T15:11:37,858  INFO [HiveServer2-Handler-Pool: Thread-55] 
> thrift.ThriftCLIService: Session disconnected without closing properly. 
> 2016-07-20T15:11:37,858  INFO [HiveServer2-Handler-Pool: Thread-40] 
> thrift.ThriftCLIService: Session disconnected without closing properly. 
> 2016-07-20T15:11:37,858  INFO [HiveServer2-Handler-Pool: Thread-65] 
> thrift.ThriftCLIService: Session disconnected without closing properly. 
> 2016-07-20T15:11:37,858  INFO [HiveServer2-Handler-Pool: Thread-60] 
> thrift.ThriftCLIService: Session disconnected without closing properly. 
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-50] 
> thrift.ThriftCLIService: Session disconnected without closing properly. 
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-45] 
> thrift.ThriftCLIService: Session disconnected without closing properly. 
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-55] 
> thrift.ThriftCLIService: Closing the session: SessionHandle 
> [b4bb8b86-74e1-4e3c-babb-674d34ad1caf]
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-40] 
> thrift.ThriftCLIService: Closing the session: SessionHandle 
> [28b225ee-204f-4b3e-b4fd-0039ef8e276e]
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-65] 
> thrift.ThriftCLIService: Closing the session: SessionHandle 
> [88b630c0-f272-427d-8263-febfef8d]
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-60] 
> thrift.ThriftCLIService: Closing the session: SessionHandle 
> [6d3c3ed9-fadb-4673-8c15-3315b7e2995d]
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-45] 
> thrift.ThriftCLIService: Closing the session: SessionHandle 
> [1d267de8-ff9a-4e76-ac5c-e82c871588e7]
> 2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-50] 
> thrift.ThriftCLIService: Closing the session: SessionHandle 
> [04d53deb-8965-464b-aa3f-7042304cfb54]
> {code}
> The next time I connect to HS2 via beeline, I see
> {code}
> 2016-07-20T15:14:33,679  INFO [HiveServer2-Handler-Pool: Thread-50] 
> thrift.ThriftCLIService: Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V8
> 2016-07-20T15:14:33,710  INFO [HiveServer2-Handler-Pool: Thread-50] 
> session.SessionState: Created HDFS directory: 
> /tmp/hive/hive/d47759e8-df3a-4504-9f28-99ff5247352c
> 2016-07-20T15:14:33,725  INFO [HiveServer2-Handler-Pool: Thread-50] 
> session.SessionState: Created local directory: 
> /var/folders/_3/0w477k4j5bjd6h967rw4vflwgp/T/ngangam/d47759e8-df3a-4504-9f28-99ff5247352c
> 2016-07-20T15:14:33,735  INFO [HiveServer2-Handler-Pool: Thread-50] 
> session.SessionState: Created HDFS directory: 
> 
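One way to keep the count accurate is to funnel both the clean close() path and the abrupt-disconnect cleanup through a single, idempotent decrement, so a session that is torn down twice (or never cleanly) still adjusts the counter exactly once. A toy sketch of that pattern — class and field names are invented for illustration, not the actual ThriftCLIService code:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

public class SessionCounterSketch {
    // Global open-session counter, as reported in the "current sessions: N" logs.
    static final AtomicInteger OPEN_SESSIONS = new AtomicInteger();

    static final class Session {
        private final AtomicBoolean closed = new AtomicBoolean(false);

        Session() {
            OPEN_SESSIONS.incrementAndGet();
        }

        // Called from both the clean close() path and the
        // disconnect-cleanup path; decrements exactly once.
        void close() {
            if (closed.compareAndSet(false, true)) {
                OPEN_SESSIONS.decrementAndGet();
            }
        }
    }

    public static void main(String[] args) {
        Session s = new Session();
        if (OPEN_SESSIONS.get() != 1) throw new AssertionError();
        s.close();   // clean close
        s.close();   // abrupt-disconnect cleanup firing afterwards
        if (OPEN_SESSIONS.get() != 0)
            throw new AssertionError("count must reach zero, not stay stale");
        System.out.println("ok");
    }
}
```

The compareAndSet guard is what makes double-invocation safe: without it, the "disconnected without closing properly" path either skips the decrement (stale count) or double-decrements (negative count).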

[jira] [Assigned] (HIVE-14297) OrcRecordUpdater floods logs trying to create _orc_acid_version file

2016-07-20 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman reassigned HIVE-14297:
-

Assignee: Eugene Koifman

> OrcRecordUpdater floods logs trying to create _orc_acid_version file
> 
>
> Key: HIVE-14297
> URL: https://issues.apache.org/jira/browse/HIVE-14297
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.3.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>

[jira] [Updated] (HIVE-14296) Session count is not decremented when HS2 clients do not shutdown cleanly.

2016-07-20 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-14296:
-
Attachment: HIVE-14296.patch

> Session count is not decremented when HS2 clients do not shutdown cleanly.
> --
>
> Key: HIVE-14296
> URL: https://issues.apache.org/jira/browse/HIVE-14296
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-14296.patch
>
>

[jira] [Comment Edited] (HIVE-14251) Union All of different types resolves to incorrect data

2016-07-20 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386476#comment-15386476
 ] 

Aihua Xu edited comment on HIVE-14251 at 7/20/16 7:21 PM:
--

Thanks for the suggestion. I went through the SQL standard. It seems the 
standard treats UNION as another union join but doesn't explicitly specify the 
behavior for mismatched types. Per its docs, Oracle requires the union branches 
to be in the same type group (I don't have an instance to try out either), and 
the doc is clear that it won't do implicit type conversion across type groups 
(https://docs.oracle.com/cd/B19306_01/server.102/b14200/queries004.htm). 

 


was (Author: aihuaxu):
Thanks for the suggestion. I went through the sql standard. Seems like the 
standard is treating union as another union join, but doesn't explicitly 
mention the behavior of mismatched type. Oracle requires the union types are in 
the same type group from the doc (I don't have one either to try out either), 
but the doc is pretty clear that it won't do implicit type conversion across 
type groups 
(https://docs.oracle.com/cd/B19306_01/server.102/b14200/queries004.htm). 

 

> Union All of different types resolves to incorrect data
> ---
>
> Key: HIVE-14251
> URL: https://issues.apache.org/jira/browse/HIVE-14251
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 2.0.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-14251.1.patch
>
>
> create table src(c1 date, c2 int, c3 double);
> insert into src values ('2016-01-01',5,1.25);
> select * from 
> (select c1 from src union all
> select c2 from src union all
> select c3 from src) t;
> It returns NULL for the c1 values. It seems the common data type is resolved 
> to that of the last column, c3, which is double.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
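The symptom above is consistent with resolving the common type only against the last branch instead of folding it across all branches of the UNION ALL. A toy illustration of the folding approach — the three-type lattice and its widening rules are invented for illustration, not Hive's actual type system:

```java
public class CommonTypeSketch {
    enum T { DATE, INT, DOUBLE, STRING }

    // Hypothetical pairwise rule: numeric types widen to DOUBLE;
    // anything crossing type groups falls back to STRING.
    static T common(T a, T b) {
        if (a == b) return a;
        boolean aNum = (a == T.INT || a == T.DOUBLE);
        boolean bNum = (b == T.INT || b == T.DOUBLE);
        if (aNum && bNum) return T.DOUBLE;
        return T.STRING;
    }

    // Correct resolution folds the pairwise rule over every branch,
    // rather than keeping only the result of the last comparison.
    static T commonOfAll(T... branches) {
        T acc = branches[0];
        for (int i = 1; i < branches.length; i++) {
            acc = common(acc, branches[i]);
        }
        return acc;
    }

    public static void main(String[] args) {
        // date/int/double branches share no type group, so the fold
        // lands on STRING rather than the last branch's DOUBLE.
        if (commonOfAll(T.DATE, T.INT, T.DOUBLE) != T.STRING)
            throw new AssertionError("common type should fall back across groups");
        if (commonOfAll(T.INT, T.DOUBLE) != T.DOUBLE)
            throw new AssertionError();
        System.out.println("ok");
    }
}
```

With the fold, the date branch forces the cross-group fallback, so no branch's values need to be coerced into an incompatible type and silently become NULL.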


[jira] [Commented] (HIVE-14251) Union All of different types resolves to incorrect data

2016-07-20 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386476#comment-15386476
 ] 

Aihua Xu commented on HIVE-14251:
-

Thanks for the suggestion. I went through the SQL standard. It seems the 
standard treats UNION as another union join but doesn't explicitly specify the 
behavior for mismatched types. Per its docs, Oracle requires the union branches 
to be in the same type group (I don't have an instance to try out either), 
and the doc is clear that it won't do implicit type conversion across 
type groups 
(https://docs.oracle.com/cd/B19306_01/server.102/b14200/queries004.htm). 

 

> Union All of different types resolves to incorrect data
> ---
>
> Key: HIVE-14251
> URL: https://issues.apache.org/jira/browse/HIVE-14251
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 2.0.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-14251.1.patch
>
>
> create table src(c1 date, c2 int, c3 double);
> insert into src values ('2016-01-01',5,1.25);
> select * from 
> (select c1 from src union all
> select c2 from src union all
> select c3 from src) t;
> It returns NULL for the c1 values. It seems the common data type is resolved 
> to that of the last column, c3, which is double.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14275) LineageState#clear throws NullPointerException on branch-1

2016-07-20 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386471#comment-15386471
 ] 

Thejas M Nair commented on HIVE-14275:
--

+1

> LineageState#clear throws NullPointerException on branch-1
> --
>
> Key: HIVE-14275
> URL: https://issues.apache.org/jira/browse/HIVE-14275
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0, 1.2.1
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-14275.1.patch
>
>
> We'll need to add a null check.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
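A sketch of what the null-checked clear() might look like — the field name and class shape here are assumptions for illustration, not the actual LineageState code:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for LineageState: clear() must tolerate a field
// that was never initialized (the branch-1 NPE scenario).
public class LineageStateSketch {
    Map<String, Object> index = new HashMap<>();

    void clear() {
        if (index != null) {   // the missing null check
            index.clear();
        }
    }

    public static void main(String[] args) {
        LineageStateSketch s = new LineageStateSketch();
        s.clear();             // normal path
        s.index = null;        // simulate the uninitialized state
        s.clear();             // must not throw NullPointerException
        System.out.println("ok");
    }
}
```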


[jira] [Commented] (HIVE-13879) add HiveAuthzContext to grant/revoke methods in HiveAuthorizer api

2016-07-20 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386451#comment-15386451
 ] 

Thejas M Nair commented on HIVE-13879:
--

[~madhan.neethiraj] Can you please review the API change?


> add HiveAuthzContext to grant/revoke methods in HiveAuthorizer api
> --
>
> Key: HIVE-13879
> URL: https://issues.apache.org/jira/browse/HIVE-13879
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Attachments: HIVE-13879.1.patch
>
>
> HiveAuthzContext provides useful information about the context of the 
> commands, such as the command string and client IP address. However, 
> it is currently available only to the checkPrivileges and filterListCmdObjects API calls.
> It should also be made available to other API calls, such as the grant/revoke 
> and role-management methods.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
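A rough sketch of the proposed shape of the change: a context object appended to the grant/revoke signatures so implementations (e.g. an auditing authorizer) can see the command string and client address. All names and signatures below are invented for illustration, not the actual HiveAuthorizer API from the patch:

```java
// Hypothetical shape of the API change: the request context carried as an
// extra argument so grant/revoke implementations can audit command and IP.
public class AuthzContextSketch {

    // Stand-in for Hive's HiveAuthzContext.
    static final class Context {
        final String commandString;
        final String ipAddress;
        Context(String commandString, String ipAddress) {
            this.commandString = commandString;
            this.ipAddress = ipAddress;
        }
    }

    interface Authorizer {
        // Before (sketch): grantPrivileges(principal, privilege)
        // After  (sketch): the same call with the request context appended.
        void grantPrivileges(String principal, String privilege, Context ctx);
    }

    // Example of what an implementation can now record per grant.
    static String formatAudit(String principal, String privilege, Context ctx) {
        return principal + ":" + privilege + "@" + ctx.ipAddress;
    }

    public static void main(String[] args) {
        StringBuilder audit = new StringBuilder();
        Authorizer auth = (principal, privilege, ctx) ->
            audit.append(formatAudit(principal, privilege, ctx));
        auth.grantPrivileges("alice", "SELECT",
                             new Context("GRANT SELECT ON t TO alice", "10.0.0.1"));
        if (!audit.toString().equals("alice:SELECT@10.0.0.1"))
            throw new AssertionError();
        System.out.println("ok");
    }
}
```

Appending the context keeps existing call sites mechanical to update while giving plugin authorizers (such as Ranger) the same request metadata that checkPrivileges already receives.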


[jira] [Updated] (HIVE-14292) ACID table creation fails on mysql with MySQLIntegrityConstraintViolationException

2016-07-20 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-14292:
--
Attachment: HIVE-14292.2.patch

Patch 2 is the same as patch 1; the previous one disappeared from the build queue.

> ACID table creation fails on mysql with 
> MySQLIntegrityConstraintViolationException
> --
>
> Key: HIVE-14292
> URL: https://issues.apache.org/jira/browse/HIVE-14292
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.3.0, 2.1.0
> Environment: MySQL
>Reporter: Deepesh Khandelwal
>Assignee: Eugene Koifman
> Attachments: HIVE-14292.2.patch, HIVE-14292.patch
>
>
> While creating an ACID table, I ran into the following error:
> {noformat}
> >>>  create table acidcount1 (id int) 
> clustered by (id) into 2 buckets 
> stored as orc 
> tblproperties('transactional'='true');
> INFO  : Compiling 
> command(queryId=hive_20160719105944_bfe65377-59fa-4e17-941e-1f86b8daca15): 
> create table acidcount1 (id int) 
> clustered by (id) into 2 buckets 
> stored as orc 
> tblproperties('transactional'='true')
> INFO  : Semantic Analysis Completed
> INFO  : Returning Hive schema: Schema(fieldSchemas:null, properties:null)
> INFO  : Completed compiling 
> command(queryId=hive_20160719105944_bfe65377-59fa-4e17-941e-1f86b8daca15); 
> Time taken: 0.111 seconds
> Error: Error running query: java.lang.RuntimeException: Unable to lock 
> 'CheckLock' due to: Duplicate entry 'CheckLock-0' for key 'PRIMARY' 
> (SQLState=23000, ErrorCode=1062) (state=,code=0)
> Aborting command set because "force" is false and command failed: "create 
> table acidcount1 (id int) 
> clustered by (id) into 2 buckets 
> stored as orc 
> tblproperties('transactional'='true');"
> {noformat}
> Saw the following detailed stack in the server log:
> {noformat}
> 2016-07-19T10:59:46,213 ERROR [HiveServer2-Background-Pool: Thread-463]: 
> metastore.RetryingHMSHandler (RetryingHMSHandler.java:invokeInternal(196)) - 
> java.lang.RuntimeException: Unable to lock 'CheckLock' due to: Duplicate 
> entry 'CheckLock-0' for key 'PRIMARY' (SQLState=23000, ErrorCode=1062)
> at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.acquireLock(TxnHandler.java:3235)
> at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.checkLock(TxnHandler.java:2309)
> at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.checkLockWithRetry(TxnHandler.java:1012)
> at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:784)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.lock(HiveMetaStore.java:5941)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)
> at com.sun.proxy.$Proxy26.lock(Unknown Source)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.lock(HiveMetaStoreClient.java:2109)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:154)
> at com.sun.proxy.$Proxy28.lock(Unknown Source)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2259)
> at com.sun.proxy.$Proxy28.lock(Unknown Source)
> at 
> org.apache.hadoop.hive.ql.lockmgr.DbTxnManager$SynchronizedMetaStoreClient.lock(DbTxnManager.java:740)
> at 
> org.apache.hadoop.hive.ql.lockmgr.DbLockManager.lock(DbLockManager.java:103)
> at 
> org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.acquireLocks(DbTxnManager.java:341)
> at 
> org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.acquireLocksWithHeartbeatDelay(DbTxnManager.java:357)
>   

[jira] [Updated] (HIVE-14292) ACID table creation fails on mysql with MySQLIntegrityConstraintViolationException

2016-07-20 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-14292:
--
Status: Patch Available  (was: Open)


[jira] [Updated] (HIVE-14292) ACID table creation fails on mysql with MySQLIntegrityConstraintViolationException

2016-07-20 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-14292:
--
Status: Open  (was: Patch Available)


[jira] [Commented] (HIVE-14279) fix mvn test TestHiveMetaStore.testTransactionalValidation

2016-07-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386275#comment-15386275
 ] 

Hive QA commented on HIVE-14279:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12818804/HIVE-14279.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 10 failed/errored test(s), 10335 tests 
executed
*Failed tests:*
{noformat}
TestMsgBusConnection - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.llap.daemon.impl.TestLlapTokenChecker.testCheckPermissions
org.apache.hadoop.hive.llap.daemon.impl.TestLlapTokenChecker.testGetToken
org.apache.hadoop.hive.metastore.TestMetaStoreMetrics.testConnections
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/581/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/581/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-581/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 10 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12818804 - PreCommit-HIVE-MASTER-Build

> fix mvn test TestHiveMetaStore.testTransactionalValidation 
> ---
>
> Key: HIVE-14279
> URL: https://issues.apache.org/jira/browse/HIVE-14279
> Project: Hive
>  Issue Type: Improvement
>  Components: Tests
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Minor
> Attachments: HIVE-14279.1.patch
>
>
> This test doesn't drop its table, and because there are a few subclasses of 
> it, the second one will fail because the table already exists. For example:
> {code}
> mvn clean package  -Pitests 
> -Dtest=TestSetUGIOnBothClientServer,TestSetUGIOnOnlyClient
> {code}
> will cause:
> {code}
> org.apache.hadoop.hive.metastore.api.AlreadyExistsException: Table acidTable 
> already exists
> {code}
> for the second test.
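A minimal, self-contained sketch of the failure mode (using a toy in-memory "metastore", not the real HiveMetaStoreClient API): without a cleanup step the second run hits "already exists", while dropping the table between runs, as a tearDown would, makes both runs pass.

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of the bug: testTransactionalValidation creates acidTable but
// never drops it, so the next subclass run fails on the duplicate create.
public class TableCleanupSketch {
    private final Set<String> tables = new HashSet<>();

    public void createTable(String name) {
        if (!tables.add(name)) {
            throw new IllegalStateException("Table " + name + " already exists");
        }
    }

    public void dropTable(String name) {
        tables.remove(name);
    }

    // Runs the "test" body twice; with cleanup=true (the proposed fix:
    // dropping the table in tearDown) both runs succeed.
    public static boolean runTwice(boolean cleanup) {
        TableCleanupSketch ms = new TableCleanupSketch();
        for (int run = 0; run < 2; run++) {
            try {
                ms.createTable("acidTable");
            } catch (IllegalStateException e) {
                return false; // second run fails without cleanup
            }
            if (cleanup) {
                ms.dropTable("acidTable");
            }
        }
        return true;
    }
}
```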





[jira] [Commented] (HIVE-14242) Backport ORC-53 to Hive

2016-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386255#comment-15386255
 ] 

ASF GitHub Bot commented on HIVE-14242:
---

Github user omalley closed the pull request at:

https://github.com/apache/hive/pull/86


> Backport ORC-53 to Hive
> ---
>
> Key: HIVE-14242
> URL: https://issues.apache.org/jira/browse/HIVE-14242
> Project: Hive
>  Issue Type: Bug
>  Components: ORC
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Fix For: 2.2.0
>
> Attachments: HIVE-14242.patch
>
>
> ORC-53 was mostly about the mapreduce shims for ORC, but it fixed a problem 
> in TypeDescription that should be backported to Hive.





[jira] [Updated] (HIVE-14242) Backport ORC-53 to Hive

2016-07-20 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HIVE-14242:
-
   Resolution: Fixed
Fix Version/s: 2.2.0
   Status: Resolved  (was: Patch Available)

I just committed this. Thanks for the review, Prasanth.






[jira] [Resolved] (HIVE-13464) Backport changes to storage-api into branch 2 for release into 2.0.1

2016-07-20 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley resolved HIVE-13464.
--
Resolution: Won't Fix

> Backport changes to storage-api into branch 2 for release into 2.0.1
> 
>
> Key: HIVE-13464
> URL: https://issues.apache.org/jira/browse/HIVE-13464
> Project: Hive
>  Issue Type: Bug
>  Components: storage-api
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
>
> To release ORC as a separate project, backporting the safe changes for 
> storage-api to 2.0.1 will minimize the disruption.





[jira] [Commented] (HIVE-14275) LineageState#clear throws NullPointerException on branch-1

2016-07-20 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386245#comment-15386245
 ] 

Vaibhav Gumashta commented on HIVE-14275:
-

[~thejas] Small patch for branch-1. Can you take a look please?

> LineageState#clear throws NullPointerException on branch-1
> --
>
> Key: HIVE-14275
> URL: https://issues.apache.org/jira/browse/HIVE-14275
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0, 1.2.1
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-14275.1.patch
>
>
> We'll need to add a null check.





[jira] [Updated] (HIVE-14275) LineageState#clear throws NullPointerException on branch-1

2016-07-20 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-14275:

Attachment: HIVE-14275.1.patch






[jira] [Commented] (HIVE-10574) Metastore to handle expired tokens inline

2016-07-20 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386117#comment-15386117
 ] 

Aihua Xu commented on HIVE-10574:
-

I spent some time understanding the logic. It seems we only validate the token 
when the client connects to the metastore. After the client gets the token, it 
is free to talk to the metastore even though the token may have expired.

To fully solve the issue, similar to an HTTP request, we need to carry the token 
with each request to the metastore, and the metastore needs to validate it 
before processing each request. If the token is expired, the client should be 
notified and fetch a new token. If the token is near-expired (e.g., past half of 
its lifetime), we should renew it to a full lifetime.

Hope it makes sense.
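The per-request policy described above can be sketched as a small pure function (names, units, and the half-life threshold are illustrative assumptions, not the actual metastore token API):

```java
// Hypothetical sketch of the per-request token policy: reject expired tokens,
// renew tokens past half of their lifetime, accept the rest. Times are millis.
public class TokenPolicy {
    public static final int REJECT = 0, ACCEPT = 1, RENEW = 2;

    public static int check(long issuedAt, long lifetimeMs, long now) {
        long age = now - issuedAt;
        if (age >= lifetimeMs) {
            return REJECT;      // expired: client must fetch a new token
        }
        if (age >= lifetimeMs / 2) {
            return RENEW;       // near-expired: renew to a full lifetime
        }
        return ACCEPT;          // still fresh: process the request as-is
    }
}
```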

> Metastore to handle expired tokens inline
> -
>
> Key: HIVE-10574
> URL: https://issues.apache.org/jira/browse/HIVE-10574
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Xuefu Zhang
>Assignee: Aihua Xu
>
> This is a followup for HIVE-9625.
> Metastore has a garbage collection thread that removes expired tokens. 
> However that still leaves a window (1 hour by default) where clients could 
> retrieve a token that's expired or about to expire. An option is for 
> metastore handle expired tokens inline. 





[jira] [Commented] (HIVE-14204) Optimize loading dynamic partitions

2016-07-20 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386112#comment-15386112
 ] 

Ashutosh Chauhan commented on HIVE-14204:
-

I see you added synchronized for the metastore calls, as I expected. A better 
path here could be to repurpose DbTxnManager::SynchronizedMetaStoreClient() as 
a generic synchronized client. 
My concern is that this will hamper performance. It would be good to measure 
that, since if the gains are small after this change we may need to take a 
different approach.
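The "generic synchronized client" idea can be sketched with a dynamic proxy that serializes every call on one lock (a hypothetical illustration of the approach, not Hive's actual SynchronizedMetaStoreClient code):

```java
import java.lang.reflect.Proxy;

// Sketch: wrap any interface so all of its method calls serialize on a single
// per-proxy lock, generalizing DbTxnManager$SynchronizedMetaStoreClient.
public final class SynchronizedProxy {
    @SuppressWarnings("unchecked")
    public static <T> T wrap(Class<T> iface, T delegate) {
        final Object lock = new Object();
        return (T) Proxy.newProxyInstance(
            iface.getClassLoader(),
            new Class<?>[] {iface},
            (proxy, method, args) -> {
                synchronized (lock) {   // one call at a time through this client
                    return method.invoke(delegate, args);
                }
            });
    }
}
```

This keeps the synchronization in one place instead of sprinkling `synchronized` over each call site, at the cost of reflective dispatch overhead.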

> Optimize loading dynamic partitions 
> 
>
> Key: HIVE-14204
> URL: https://issues.apache.org/jira/browse/HIVE-14204
> Project: Hive
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-14204.1.patch, HIVE-14204.3.patch
>
>
> Lots of time is spent in sequential fashion to load dynamic partitioned 
> dataset in driver side. E.g simple dynamic partitioned load as follows takes 
> 300+ seconds
> {noformat}
> INSERT INTO web_sales_test partition(ws_sold_date_sk) select * from 
> tpcds_bin_partitioned_orc_200.web_sales;
> Time taken to load dynamic partitions: 309.22 seconds
> {noformat}





[jira] [Commented] (HIVE-14251) Union All of different types resolves to incorrect data

2016-07-20 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386062#comment-15386062
 ] 

Ashutosh Chauhan commented on HIVE-14251:
-

When we make semantic changes like this, we should move closer to the standard, 
so it would help to read what the standard has to say here. 
I took this query and ran it against a few databases:
* MySQL: same result as you are trying to achieve
* Postgres: exception: ERROR: UNION types date and integer cannot be matched 
Position: 53
* SQL Server: different result set; it picked date as the common type: 
2016-01-01 00:00:00.000, 1900-01-06 00:00:00.000, 1900-01-02 06:00:00.000

I couldn't try Oracle since I didn't have it handy; that would be a good 
experiment too.
Clearly the behavior is not consistent across databases. My suggestion is to 
read the standard and emulate it as much as possible.

> Union All of different types resolves to incorrect data
> ---
>
> Key: HIVE-14251
> URL: https://issues.apache.org/jira/browse/HIVE-14251
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 2.0.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-14251.1.patch
>
>
> create table src(c1 date, c2 int, c3 double);
> insert into src values ('2016-01-01',5,1.25);
> select * from 
> (select c1 from src union all
> select c2 from src union all
> select c3 from src) t;
> It will return NULL for the c1 values. It seems the common data type resolves 
> to that of the last column, c3, which is double.





[jira] [Commented] (HIVE-14295) Some metastore event listeners always initialize deleteData as false

2016-07-20 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386038#comment-15386038
 ] 

Sergio Peña commented on HIVE-14295:


The patch looks good.
+1

Is this patch meant for 1.3 or 2.2? I see fix version 1.3.

> Some metastore event listeners always initialize deleteData as false
> 
>
> Key: HIVE-14295
> URL: https://issues.apache.org/jira/browse/HIVE-14295
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 1.3.0, 2.1.0
>Reporter: niklaus xiao
>Assignee: niklaus xiao
>Priority: Minor
> Fix For: 1.3.0
>
> Attachments: HIVE-14295.1.patch
>
>
> DropTableEvent:
> {code}
>   public DropTableEvent(Table table, boolean status, boolean deleteData, 
> HMSHandler handler) {
> super(status, handler);
> this.table = table;
> // In HiveMetaStore, the deleteData flag indicates whether DFS data 
> should be
> // removed on a drop.
> this.deleteData = false;
>   }
> {code}
> Same as PreDropPartitionEvent and PreDropTableEvent
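The presumable fix, sketched below on a simplified stand-in class rather than the real event classes, is simply to assign the constructor parameter instead of hard-coding `false`:

```java
// Simplified sketch of the fix: keep the caller's deleteData flag so listeners
// can tell whether DFS data will actually be removed on a drop.
public class DropTableEventSketch {
    public final boolean status;
    public final boolean deleteData;

    public DropTableEventSketch(boolean status, boolean deleteData) {
        this.status = status;
        // assign the parameter instead of hard-coding false
        this.deleteData = deleteData;
    }
}
```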





[jira] [Commented] (HIVE-14251) Union All of different types resolves to incorrect data

2016-07-20 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386026#comment-15386026
 ] 

Aihua Xu commented on HIVE-14251:
-

Yeah. Actually I feel implicitConvertible() is not completely accurate. I added 
the comment to explain why I try to avoid reusing the same function. 

For a comparison of string and double, we should compare them as double. That 
is what implicitConvertible() is trying to do by returning true for the string 
=> double conversion. 

I feel comparison and union do need separate functions, though.






[jira] [Commented] (HIVE-14251) Union All of different types resolves to incorrect data

2016-07-20 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386015#comment-15386015
 ] 

Aihua Xu commented on HIVE-14251:
-

Thanks for reviewing. 

Originally I thought of reusing it, but later I decided to give the new 
function a different name to completely differentiate it from 
implicitConvertible().

implicitConvertible() is used in data comparison and isCommonTypeOf() is used 
in the union all operator. In fact, they could have different behaviors, not 
only for string and double but for any types from different groups; they may or 
may not give the same result. For example, for void and string I'm not sure 
what comparison should return, but for union it seems reasonable to return 
string. For numeric types like int and double, both should return double. 

Right now I have only changed the union behavior for string and double and 
haven't touched the others. I feel we need to evaluate them as well, but I will 
defer that until we get complaints.
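The distinction can be sketched with two separate resolution functions over an illustrative type lattice (names and rules here are hypothetical, not Hive's actual FunctionRegistry code): comparison coerces string vs double to double, while union keeps string so no branch's values silently become NULL.

```java
// Hypothetical sketch: comparison and UNION ALL resolve common types differently.
public class CommonTypeSketch {
    // For comparison, string vs a numeric type compares numerically -> double.
    public static String commonTypeForCompare(String a, String b) {
        if (a.equals(b)) return a;
        if (isNumeric(a) && isNumeric(b)) return "double";
        if ((a.equals("string") && isNumeric(b))
                || (b.equals("string") && isNumeric(a))) {
            return "double";   // implicit string -> double conversion
        }
        return "string";
    }

    // For UNION ALL, mixing string with a numeric group keeps string, so the
    // string branch's values are not turned into NULL doubles.
    public static String commonTypeForUnion(String a, String b) {
        if (a.equals(b)) return a;
        if (isNumeric(a) && isNumeric(b)) return "double";
        return "string";
    }

    private static boolean isNumeric(String t) {
        return t.equals("int") || t.equals("double");
    }
}
```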






[jira] [Commented] (HIVE-14251) Union All of different types resolves to incorrect data

2016-07-20 Thread Chaoyu Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385959#comment-15385959
 ] 

Chaoyu Tang commented on HIVE-14251:


[~aihuaxu] Do you know why, and under what situations, a string is considered 
implicitly convertible to double? I believe that is what breaks your case: the 
date string could be converted to double. If we add a flag to disable this 
implicit conversion generally in getCommonClassForUnionAll, will it introduce 
backward type incompatibility in some queries that union all string and double?






[jira] [Updated] (HIVE-14267) HS2 open_operations metrics not decremented when an operation gets timed out

2016-07-20 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-14267:
-
Attachment: HIVE-14267.2.patch

Re-attaching the same patch, as it's not being picked up by the pre-commit 
builds for some reason.

> HS2 open_operations metrics not decremented when an operation gets timed out
> 
>
> Key: HIVE-14267
> URL: https://issues.apache.org/jira/browse/HIVE-14267
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: David Karoly
>Assignee: Naveen Gangam
>Priority: Minor
> Attachments: HIVE-14267.2.patch, HIVE-14267.patch
>
>
> When an operation gets timed out, it is removed from handleToOperation hash 
> map in OperationManager.removeTimedOutOperation(). However OPEN_OPERATIONS 
> counter is not decremented. 
> This can result in an inaccurate open operations metrics value being 
> reported. Especially when submitting queries to Hive from Hue with 
> close_queries=false option, this results in misleading HS2 metrics charts.
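A toy model of the metrics drift (a simplified stand-in, not the real OperationManager/metrics API): if removal on timeout skips the decrement, the open-operations gauge only ever goes up.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: pair every successful removal from handleToOperation with a
// decrement of the open-operations counter, the step missing in the report.
public class OpenOpsSketch {
    private final Map<String, Object> handleToOperation = new HashMap<>();
    private final AtomicInteger openOperations = new AtomicInteger();

    public void addOperation(String handle) {
        handleToOperation.put(handle, new Object());
        openOperations.incrementAndGet();
    }

    public void removeTimedOutOperation(String handle) {
        if (handleToOperation.remove(handle) != null) {
            openOperations.decrementAndGet();  // keep the gauge accurate
        }
    }

    public int openOperations() {
        return openOperations.get();
    }
}
```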





[jira] [Updated] (HIVE-14267) HS2 open_operations metrics not decremented when an operation gets timed out

2016-07-20 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-14267:
-
Status: Patch Available  (was: Open)






[jira] [Updated] (HIVE-14267) HS2 open_operations metrics not decremented when an operation gets timed out

2016-07-20 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-14267:
---------------------------------
Status: Open  (was: Patch Available)

> HS2 open_operations metrics not decremented when an operation gets timed out
> ----------------------------------------------------------------------------
>
> Key: HIVE-14267
> URL: https://issues.apache.org/jira/browse/HIVE-14267
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: David Karoly
>Assignee: Naveen Gangam
>Priority: Minor
> Attachments: HIVE-14267.patch
>
>
> When an operation times out, it is removed from the handleToOperation hash
> map in OperationManager.removeTimedOutOperation(); however, the
> OPEN_OPERATIONS counter is not decremented.
> This can result in an inaccurate open-operations metric being reported. In
> particular, when submitting queries to Hive from Hue with the
> close_queries=false option, this results in misleading HS2 metrics charts.





[jira] [Updated] (HIVE-14267) HS2 open_operations metrics not decremented when an operation gets timed out

2016-07-20 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-14267:
---------------------------------
Attachment: (was: HIVE-14267.2.patch)

> HS2 open_operations metrics not decremented when an operation gets timed out
> ----------------------------------------------------------------------------
>
> Key: HIVE-14267
> URL: https://issues.apache.org/jira/browse/HIVE-14267
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: David Karoly
>Assignee: Naveen Gangam
>Priority: Minor
> Attachments: HIVE-14267.patch
>
>
> When an operation times out, it is removed from the handleToOperation hash
> map in OperationManager.removeTimedOutOperation(); however, the
> OPEN_OPERATIONS counter is not decremented.
> This can result in an inaccurate open-operations metric being reported. In
> particular, when submitting queries to Hive from Hue with the
> close_queries=false option, this results in misleading HS2 metrics charts.





[jira] [Updated] (HIVE-14249) Add simple materialized views with manual rebuilds

2016-07-20 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-14249:
-------------------------------------------
Attachment: HIVE-14249.03.patch

Triggering QA with the full patch.

> Add simple materialized views with manual rebuilds
> --------------------------------------------------
>
> Key: HIVE-14249
> URL: https://issues.apache.org/jira/browse/HIVE-14249
> Project: Hive
>  Issue Type: Sub-task
>  Components: Parser, Views
>Reporter: Alan Gates
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-10459.2.patch, HIVE-14249.03.patch
>
>
> This patch is a start at implementing simple views. It doesn't have enough 
> testing yet (e.g. there's no negative testing). And I know it fails in the 
> partitioned case. I suspect things like security and locking don't work 
> properly yet either. But I'm posting it as a starting point.
> In this initial patch I'm just handling simple materialized views with manual 
> rebuilds. In later JIRAs we can add features such as allowing the optimizer 
> to rewrite queries to use materialized views rather than tables named in the 
> queries, giving the optimizer the ability to determine when a materialized 
> view is stale, etc.
> Also, I didn't rebase this patch against trunk after the migration from 
> svn->git so it may not apply cleanly.





[jira] [Work started] (HIVE-14249) Add simple materialized views with manual rebuilds

2016-07-20 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-14249 started by Jesus Camacho Rodriguez.
------------------------------------------------------
> Add simple materialized views with manual rebuilds
> --------------------------------------------------
>
> Key: HIVE-14249
> URL: https://issues.apache.org/jira/browse/HIVE-14249
> Project: Hive
>  Issue Type: Sub-task
>  Components: Parser, Views
>Reporter: Alan Gates
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-10459.2.patch
>
>
> This patch is a start at implementing simple views. It doesn't have enough 
> testing yet (e.g. there's no negative testing). And I know it fails in the 
> partitioned case. I suspect things like security and locking don't work 
> properly yet either. But I'm posting it as a starting point.
> In this initial patch I'm just handling simple materialized views with manual 
> rebuilds. In later JIRAs we can add features such as allowing the optimizer 
> to rewrite queries to use materialized views rather than tables named in the 
> queries, giving the optimizer the ability to determine when a materialized 
> view is stale, etc.
> Also, I didn't rebase this patch against trunk after the migration from 
> svn->git so it may not apply cleanly.





[jira] [Updated] (HIVE-14249) Add simple materialized views with manual rebuilds

2016-07-20 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-14249:
-------------------------------------------
Status: Patch Available  (was: In Progress)

> Add simple materialized views with manual rebuilds
> --------------------------------------------------
>
> Key: HIVE-14249
> URL: https://issues.apache.org/jira/browse/HIVE-14249
> Project: Hive
>  Issue Type: Sub-task
>  Components: Parser, Views
>Reporter: Alan Gates
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-10459.2.patch
>
>
> This patch is a start at implementing simple views. It doesn't have enough 
> testing yet (e.g. there's no negative testing). And I know it fails in the 
> partitioned case. I suspect things like security and locking don't work 
> properly yet either. But I'm posting it as a starting point.
> In this initial patch I'm just handling simple materialized views with manual 
> rebuilds. In later JIRAs we can add features such as allowing the optimizer 
> to rewrite queries to use materialized views rather than tables named in the 
> queries, giving the optimizer the ability to determine when a materialized 
> view is stale, etc.
> Also, I didn't rebase this patch against trunk after the migration from 
> svn->git so it may not apply cleanly.





[jira] [Assigned] (HIVE-14249) Add simple materialized views with manual rebuilds

2016-07-20 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez reassigned HIVE-14249:
----------------------------------------------

Assignee: Jesus Camacho Rodriguez  (was: Alan Gates)

> Add simple materialized views with manual rebuilds
> --------------------------------------------------
>
> Key: HIVE-14249
> URL: https://issues.apache.org/jira/browse/HIVE-14249
> Project: Hive
>  Issue Type: Sub-task
>  Components: Parser, Views
>Reporter: Alan Gates
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-10459.2.patch
>
>
> This patch is a start at implementing simple views. It doesn't have enough 
> testing yet (e.g. there's no negative testing). And I know it fails in the 
> partitioned case. I suspect things like security and locking don't work 
> properly yet either. But I'm posting it as a starting point.
> In this initial patch I'm just handling simple materialized views with manual 
> rebuilds. In later JIRAs we can add features such as allowing the optimizer 
> to rewrite queries to use materialized views rather than tables named in the 
> queries, giving the optimizer the ability to determine when a materialized 
> view is stale, etc.
> Also, I didn't rebase this patch against trunk after the migration from 
> svn->git so it may not apply cleanly.





[jira] [Commented] (HIVE-14249) Add simple materialized views with manual rebuilds

2016-07-20 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385776#comment-15385776
 ] 

Jesus Camacho Rodriguez commented on HIVE-14249:


[~alangates], I have created a pull request at
https://github.com/apache/hive/pull/91.

I have rebased your initial patch so it applies cleanly to master. The initial
patch already contained a lot of code, in particular support for CREATE
MATERIALIZED VIEW mv ..., ALTER MATERIALIZED VIEW mv REBUILD, and DROP
MATERIALIZED VIEW mv. It also contained authorization bits for
creating/accessing the materialized views, in addition to positive/negative
tests for these cases.

I created a second commit that extends the original patch with some useful
features: in particular, being able to add properties to the MV, use a custom
StorageHandler, and specify a custom location to store the data. All these
features will be useful if we want to integrate MVs with other external
systems, e.g. Druid. In addition, I enabled Calcite optimization of the MV
query, as before we were bypassing the optimizer. Finally, I extended existing
tests and added new tests. Could you review this second commit? Thanks.

I think those two commits put the initial building blocks for MVs in place.
One of the remaining features I wanted to add was support for partitioning of
MVs, as I think it would be quite useful for performance and for the follow-up
maintenance implementation; however, I checked the code in SemanticAnalyzer,
etc. a bit and I think this is not straightforward. If you have a clear idea in
mind of the bits we need to implement to support partitioning in MVs, please
let me know.

Once the patch goes in (support for partitioning is not needed first), I can
create a follow-up issue to start the integration with Calcite and its views
service, and hence start experimenting with its query-rewriting capabilities
using materialized views.

> Add simple materialized views with manual rebuilds
> --------------------------------------------------
>
> Key: HIVE-14249
> URL: https://issues.apache.org/jira/browse/HIVE-14249
> Project: Hive
>  Issue Type: Sub-task
>  Components: Parser, Views
>Reporter: Alan Gates
>Assignee: Alan Gates
> Attachments: HIVE-10459.2.patch
>
>
> This patch is a start at implementing simple views. It doesn't have enough 
> testing yet (e.g. there's no negative testing). And I know it fails in the 
> partitioned case. I suspect things like security and locking don't work 
> properly yet either. But I'm posting it as a starting point.
> In this initial patch I'm just handling simple materialized views with manual 
> rebuilds. In later JIRAs we can add features such as allowing the optimizer 
> to rewrite queries to use materialized views rather than tables named in the 
> queries, giving the optimizer the ability to determine when a materialized 
> view is stale, etc.
> Also, I didn't rebase this patch against trunk after the migration from 
> svn->git so it may not apply cleanly.





[jira] [Commented] (HIVE-14249) Add simple materialized views with manual rebuilds

2016-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385774#comment-15385774
 ] 

ASF GitHub Bot commented on HIVE-14249:
---------------------------------------

GitHub user jcamachor opened a pull request:

https://github.com/apache/hive/pull/91

HIVE-14249: Add simple materialized views with manual rebuilds



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jcamachor/hive HIVE-MVs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hive/pull/91.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #91


commit fc5e6e3b0e826ff9a0b3437ae8e05eb9484a3856
Author: Alan Gates 
Date:   2016-07-20T11:37:31Z

HIVE-14249: Add simple materialized views with manual rebuilds (Alan Gates, 
reviewed by Jesus Camacho Rodriguez)

commit 86648e2f3440f7f01c18ff4819a07c7b02050f08
Author: Jesus Camacho Rodriguez 
Date:   2016-07-20T11:38:09Z

HIVE-14249: Add simple materialized views with manual rebuilds




> Add simple materialized views with manual rebuilds
> --------------------------------------------------
>
> Key: HIVE-14249
> URL: https://issues.apache.org/jira/browse/HIVE-14249
> Project: Hive
>  Issue Type: Sub-task
>  Components: Parser, Views
>Reporter: Alan Gates
>Assignee: Alan Gates
> Attachments: HIVE-10459.2.patch
>
>
> This patch is a start at implementing simple views. It doesn't have enough 
> testing yet (e.g. there's no negative testing). And I know it fails in the 
> partitioned case. I suspect things like security and locking don't work 
> properly yet either. But I'm posting it as a starting point.
> In this initial patch I'm just handling simple materialized views with manual 
> rebuilds. In later JIRAs we can add features such as allowing the optimizer 
> to rewrite queries to use materialized views rather than tables named in the 
> queries, giving the optimizer the ability to determine when a materialized 
> view is stale, etc.
> Also, I didn't rebase this patch against trunk after the migration from 
> svn->git so it may not apply cleanly.





[jira] [Commented] (HIVE-14295) Some metastore event listeners always initialize deleteData as false

2016-07-20 Thread niklaus xiao (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385717#comment-15385717
 ] 

niklaus xiao commented on HIVE-14295:
-------------------------------------

Small patch. Could you take a look, [~ashutoshc]? Thank you.

> Some metastore event listeners always initialize deleteData as false
> --------------------------------------------------------------------
>
> Key: HIVE-14295
> URL: https://issues.apache.org/jira/browse/HIVE-14295
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 1.3.0, 2.1.0
>Reporter: niklaus xiao
>Assignee: niklaus xiao
>Priority: Minor
> Fix For: 1.3.0
>
> Attachments: HIVE-14295.1.patch
>
>
> DropTableEvent:
> {code}
>   public DropTableEvent(Table table, boolean status, boolean deleteData, 
> HMSHandler handler) {
> super(status, handler);
> this.table = table;
> // In HiveMetaStore, the deleteData flag indicates whether DFS data
> // should be removed on a drop.
> this.deleteData = false;
>   }
> {code}
> Same as PreDropPartitionEvent and PreDropTableEvent
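A minimal sketch of the likely fix (simplified stand-in classes, not the real HiveMetaStore event types): the constructor should propagate the deleteData argument instead of hard-coding it to false, so listeners can see whether DFS data will actually be removed.

```java
// Hedged sketch only: DropTableEventSketch stands in for DropTableEvent
// (and similarly PreDropPartitionEvent/PreDropTableEvent). The one-line fix
// is assigning the constructor parameter rather than a literal false.
class DropTableEventSketch {
  private final boolean status;
  private final boolean deleteData;

  DropTableEventSketch(boolean status, boolean deleteData) {
    this.status = status;
    // Bug in the original: `this.deleteData = false;` discarded the
    // caller's flag. The fix keeps the value the caller passed in.
    this.deleteData = deleteData;
  }

  boolean getDeleteData() { return deleteData; }

  boolean getStatus() { return status; }
}
```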





[jira] [Updated] (HIVE-14295) Some metastore event listeners always initialize deleteData as false

2016-07-20 Thread niklaus xiao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

niklaus xiao updated HIVE-14295:

Attachment: HIVE-14295.1.patch

> Some metastore event listeners always initialize deleteData as false
> --------------------------------------------------------------------
>
> Key: HIVE-14295
> URL: https://issues.apache.org/jira/browse/HIVE-14295
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 1.3.0, 2.1.0
>Reporter: niklaus xiao
>Assignee: niklaus xiao
>Priority: Minor
> Fix For: 1.3.0
>
> Attachments: HIVE-14295.1.patch
>
>
> DropTableEvent:
> {code}
>   public DropTableEvent(Table table, boolean status, boolean deleteData, 
> HMSHandler handler) {
> super(status, handler);
> this.table = table;
> // In HiveMetaStore, the deleteData flag indicates whether DFS data
> // should be removed on a drop.
> this.deleteData = false;
>   }
> {code}
> Same as PreDropPartitionEvent and PreDropTableEvent





[jira] [Updated] (HIVE-14295) Some metastore event listeners always initialize deleteData as false

2016-07-20 Thread niklaus xiao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

niklaus xiao updated HIVE-14295:

Fix Version/s: 1.3.0
       Status: Patch Available  (was: Open)

> Some metastore event listeners always initialize deleteData as false
> --------------------------------------------------------------------
>
> Key: HIVE-14295
> URL: https://issues.apache.org/jira/browse/HIVE-14295
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 2.1.0, 1.3.0
>Reporter: niklaus xiao
>Assignee: niklaus xiao
>Priority: Minor
> Fix For: 1.3.0
>
> Attachments: HIVE-14295.1.patch
>
>
> DropTableEvent:
> {code}
>   public DropTableEvent(Table table, boolean status, boolean deleteData, 
> HMSHandler handler) {
> super(status, handler);
> this.table = table;
> // In HiveMetaStore, the deleteData flag indicates whether DFS data
> // should be removed on a drop.
> this.deleteData = false;
>   }
> {code}
> Same as PreDropPartitionEvent and PreDropTableEvent





[jira] [Updated] (HIVE-14204) Optimize loading dynamic partitions

2016-07-20 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HIVE-14204:

Attachment: HIVE-14204.3.patch

> Optimize loading dynamic partitions 
> ------------------------------------
>
> Key: HIVE-14204
> URL: https://issues.apache.org/jira/browse/HIVE-14204
> Project: Hive
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-14204.1.patch, HIVE-14204.3.patch
>
>
> Lots of time is spent loading dynamically partitioned datasets sequentially
> on the driver side. E.g., a simple dynamic-partitioned load such as the
> following takes 300+ seconds:
> {noformat}
> INSERT INTO web_sales_test partition(ws_sold_date_sk) select * from 
> tpcds_bin_partitioned_orc_200.web_sales;
> Time taken to load dynamic partitions: 309.22 seconds
> {noformat}
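The optimization direction can be sketched as follows. This is an illustrative model only (the class and method names are hypothetical, and the per-partition task is a stub): the idea is to hand the per-partition work to a thread pool instead of performing it sequentially on the driver, then wait for all of it so failures still surface.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hedged sketch, not the actual Hive loadDynamicPartitions code.
class DynamicPartitionLoader {
  static int loadAll(List<String> partitionSpecs, int threads) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    try {
      List<Future<?>> pending = new ArrayList<>();
      for (String spec : partitionSpecs) {
        // A real task would move files into the partition directory and
        // register the partition with the metastore; here it is a stub.
        pending.add(pool.submit(() -> spec.length()));
      }
      int loaded = 0;
      for (Future<?> f : pending) {
        f.get(); // blocks; rethrows any per-partition failure
        loaded++;
      }
      return loaded;
    } finally {
      pool.shutdownNow();
    }
  }
}
```

With hundreds of partitions (e.g. the 300+ second load above), the wall-clock time becomes roughly the longest single partition load rather than the sum of all of them.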





[jira] [Updated] (HIVE-14214) ORC Schema Evolution and Predicate Push Down do not work together (no rows returned)

2016-07-20 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-14214:

Status: In Progress  (was: Patch Available)

> ORC Schema Evolution and Predicate Push Down do not work together (no rows 
> returned)
> ----------------------------------------------------------------------------
>
> Key: HIVE-14214
> URL: https://issues.apache.org/jira/browse/HIVE-14214
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-14214.01.patch, HIVE-14214.02.patch, 
> HIVE-14214.03.patch, HIVE-14214.04.patch, HIVE-14214.05.patch, 
> HIVE-14214.WIP.patch
>
>
> In Schema Evolution, the reader schema is different from the file schema,
> which is the one used to evaluate predicate push-down.





[jira] [Commented] (HIVE-14294) HiveSchemaConverter for Parquet doesn't translate TINYINT and SMALLINT into proper Parquet types

2016-07-20 Thread Cheng Lian (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385452#comment-15385452
 ] 

Cheng Lian commented on HIVE-14294:
-----------------------------------

Hit this issue while investigating SPARK-16632.

> HiveSchemaConverter for Parquet doesn't translate TINYINT and SMALLINT into 
> proper Parquet types
> ----------------------------------------------------------------------------
>
> Key: HIVE-14294
> URL: https://issues.apache.org/jira/browse/HIVE-14294
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1, 2.1.0
>Reporter: Cheng Lian
>
> To reproduce this issue, run the following DDL:
> {code:sql}
> CREATE TABLE foo STORED AS PARQUET AS SELECT CAST(1 AS TINYINT);
> {code}
> And then check the schema of the written Parquet file:
> {noformat}
> $ parquet-schema $WAREHOUSE_PATH/foo/00_0
> message hive_schema {
>   optional int32 _c0;
> }
> {noformat}
> When translating Hive types into Parquet types, {{TINYINT}} and {{SMALLINT}}
> should be translated into {{int32 (INT_8)}} and {{int32 (INT_16)}}
> respectively. However, {{HiveSchemaConverter}} converts all of {{TINYINT}},
> {{SMALLINT}}, and {{INT}} into Parquet {{int32}}. This causes problems when
> accessing Parquet files generated by Hive from other systems, since the type
> information is wrong.
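The expected translation can be sketched as a plain mapping. This is illustrative only (it is not the HiveSchemaConverter API; it renders the Parquet types as strings the way parquet-schema prints them): the key point is that TINYINT and SMALLINT keep int32 storage but gain the INT_8/INT_16 annotations so readers can recover the original width.

```java
// Hedged sketch: maps Hive primitive type names to the Parquet type
// (with logical annotation) they should be written as.
class HiveToParquetSketch {
  static String toParquet(String hiveType) {
    switch (hiveType.toLowerCase()) {
      case "tinyint":
        return "int32 (INT_8)";   // 32-bit storage, annotated as 8-bit
      case "smallint":
        return "int32 (INT_16)";  // 32-bit storage, annotated as 16-bit
      case "int":
        return "int32";           // plain int32 is already correct here
      default:
        throw new IllegalArgumentException("unhandled type: " + hiveType);
    }
  }
}
```

Under this mapping, the `foo` table above would show `optional int32 _c0 (INT_8)` instead of a bare `int32`, letting other readers distinguish a TINYINT column from a plain INT.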





[jira] [Commented] (HIVE-14284) HiveAuthorizer: Pass HiveAuthzContext to grant/revoke/role apis as well

2016-07-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385445#comment-15385445
 ] 

ASF GitHub Bot commented on HIVE-14284:
---------------------------------------

Github user thejasmn closed the pull request at:

https://github.com/apache/hive/pull/87


> HiveAuthorizer: Pass HiveAuthzContext to grant/revoke/role apis as well
> ---------------------------------------------------------------------
>
> Key: HIVE-14284
> URL: https://issues.apache.org/jira/browse/HIVE-14284
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization, Security
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Attachments: HIVE-14284.1.patch
>
>
> HiveAuthzContext provides useful information about the context of the
> commands, such as the command string and the client IP address. However, this
> is currently available only to the checkPrivileges and filterListCmdObjects
> API calls.
> It should be made available to other API calls as well, such as the
> grant/revoke methods and the role-management methods.
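The API direction can be sketched roughly as follows. All names here are hypothetical stand-ins (not the real HiveAuthorizer or HiveAuthzContext signatures): the change is simply to thread a context object carrying the command string and client IP through grant/revoke and role calls, the same way the privilege-check path already receives it.

```java
import java.util.List;

// Hedged sketch of a context object like the one described in the issue.
class AuthzContextSketch {
  private final String commandString;
  private final String ipAddress;

  AuthzContextSketch(String commandString, String ipAddress) {
    this.commandString = commandString;
    this.ipAddress = ipAddress;
  }

  String getCommandString() { return commandString; }

  String getIpAddress() { return ipAddress; }
}

// Illustrative authorizer surface: the grant call now also receives the
// context, so implementations can audit who ran what from where.
interface AuthorizerSketch {
  void grantPrivileges(List<String> principals, List<String> privileges,
      String object, AuthzContextSketch context);
}
```

An audit-logging authorizer, for example, could then record the originating command string and IP for every grant, not just for privilege checks.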




