[jira] [Commented] (HIVE-9907) insert into table values() when UTF-8 character is not correct

2015-03-10 Thread lfh (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14354967#comment-14354967
 ] 

lfh commented on HIVE-9907:
---

insert into table test_acid
select 1, '中文1', '中文2'
from dual;

That works fine.

But update ... values ('中文') does not work.

This problem is driving me crazy.

 insert into table values()   when UTF-8 character is not correct
 

 Key: HIVE-9907
 URL: https://issues.apache.org/jira/browse/HIVE-9907
 Project: Hive
  Issue Type: Bug
  Components: CLI, Clients, JDBC
Affects Versions: 0.14.0, 0.13.1, 1.0.0
 Environment: centos 6   LANG=zh_CN.UTF-8
 hadoop 2.6
 hive 1.1.0
Reporter: lfh
Priority: Critical

 insert into table test_acid partition(pt='pt_2')
 values( 2, '中文_2' , 'city_2' )
 ;
 hive> select *
     > from test_acid
     > ;
 OK
 2    -�_2    city_2    pt_2
 Time taken: 0.237 seconds, Fetched: 1 row(s)
 hive>
 CREATE TABLE test_acid(id INT, 
 name STRING, 
 city STRING) 
 PARTITIONED BY (pt STRING)
 clustered by (id) into 1 buckets
 stored as ORCFILE
 TBLPROPERTIES('transactional'='true')
 ;
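The garbled second column above is the classic UTF-8 / single-byte charset mismatch. A minimal standalone Java sketch (illustrative only, not Hive code) of how UTF-8 bytes decoded with a Latin-1 or non-UTF-8 default charset fail to round-trip:

```java
import java.nio.charset.StandardCharsets;

public class MojibakeDemo {
    // Decode a string's UTF-8 bytes with the wrong single-byte charset,
    // as a component with a non-UTF-8 default encoding might.
    static String misdecode(String s) {
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
        return new String(utf8, StandardCharsets.ISO_8859_1);
    }

    public static void main(String[] args) {
        String original = "中文_2";
        String garbled = misdecode(original);
        // The round trip is broken: garbled no longer equals the original.
        System.out.println(original.equals(garbled));
    }
}
```

(The real fix presumably lies in how the VALUES clause's temporary data is written and read, not in client-side decoding; this only illustrates the symptom.)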



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9919) upgrade scripts don't work on some auto-created DBs due to absence of tables

2015-03-10 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-9919:
---
Attachment: HIVE-9919.patch

 upgrade scripts don't work on some auto-created DBs due to absence of tables
 

 Key: HIVE-9919
 URL: https://issues.apache.org/jira/browse/HIVE-9919
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-9919.patch


 DataNucleus in its infinite wisdom doesn't create all tables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9659) 'Error while trying to create table container' occurs during hive query case execution when hive.optimize.skewjoin set to 'true' [Spark Branch]

2015-03-10 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14356155#comment-14356155
 ] 

Xuefu Zhang commented on HIVE-9659:
---

HIVE-9918 is resolved. [~lirui], could you reattach the patch to have another 
test run? Thanks.

 'Error while trying to create table container' occurs during hive query case 
 execution when hive.optimize.skewjoin set to 'true' [Spark Branch]
 ---

 Key: HIVE-9659
 URL: https://issues.apache.org/jira/browse/HIVE-9659
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xin Hao
Assignee: Rui Li
 Attachments: HIVE-9659.1-spark.patch, HIVE-9659.2-spark.patch, 
 HIVE-9659.3-spark.patch, HIVE-9659.4-spark.patch


 We found that 'Error while trying to create table container' occurs during 
 Big-Bench Q12 case execution when hive.optimize.skewjoin is set to 'true'.
 If hive.optimize.skewjoin is set to 'false', the case passes.
 How to reproduce:
 1. set hive.optimize.skewjoin=true;
 2. Run BigBench case Q12 and it will fail. 
 Check the executor log (e.g. /usr/lib/spark/work/app-/2/stderr) and you 
 will find the error 'Error while trying to create table container' in the log 
 and also a NullPointerException near the end of the log.
 (a) Detailed error message for 'Error while trying to create table container':
 {noformat}
 15/02/12 01:29:49 ERROR SparkMapRecordHandler: Error processing row: 
 org.apache.hadoop.hive.ql.metadata.HiveException: 
 org.apache.hadoop.hive.ql.metadata.HiveException: Error while trying to 
 create table container
 org.apache.hadoop.hive.ql.metadata.HiveException: 
 org.apache.hadoop.hive.ql.metadata.HiveException: Error while trying to 
 create table container
   at 
 org.apache.hadoop.hive.ql.exec.spark.HashTableLoader.load(HashTableLoader.java:118)
   at 
 org.apache.hadoop.hive.ql.exec.MapJoinOperator.loadHashTable(MapJoinOperator.java:193)
   at 
 org.apache.hadoop.hive.ql.exec.MapJoinOperator.cleanUpInputFileChangedOp(MapJoinOperator.java:219)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1051)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
   at 
 org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:486)
   at 
 org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:141)
   at 
 org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:47)
   at 
 org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27)
   at 
 org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList$ResultIterator.hasNext(HiveBaseFunctionResultList.java:98)
   at 
 scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
   at 
 org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:217)
   at 
 org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:65)
   at 
 org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
   at 
 org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
   at org.apache.spark.scheduler.Task.run(Task.scala:56)
   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error while 
 trying to create table container
   at 
 org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe.load(MapJoinTableContainerSerDe.java:158)
   at 
 org.apache.hadoop.hive.ql.exec.spark.HashTableLoader.load(HashTableLoader.java:115)
   ... 21 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error, not a 
 directory: 
 hdfs://bhx1:8020/tmp/hive/root/d22ef465-bff5-4edb-a822-0a9f1c25b66c/hive_2015-02-12_01-28-10_008_6897031694580088767-1/-mr-10009/HashTable-Stage-6/MapJoin-mapfile01--.hashtable
   at 
 org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe.load(MapJoinTableContainerSerDe.java:106)
   ... 22 more
 15/02/12 01:29:49 INFO SparkRecordHandler: maximum memory = 40939028480
 15/02/12 01:29:49 INFO PerfLogger: PERFLOG 

[jira] [Updated] (HIVE-9857) Create Factorial UDF

2015-03-10 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9857:
--
Attachment: HIVE-9857.2.patch

patch #2 - fix typo

 Create Factorial UDF
 

 Key: HIVE-9857
 URL: https://issues.apache.org/jira/browse/HIVE-9857
 Project: Hive
  Issue Type: Improvement
  Components: UDF
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
 Attachments: HIVE-9857.1.patch, HIVE-9857.2.patch


 Function signature: factorial(int a): bigint
 For example, 5! = 5*4*3*2*1 = 120
 {code}
 select factorial(5);
 OK
 120
 {code}
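As a sketch of the described signature (int argument, bigint result), the core computation might look like the following plain-Java method. Note that a signed 64-bit bigint holds exact factorials only up to 20!, a boundary any implementation has to handle; this is an illustration, not the patch's actual code:

```java
public class FactorialDemo {
    // factorial(int a): bigint -- a long holds exact values only up to 20!.
    static long factorial(int n) {
        if (n < 0 || n > 20) {
            throw new IllegalArgumentException(
                "factorial is undefined or overflows long for n=" + n);
        }
        long result = 1L;
        for (int i = 2; i <= n; i++) {
            result *= i;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // 120, matching the example above
    }
}
```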



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9659) 'Error while trying to create table container' occurs during hive query case execution when hive.optimize.skewjoin set to 'true' [Spark Branch]

2015-03-10 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-9659:
-
Attachment: HIVE-9659.4-spark.patch

Add golden file for MR

 'Error while trying to create table container' occurs during hive query case 
 execution when hive.optimize.skewjoin set to 'true' [Spark Branch]
 ---

 Key: HIVE-9659
 URL: https://issues.apache.org/jira/browse/HIVE-9659
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xin Hao
Assignee: Rui Li
 Attachments: HIVE-9659.1-spark.patch, HIVE-9659.2-spark.patch, 
 HIVE-9659.3-spark.patch, HIVE-9659.4-spark.patch


 We found that 'Error while trying to create table container' occurs during 
 Big-Bench Q12 case execution when hive.optimize.skewjoin is set to 'true'.
 If hive.optimize.skewjoin is set to 'false', the case passes.
 How to reproduce:
 1. set hive.optimize.skewjoin=true;
 2. Run BigBench case Q12 and it will fail. 
 Check the executor log (e.g. /usr/lib/spark/work/app-/2/stderr) and you 
 will find the error 'Error while trying to create table container' in the log 
 and also a NullPointerException near the end of the log.
 (a) Detailed error message for 'Error while trying to create table container':
 {noformat}
 15/02/12 01:29:49 ERROR SparkMapRecordHandler: Error processing row: 
 org.apache.hadoop.hive.ql.metadata.HiveException: 
 org.apache.hadoop.hive.ql.metadata.HiveException: Error while trying to 
 create table container
 org.apache.hadoop.hive.ql.metadata.HiveException: 
 org.apache.hadoop.hive.ql.metadata.HiveException: Error while trying to 
 create table container
   at 
 org.apache.hadoop.hive.ql.exec.spark.HashTableLoader.load(HashTableLoader.java:118)
   at 
 org.apache.hadoop.hive.ql.exec.MapJoinOperator.loadHashTable(MapJoinOperator.java:193)
   at 
 org.apache.hadoop.hive.ql.exec.MapJoinOperator.cleanUpInputFileChangedOp(MapJoinOperator.java:219)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1051)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
   at 
 org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:486)
   at 
 org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:141)
   at 
 org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:47)
   at 
 org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27)
   at 
 org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList$ResultIterator.hasNext(HiveBaseFunctionResultList.java:98)
   at 
 scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
   at 
 org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:217)
   at 
 org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:65)
   at 
 org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
   at 
 org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
   at org.apache.spark.scheduler.Task.run(Task.scala:56)
   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error while 
 trying to create table container
   at 
 org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe.load(MapJoinTableContainerSerDe.java:158)
   at 
 org.apache.hadoop.hive.ql.exec.spark.HashTableLoader.load(HashTableLoader.java:115)
   ... 21 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error, not a 
 directory: 
 hdfs://bhx1:8020/tmp/hive/root/d22ef465-bff5-4edb-a822-0a9f1c25b66c/hive_2015-02-12_01-28-10_008_6897031694580088767-1/-mr-10009/HashTable-Stage-6/MapJoin-mapfile01--.hashtable
   at 
 org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe.load(MapJoinTableContainerSerDe.java:106)
   ... 22 more
 15/02/12 01:29:49 INFO SparkRecordHandler: maximum memory = 40939028480
 15/02/12 01:29:49 INFO PerfLogger: PERFLOG method=SparkInitializeOperators 
 from=org.apache.hadoop.hive.ql.exec.spark.SparkRecordHandler
 {noformat}
 (b) 

[jira] [Updated] (HIVE-9813) Hive JDBC - DatabaseMetaData.getColumns method cannot find classes added with add jar command

2015-03-10 Thread Yongzhi Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongzhi Chen updated HIVE-9813:
---
Attachment: HIVE-9813.1.patch

 Hive JDBC - DatabaseMetaData.getColumns method cannot find classes added with 
 add jar command
 ---

 Key: HIVE-9813
 URL: https://issues.apache.org/jira/browse/HIVE-9813
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: Yongzhi Chen
Assignee: Yongzhi Chen
 Attachments: HIVE-9813.1.patch


 Execute following JDBC client program:
 {code}
 import java.sql.*;
 
 public class TestAddJar {
     private static Connection makeConnection(String connString, String classPath)
             throws ClassNotFoundException, SQLException
     {
         System.out.println("Current Connection info: " + connString);
         Class.forName(classPath);
         System.out.println("Current driver info: " + classPath);
         return DriverManager.getConnection(connString);
     }
 
     public static void main(String[] args)
     {
         if (2 != args.length)
         {
             System.out.println("Two arguments needed: connection string, path to jar to be added (include jar name)");
             System.out.println("Example: java -jar TestApp.jar jdbc:hive2://192.168.111.111 /tmp/json-serde-1.3-jar-with-dependencies.jar");
             return;
         }
         Connection conn;
         try
         {
             conn = makeConnection(args[0], "org.apache.hive.jdbc.HiveDriver");
 
             System.out.println("---");
             System.out.println("DONE");
 
             System.out.println("---");
             System.out.println("Execute query: add jar " + args[1] + ";");
             Statement stmt = conn.createStatement();
             int c = stmt.executeUpdate("add jar " + args[1]);
             System.out.println("Returned value is: [" + c + "]\n");
 
             System.out.println("---");
             final String createTableQry = "Create table if not exists json_test(id int, content string) " +
                     "row format serde 'org.openx.data.jsonserde.JsonSerDe'";
             System.out.println("Execute query: " + createTableQry + ";");
             stmt.execute(createTableQry);
 
             System.out.println("---");
             System.out.println("getColumn() Call---\n");
             DatabaseMetaData md = conn.getMetaData();
             System.out.println("Test get all column in a schema:");
             ResultSet rs = md.getColumns("Hive", "default", "json_test", null);
             while (rs.next()) {
                 System.out.println(rs.getString(1));
             }
             conn.close();
         }
         catch (ClassNotFoundException e)
         {
             e.printStackTrace();
         }
         catch (SQLException e)
         {
             e.printStackTrace();
         }
     }
 }
 {code}
 An exception is thrown; from the metastore log:
 7:41:30.316 PM   ERROR   hive.log
 error in initSerDe: java.lang.ClassNotFoundException Class 
 org.openx.data.jsonserde.JsonSerDe not found
 java.lang.ClassNotFoundException: Class org.openx.data.jsonserde.JsonSerDe 
 not found
 at 
 org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1803)
 at 
 org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:183)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_fields(HiveMetaStore.java:2487)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_schema(HiveMetaStore.java:2542)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
 at com.sun.proxy.$Proxy5.get_schema(Unknown Source)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_schema.getResult(ThriftHiveMetastore.java:6425)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_schema.getResult(ThriftHiveMetastore.java:6409)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
 at 
 org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
 at 
 org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:107)
 at 

[jira] [Commented] (HIVE-9813) Hive JDBC - DatabaseMetaData.getColumns method cannot find classes added with add jar command

2015-03-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14356293#comment-14356293
 ] 

Hive QA commented on HIVE-9813:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12703818/HIVE-9813.1.patch

{color:green}SUCCESS:{color} +1 7762 tests passed

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2998/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2998/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2998/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12703818 - PreCommit-HIVE-TRUNK-Build

 Hive JDBC - DatabaseMetaData.getColumns method cannot find classes added with 
 add jar command
 ---

 Key: HIVE-9813
 URL: https://issues.apache.org/jira/browse/HIVE-9813
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: Yongzhi Chen
Assignee: Yongzhi Chen
 Attachments: HIVE-9813.1.patch


 Execute following JDBC client program:
 {code}
 import java.sql.*;
 
 public class TestAddJar {
     private static Connection makeConnection(String connString, String classPath)
             throws ClassNotFoundException, SQLException
     {
         System.out.println("Current Connection info: " + connString);
         Class.forName(classPath);
         System.out.println("Current driver info: " + classPath);
         return DriverManager.getConnection(connString);
     }
 
     public static void main(String[] args)
     {
         if (2 != args.length)
         {
             System.out.println("Two arguments needed: connection string, path to jar to be added (include jar name)");
             System.out.println("Example: java -jar TestApp.jar jdbc:hive2://192.168.111.111 /tmp/json-serde-1.3-jar-with-dependencies.jar");
             return;
         }
         Connection conn;
         try
         {
             conn = makeConnection(args[0], "org.apache.hive.jdbc.HiveDriver");
 
             System.out.println("---");
             System.out.println("DONE");
 
             System.out.println("---");
             System.out.println("Execute query: add jar " + args[1] + ";");
             Statement stmt = conn.createStatement();
             int c = stmt.executeUpdate("add jar " + args[1]);
             System.out.println("Returned value is: [" + c + "]\n");
 
             System.out.println("---");
             final String createTableQry = "Create table if not exists json_test(id int, content string) " +
                     "row format serde 'org.openx.data.jsonserde.JsonSerDe'";
             System.out.println("Execute query: " + createTableQry + ";");
             stmt.execute(createTableQry);
 
             System.out.println("---");
             System.out.println("getColumn() Call---\n");
             DatabaseMetaData md = conn.getMetaData();
             System.out.println("Test get all column in a schema:");
             ResultSet rs = md.getColumns("Hive", "default", "json_test", null);
             while (rs.next()) {
                 System.out.println(rs.getString(1));
             }
             conn.close();
         }
         catch (ClassNotFoundException e)
         {
             e.printStackTrace();
         }
         catch (SQLException e)
         {
             e.printStackTrace();
         }
     }
 }
 {code}
 An exception is thrown; from the metastore log:
 7:41:30.316 PM   ERROR   hive.log
 error in initSerDe: java.lang.ClassNotFoundException Class 
 org.openx.data.jsonserde.JsonSerDe not found
 java.lang.ClassNotFoundException: Class org.openx.data.jsonserde.JsonSerDe 
 not found
 at 
 org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1803)
 at 
 org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:183)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_fields(HiveMetaStore.java:2487)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_schema(HiveMetaStore.java:2542)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 

[jira] [Commented] (HIVE-9658) Reduce parquet memory use by bypassing java primitive objects on ETypeConverter

2015-03-10 Thread Chao (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355195#comment-14355195
 ] 

Chao commented on HIVE-9658:


[~spena], OK, I'll try. I've never done a merge before, so I need to take some 
time to learn this...

 Reduce parquet memory use by bypassing java primitive objects on 
 ETypeConverter
 ---

 Key: HIVE-9658
 URL: https://issues.apache.org/jira/browse/HIVE-9658
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergio Peña
Assignee: Sergio Peña
 Attachments: HIVE-9658.1.patch, HIVE-9658.2.patch


 The ETypeConverter class passes Writable objects to the collection converters 
 in order to be read later by the map/reduce functions. These objects are all 
 wrapped in a single ArrayWritable object.
 We can save some memory by returning the Java primitive objects instead, in 
 order to prevent memory allocation. The only writable object needed by 
 map/reduce is ArrayWritable. If we create another writable class in which to 
 store primitive objects (Object), then we can stop using all the primitive 
 writables.
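The idea can be sketched in plain Java (the class name here is hypothetical, not one of Hive's actual classes): a single container holds a row's values as plain Objects, so primitive values are not each wrapped in a per-cell Writable object:

```java
import java.util.Arrays;

// Hypothetical row container: one object per row instead of one
// Writable wrapper per primitive value in every cell.
public class ObjectRow {
    private final Object[] fields;

    public ObjectRow(Object... fields) {
        this.fields = fields;
    }

    public Object get(int i) {
        return fields[i];
    }

    public int size() {
        return fields.length;
    }

    @Override
    public String toString() {
        return Arrays.toString(fields);
    }
}
```

Autoboxing still allocates wrapper objects for primitives, but this avoids the extra Writable layer and the per-cell wrapping/unwrapping the description refers to.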



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9909) Specify hive branch to use on jenkins hms tests

2015-03-10 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355181#comment-14355181
 ] 

Brock Noland commented on HIVE-9909:


+1

 Specify hive branch to use on jenkins hms tests
 ---

 Key: HIVE-9909
 URL: https://issues.apache.org/jira/browse/HIVE-9909
 Project: Hive
  Issue Type: Improvement
Reporter: Sergio Peña
Assignee: Sergio Peña
 Attachments: HIVE-9909.1.patch


 The HMS metastore upgrade scripts work with the 'trunk' branch only. 
 We should allow checking out any branch specified on the Jenkins job, so that 
 branch users can test their changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9828) Semantic analyzer does not capture view parent entity for tables referred in view with union all

2015-03-10 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355173#comment-14355173
 ] 

Xuefu Zhang commented on HIVE-9828:
---

+1

 Semantic analyzer does not capture view parent entity for tables referred in 
 view with union all 
 -

 Key: HIVE-9828
 URL: https://issues.apache.org/jira/browse/HIVE-9828
 Project: Hive
  Issue Type: Bug
  Components: Parser
Affects Versions: 1.1.0
Reporter: Prasad Mujumdar
 Fix For: 1.2.0

 Attachments: HIVE-9828.1-npf.patch


 The Hive compiler adds tables used in a view definition to the input entity 
 list, with the view as the parent entity for each table.
 In the case of a view with a union all query, this is not being done properly. 
 For example,
 {noformat}
 create view view1 as select t.id from (select tab1.id from db.tab1 union all 
 select tab2.id from db.tab2 ) t;
 {noformat}
 This query will capture tab1 and tab2 as read entities without view1 as parent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9658) Reduce parquet memory use by bypassing java primitive objects on ETypeConverter

2015-03-10 Thread Sergio Peña (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355176#comment-14355176
 ] 

Sergio Peña commented on HIVE-9658:
---

[~csun] Could you merge latest trunk changes into parquet branch?
I need it so that I can update this patch, and it can be merged to parquet.

 Reduce parquet memory use by bypassing java primitive objects on 
 ETypeConverter
 ---

 Key: HIVE-9658
 URL: https://issues.apache.org/jira/browse/HIVE-9658
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergio Peña
Assignee: Sergio Peña
 Attachments: HIVE-9658.1.patch, HIVE-9658.2.patch


 The ETypeConverter class passes Writable objects to the collection converters 
 in order to be read later by the map/reduce functions. These objects are all 
 wrapped in a single ArrayWritable object.
 We can save some memory by returning the Java primitive objects instead, in 
 order to prevent memory allocation. The only writable object needed by 
 map/reduce is ArrayWritable. If we create another writable class in which to 
 store primitive objects (Object), then we can stop using all the primitive 
 writables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9909) Specify hive branch to use on jenkins hms tests

2015-03-10 Thread Sergio Peña (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-9909:
--
Attachment: HIVE-9909.1.patch

 Specify hive branch to use on jenkins hms tests
 ---

 Key: HIVE-9909
 URL: https://issues.apache.org/jira/browse/HIVE-9909
 Project: Hive
  Issue Type: Improvement
Reporter: Sergio Peña
Assignee: Sergio Peña
 Attachments: HIVE-9909.1.patch


 The HMS metastore upgrade scripts work with the 'trunk' branch only. 
 We should allow checking out any branch specified on the Jenkins job, so that 
 branch users can test their changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-6617) Reduce ambiguity in grammar

2015-03-10 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355206#comment-14355206
 ] 

Pengcheng Xiong commented on HIVE-6617:
---

[~ashutoshc], HIVE-6617.25 is the patch that is pending to be checked in, 
rather than HIVE-6617.26.

 Reduce ambiguity in grammar
 ---

 Key: HIVE-6617
 URL: https://issues.apache.org/jira/browse/HIVE-6617
 Project: Hive
  Issue Type: Task
Reporter: Ashutosh Chauhan
Assignee: Pengcheng Xiong
 Attachments: HIVE-6617.01.patch, HIVE-6617.02.patch, 
 HIVE-6617.03.patch, HIVE-6617.04.patch, HIVE-6617.05.patch, 
 HIVE-6617.06.patch, HIVE-6617.07.patch, HIVE-6617.08.patch, 
 HIVE-6617.09.patch, HIVE-6617.10.patch, HIVE-6617.11.patch, 
 HIVE-6617.12.patch, HIVE-6617.13.patch, HIVE-6617.14.patch, 
 HIVE-6617.15.patch, HIVE-6617.16.patch, HIVE-6617.17.patch, 
 HIVE-6617.18.patch, HIVE-6617.19.patch, HIVE-6617.20.patch, 
 HIVE-6617.21.patch, HIVE-6617.22.patch, HIVE-6617.23.patch, 
 HIVE-6617.24.patch, HIVE-6617.25.patch, HIVE-6617.26.patch, parser.png


 CLEAR LIBRARY CACHE
 As of today, antlr reports 214 warnings. Need to bring down this number, 
 ideally to 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-6617) Reduce ambiguity in grammar

2015-03-10 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355200#comment-14355200
 ] 

Pengcheng Xiong commented on HIVE-6617:
---

[~ashutoshc], both patches passed. I updated the RB as well. I think it is safe 
to go. Thanks.

 Reduce ambiguity in grammar
 ---

 Key: HIVE-6617
 URL: https://issues.apache.org/jira/browse/HIVE-6617
 Project: Hive
  Issue Type: Task
Reporter: Ashutosh Chauhan
Assignee: Pengcheng Xiong
 Attachments: HIVE-6617.01.patch, HIVE-6617.02.patch, 
 HIVE-6617.03.patch, HIVE-6617.04.patch, HIVE-6617.05.patch, 
 HIVE-6617.06.patch, HIVE-6617.07.patch, HIVE-6617.08.patch, 
 HIVE-6617.09.patch, HIVE-6617.10.patch, HIVE-6617.11.patch, 
 HIVE-6617.12.patch, HIVE-6617.13.patch, HIVE-6617.14.patch, 
 HIVE-6617.15.patch, HIVE-6617.16.patch, HIVE-6617.17.patch, 
 HIVE-6617.18.patch, HIVE-6617.19.patch, HIVE-6617.20.patch, 
 HIVE-6617.21.patch, HIVE-6617.22.patch, HIVE-6617.23.patch, 
 HIVE-6617.24.patch, HIVE-6617.25.patch, HIVE-6617.26.patch, parser.png


 CLEAR LIBRARY CACHE
 As of today, antlr reports 214 warnings. Need to bring down this number, 
 ideally to 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9659) 'Error while trying to create table container' occurs during hive query case execution when hive.optimize.skewjoin set to 'true' [Spark Branch]

2015-03-10 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14356088#comment-14356088
 ] 

Rui Li commented on HIVE-9659:
--

Xuefu - Thanks very much for the explanation! I'll generate the incorrect MR 
output and file another JIRA to fix it.

 'Error while trying to create table container' occurs during hive query case 
 execution when hive.optimize.skewjoin set to 'true' [Spark Branch]
 ---

 Key: HIVE-9659
 URL: https://issues.apache.org/jira/browse/HIVE-9659
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xin Hao
Assignee: Rui Li
 Attachments: HIVE-9659.1-spark.patch, HIVE-9659.2-spark.patch, 
 HIVE-9659.3-spark.patch


 We found that 'Error while trying to create table container' occurs during 
 Big-Bench Q12 case execution when hive.optimize.skewjoin is set to 'true'.
 If hive.optimize.skewjoin is set to 'false', the case passes.
 How to reproduce:
 1. set hive.optimize.skewjoin=true;
 2. Run BigBench case Q12 and it will fail. 
 Check the executor log (e.g. /usr/lib/spark/work/app-/2/stderr) and you 
 will find the error 'Error while trying to create table container' in the log 
 and also a NullPointerException near the end of the log.
 (a) Detailed error message for 'Error while trying to create table container':
 {noformat}
 15/02/12 01:29:49 ERROR SparkMapRecordHandler: Error processing row: 
 org.apache.hadoop.hive.ql.metadata.HiveException: 
 org.apache.hadoop.hive.ql.metadata.HiveException: Error while trying to 
 create table container
 org.apache.hadoop.hive.ql.metadata.HiveException: 
 org.apache.hadoop.hive.ql.metadata.HiveException: Error while trying to 
 create table container
   at 
 org.apache.hadoop.hive.ql.exec.spark.HashTableLoader.load(HashTableLoader.java:118)
   at 
 org.apache.hadoop.hive.ql.exec.MapJoinOperator.loadHashTable(MapJoinOperator.java:193)
   at 
 org.apache.hadoop.hive.ql.exec.MapJoinOperator.cleanUpInputFileChangedOp(MapJoinOperator.java:219)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1051)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
   at 
 org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:486)
   at 
 org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:141)
   at 
 org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:47)
   at 
 org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27)
   at 
 org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList$ResultIterator.hasNext(HiveBaseFunctionResultList.java:98)
   at 
 scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
   at 
 org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:217)
   at 
 org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:65)
   at 
 org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
   at 
 org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
   at org.apache.spark.scheduler.Task.run(Task.scala:56)
   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error while 
 trying to create table container
   at 
 org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe.load(MapJoinTableContainerSerDe.java:158)
   at 
 org.apache.hadoop.hive.ql.exec.spark.HashTableLoader.load(HashTableLoader.java:115)
   ... 21 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error, not a 
 directory: 
 hdfs://bhx1:8020/tmp/hive/root/d22ef465-bff5-4edb-a822-0a9f1c25b66c/hive_2015-02-12_01-28-10_008_6897031694580088767-1/-mr-10009/HashTable-Stage-6/MapJoin-mapfile01--.hashtable
   at 
 org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe.load(MapJoinTableContainerSerDe.java:106)
   ... 22 more
 15/02/12 01:29:49 INFO SparkRecordHandler: maximum memory = 40939028480
 15/02/12 01:29:49 INFO PerfLogger: PERFLOG method=SparkInitializeOperators 

[jira] [Updated] (HIVE-9813) Hive JDBC - DatabaseMetaData.getColumns method cannot find classes added with add jar command

2015-03-10 Thread Yongzhi Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongzhi Chen updated HIVE-9813:
---
Attachment: (was: HIVE-9813.1.patch)

 Hive JDBC - DatabaseMetaData.getColumns method cannot find classes added with 
 add jar command
 ---

 Key: HIVE-9813
 URL: https://issues.apache.org/jira/browse/HIVE-9813
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: Yongzhi Chen
Assignee: Yongzhi Chen

 Execute following JDBC client program:
 {code}
 import java.sql.*;

 public class TestAddJar {
     private static Connection makeConnection(String connString, String classPath)
             throws ClassNotFoundException, SQLException {
         System.out.println("Current Connection info: " + connString);
         Class.forName(classPath);
         System.out.println("Current driver info: " + classPath);
         return DriverManager.getConnection(connString);
     }

     public static void main(String[] args) {
         if (2 != args.length) {
             System.out.println("Two arguments needed: connection string, path to jar to be added (include jar name)");
             System.out.println("Example: java -jar TestApp.jar jdbc:hive2://192.168.111.111 /tmp/json-serde-1.3-jar-with-dependencies.jar");
             return;
         }
         Connection conn;
         try {
             conn = makeConnection(args[0], "org.apache.hive.jdbc.HiveDriver");
             System.out.println("---");
             System.out.println("DONE");

             System.out.println("---");
             System.out.println("Execute query: add jar " + args[1] + ";");
             Statement stmt = conn.createStatement();
             int c = stmt.executeUpdate("add jar " + args[1]);
             System.out.println("Returned value is: [" + c + "]\n");

             System.out.println("---");
             final String createTableQry = "Create table if not exists json_test(id int, content string) " +
                     "row format serde 'org.openx.data.jsonserde.JsonSerDe'";
             System.out.println("Execute query: " + createTableQry + ";");
             stmt.execute(createTableQry);

             System.out.println("---");
             System.out.println("getColumn() Call---\n");
             DatabaseMetaData md = conn.getMetaData();
             System.out.println("Test: get all columns in a schema:");
             ResultSet rs = md.getColumns("Hive", "default", "json_test", null);
             while (rs.next()) {
                 System.out.println(rs.getString(1));
             }
             conn.close();
         } catch (ClassNotFoundException e) {
             e.printStackTrace();
         } catch (SQLException e) {
             e.printStackTrace();
         }
     }
 }
 {code}
 An exception is thrown; the metastore log shows:
 7:41:30.316 PM  ERROR  hive.log
 error in initSerDe: java.lang.ClassNotFoundException Class 
 org.openx.data.jsonserde.JsonSerDe not found
 java.lang.ClassNotFoundException: Class org.openx.data.jsonserde.JsonSerDe 
 not found
 at 
 org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1803)
 at 
 org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:183)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_fields(HiveMetaStore.java:2487)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_schema(HiveMetaStore.java:2542)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
 at com.sun.proxy.$Proxy5.get_schema(Unknown Source)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_schema.getResult(ThriftHiveMetastore.java:6425)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_schema.getResult(ThriftHiveMetastore.java:6409)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
 at 
 org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
 at 
 org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:107)
 at 

[jira] [Updated] (HIVE-9555) assorted ORC refactorings for LLAP on trunk

2015-03-10 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-9555:
---
Attachment: HIVE-9555.07.patch

recent CR feedback addressed

 assorted ORC refactorings for LLAP on trunk
 ---

 Key: HIVE-9555
 URL: https://issues.apache.org/jira/browse/HIVE-9555
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-9555.01.patch, HIVE-9555.02.patch, 
 HIVE-9555.03.patch, HIVE-9555.04.patch, HIVE-9555.05.patch, 
 HIVE-9555.06.patch, HIVE-9555.07.patch, HIVE-9555.patch


 To minimize conflicts and given that ORC is being developed rapidly on trunk, 
 I would like to refactor some parts of ORC in advance based on the changes 
 in LLAP branch. Mostly it concerns making parts of ORC code (esp. SARG, but 
 also some internal methods) more modular and easier to use from alternative 
 codepaths. There's also significant change to how data reading is handled - 
 BufferChunk inherits from DiskRange; the reader receives a list of 
 DiskRange-s (as before), but instead of making a list of buffer chunks it 
 replaces ranges with buffer chunks in the original (linked) list. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9918) Spark branch build is failing due to unknown url [Spark Branch]

2015-03-10 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-9918:
--
Summary: Spark branch build is failing due to unknown url [Spark Branch]  
(was: Spark branch build is failing due to unknown url)

 Spark branch build is failing due to unknown url [Spark Branch]
 ---

 Key: HIVE-9918
 URL: https://issues.apache.org/jira/browse/HIVE-9918
 Project: Hive
  Issue Type: Bug
  Components: Spark, spark-branch
Reporter: Sergio Peña
Assignee: Sergio Peña
Priority: Blocker
 Attachments: HIVE-9918.1-spark.patch, HIVE-9918.1.patch


 The Spark branch is failing due to a URL that no longer exists. This 
 URL contains all the Spark jars used to build.
 These Spark jar versions are not in the official Maven repository.





[jira] [Updated] (HIVE-9920) DROP DATABASE IF EXISTS throws exception if database does not exist

2015-03-10 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-9920:
--
Attachment: HIVE-9920.patch

Do not log the error for get_database in RetryingHMSHandler.java. With this 
change, we only get a one-line warning:
15/03/11 00:55:22 WARN metastore.ObjectStore: Failed to get database 
nonexisting, returning NoSuchObjectException
HIVE-7737 and HIVE-8564 solved a similar issue for get_table on a nonexistent 
table.
Please review the patch. Thanks.

 DROP DATABASE IF EXISTS throws exception if database does not exist
 ---

 Key: HIVE-9920
 URL: https://issues.apache.org/jira/browse/HIVE-9920
 Project: Hive
  Issue Type: Bug
  Components: Logging, Metastore
Affects Versions: 1.0.0
Reporter: Chaoyu Tang
Assignee: Chaoyu Tang
Priority: Minor
 Attachments: HIVE-9920.patch


 drop database if exists noexistingdb throws and logs the full exception if 
 the database (noexistingdb) does not exist:
 15/03/10 22:47:22 WARN metastore.ObjectStore: Failed to get database 
 statsdb2, returning NoSuchObjectException
 15/03/11 00:19:55 ERROR metastore.RetryingHMSHandler: 
 NoSuchObjectException(message:statsdb2)
   at 
 org.apache.hadoop.hive.metastore.ObjectStore.getDatabase(ObjectStore.java:569)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:98)
   at com.sun.proxy.$Proxy6.getDatabase(Unknown Source)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_database_core(HiveMetaStore.java:953)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_database(HiveMetaStore.java:927)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
   at com.sun.proxy.$Proxy8.get_database(Unknown Source)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabase(HiveMetaStoreClient.java:1150)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:91)
   at com.sun.proxy.$Proxy9.getDatabase(Unknown Source)
   at org.apache.hadoop.hive.ql.metadata.Hive.getDatabase(Hive.java:1291)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.getDatabase(BaseSemanticAnalyzer.java:1364)
   at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeDropDatabase(DDLSemanticAnalyzer.java:777)
   at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeInternal(DDLSemanticAnalyzer.java:427)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:224)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:425)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:309)
   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1116)
   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1164)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1053)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1043)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:207)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:159)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:370)
   at 
 org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:754)
   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)





[jira] [Commented] (HIVE-9555) assorted ORC refactorings for LLAP on trunk

2015-03-10 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14356051#comment-14356051
 ] 

Prasanth Jayachandran commented on HIVE-9555:
-

LGTM, +1. Pending tests on the new patch.

 assorted ORC refactorings for LLAP on trunk
 ---

 Key: HIVE-9555
 URL: https://issues.apache.org/jira/browse/HIVE-9555
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-9555.01.patch, HIVE-9555.02.patch, 
 HIVE-9555.03.patch, HIVE-9555.04.patch, HIVE-9555.05.patch, 
 HIVE-9555.06.patch, HIVE-9555.07.patch, HIVE-9555.patch


 To minimize conflicts and given that ORC is being developed rapidly on trunk, 
 I would like to refactor some parts of ORC in advance based on the changes 
 in LLAP branch. Mostly it concerns making parts of ORC code (esp. SARG, but 
 also some internal methods) more modular and easier to use from alternative 
 codepaths. There's also significant change to how data reading is handled - 
 BufferChunk inherits from DiskRange; the reader receives a list of 
 DiskRange-s (as before), but instead of making a list of buffer chunks it 
 replaces ranges with buffer chunks in the original (linked) list. 





[jira] [Commented] (HIVE-9857) Create Factorial UDF

2015-03-10 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14356071#comment-14356071
 ] 

Jason Dere commented on HIVE-9857:
--

+1

 Create Factorial UDF
 

 Key: HIVE-9857
 URL: https://issues.apache.org/jira/browse/HIVE-9857
 Project: Hive
  Issue Type: Improvement
  Components: UDF
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
 Attachments: HIVE-9857.1.patch, HIVE-9857.2.patch


 Function signature: factorial(int a): bigint
 For example 5!= 5*4*3*2*1=120
 {code}
 select factorial(5);
 OK
 120
 {code}
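As a sketch of the semantics described above (not the actual Hive UDF implementation; the class name `Factorial` is illustrative only), the bigint result can be computed in plain Java. Note that 20! is the largest factorial that fits in a signed 64-bit bigint; the real UDF's out-of-range behavior (e.g. returning NULL) may differ from the exception thrown here.

```java
// Illustrative sketch of factorial(int a): bigint; not the Hive UDF itself.
public class Factorial {
    public static long factorial(int a) {
        // 21! overflows a signed 64-bit bigint, so restrict the domain.
        if (a < 0 || a > 20) {
            throw new IllegalArgumentException("factorial argument out of range: " + a);
        }
        long result = 1L;
        for (int i = 2; i <= a; i++) {
            result *= i;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // prints 120
    }
}
```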





[jira] [Commented] (HIVE-9893) HiveServer2 java.lang.OutOfMemoryError: Java heap space

2015-03-10 Thread Nemon Lou (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14356070#comment-14356070
 ] 

Nemon Lou commented on HIVE-9893:
-

Do you see any failed queries? If so, give HIVE-9839 a try.

 HiveServer2 java.lang.OutOfMemoryError: Java heap space
 ---

 Key: HIVE-9893
 URL: https://issues.apache.org/jira/browse/HIVE-9893
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.14.0
 Environment: Vmware Pseudo cluster 
 HDP 2.2 Ambari vanilla install 
 Centos 6.5
Reporter: Rupert Bailey
  Labels: Hadoop, Hive, Tez, Thrift

 Everything runs but dies after a few days with the Java heap space memory 
 error, even with no activity on the cluster. It failed most recently after 5 
 days.
 I tried to fix it after noticing the SLF4J library duplication, wondering 
 whether conflicting libraries were causing the error, but it still fails.
 Nagios reports HiveServer2 Flapping
 HiveServer2.log output:
 22488-2015-03-07 22:19:08,359 INFO  [HiveServer2-Handler-Pool: Thread-13139]: 
 thrift.ThriftCLIService (ThriftCLIService.java:OpenSession(232)) - Client 
 protocol version: HIVE_CLI_SERVICE_PROTOCOL_V6
 22489-2015-03-07 22:19:19,056 ERROR [HiveServer2-Handler-Pool: Thread-13139]: 
 thrift.ProcessFunction (ProcessFunction.java:process(41)) - Internal error 
 processing OpenSession
 22490:java.lang.OutOfMemoryError: Java heap space
 22491-2015-03-07 22:19:22,515 INFO  [Thread-6]: server.HiveServer2 
 (HiveServer2.java:stop(299)) - Shutting down HiveServer2
 22492-2015-03-07 22:19:22,516 INFO  [Thread-6]: thrift.ThriftCLIService 
 (ThriftCLIService.java:stop(137)) - Thrift server has stopped
 22493-2015-03-07 22:19:22,516 INFO  [Thread-6]: service.AbstractService 
 (AbstractService.java:stop(125)) - Service:ThriftBinaryCLIService is stopped.
 22494-2015-03-07 22:19:27,078 INFO  [Thread-6]: service.AbstractService 
 (AbstractService.java:stop(125)) - Service:OperationManager is stopped.
 22495-2015-03-07 22:19:27,078 INFO  [Thread-6]: service.AbstractService 
 (AbstractService.java:stop(125)) - Service:SessionManager is stopped.
 22496:2015-03-07 22:19:36,096 WARN  [Thread-0]: util.ShutdownHookManager 
 (ShutdownHookManager.java:run(56)) - ShutdownHook 'ClientFinalizer' failed, 
 java.lang.OutOfMemoryError: Java heap space
 22497:java.lang.OutOfMemoryError: Java heap space





[jira] [Commented] (HIVE-9555) assorted ORC refactorings for LLAP on trunk

2015-03-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14356177#comment-14356177
 ] 

Hive QA commented on HIVE-9555:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12703802/HIVE-9555.07.patch

{color:green}SUCCESS:{color} +1 7762 tests passed

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2997/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2997/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2997/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12703802 - PreCommit-HIVE-TRUNK-Build

 assorted ORC refactorings for LLAP on trunk
 ---

 Key: HIVE-9555
 URL: https://issues.apache.org/jira/browse/HIVE-9555
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-9555.01.patch, HIVE-9555.02.patch, 
 HIVE-9555.03.patch, HIVE-9555.04.patch, HIVE-9555.05.patch, 
 HIVE-9555.06.patch, HIVE-9555.07.patch, HIVE-9555.patch


 To minimize conflicts and given that ORC is being developed rapidly on trunk, 
 I would like to refactor some parts of ORC in advance based on the changes 
 in LLAP branch. Mostly it concerns making parts of ORC code (esp. SARG, but 
 also some internal methods) more modular and easier to use from alternative 
 codepaths. There's also significant change to how data reading is handled - 
 BufferChunk inherits from DiskRange; the reader receives a list of 
 DiskRange-s (as before), but instead of making a list of buffer chunks it 
 replaces ranges with buffer chunks in the original (linked) list. 





[jira] [Commented] (HIVE-9895) Update hive people page with recent changes

2015-03-10 Thread Chao (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355104#comment-14355104
 ] 

Chao commented on HIVE-9895:


+1

 Update hive people page with recent changes
 ---

 Key: HIVE-9895
 URL: https://issues.apache.org/jira/browse/HIVE-9895
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-9895.patch








[jira] [Resolved] (HIVE-9895) Update hive people page with recent changes

2015-03-10 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland resolved HIVE-9895.

Resolution: Fixed

Thx Chao! I committed to the website.

 Update hive people page with recent changes
 ---

 Key: HIVE-9895
 URL: https://issues.apache.org/jira/browse/HIVE-9895
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-9895.patch








[jira] [Commented] (HIVE-9906) Add timeout mechanism in RawStoreProxy

2015-03-10 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355097#comment-14355097
 ] 

Brock Noland commented on HIVE-9906:


+1

 Add timeout mechanism in RawStoreProxy
 --

 Key: HIVE-9906
 URL: https://issues.apache.org/jira/browse/HIVE-9906
 Project: Hive
  Issue Type: Sub-task
  Components: Metastore
Reporter: Dong Chen
Assignee: Dong Chen
 Attachments: HIVE-9906.patch


 In HIVE-9253, we added a timeout mechanism in HMS. We start the timer in 
 RetryingHMSHandler.invoke; the call then flows through RawStoreProxy.invoke to 
 ObjectStore.xxxMethod. The timer is stopped after the method completes.
 It was found that ObjectStore methods may be invoked directly from 
 o.a.h.h.ql.txn.compactor.CompactorThread rather than through HMSHandler. This 
 causes the timeout check to throw an exception. We need to fix this bug here.
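The failure mode can be illustrated with a generic per-thread deadline sketch (this is not Hive's actual timer code; the class and method names are hypothetical): a strict check throws when invoked on a thread where the timer was never started, which is what happens on the direct-invocation path, while a tolerant variant simply skips the check when no timer is active.

```java
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of a per-thread timeout check, not Hive's implementation.
public class DeadlineSketch {
    private static final ThreadLocal<Long> deadlineNanos = new ThreadLocal<>();

    // Called by the normal entry point (analogous to RetryingHMSHandler.invoke).
    public static void start(long timeoutMs) {
        deadlineNanos.set(System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs));
    }

    public static void stop() {
        deadlineNanos.remove();
    }

    // Strict check: throws if the timer was never started on this thread,
    // which is the bug hit by direct invocation that bypasses the entry point.
    public static void check() {
        Long deadline = deadlineNanos.get();
        if (deadline == null) {
            throw new IllegalStateException("timeout check without an active timer");
        }
        if (System.nanoTime() > deadline) {
            throw new RuntimeException("operation timed out");
        }
    }

    // Tolerant variant: a no-op when no timer is active, the kind of behavior
    // a direct-invocation code path needs.
    public static void checkIfStarted() {
        if (deadlineNanos.get() != null) {
            check();
        }
    }
}
```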





[jira] [Commented] (HIVE-9916) Fix TestSparkSessionManagerImpl [Spark Branch]

2015-03-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355653#comment-14355653
 ] 

Hive QA commented on HIVE-9916:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12703735/HIVE-9916.1-spark.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/777/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/777/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-777/

Messages:
{noformat}
 This message was trimmed, see log for full details 
warning(200): IdentifiersParser.g:68:4: 
Decision can match input such as LPAREN KW_NULL KW_AND using multiple 
alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:68:4: 
Decision can match input such as LPAREN KW_TIMESTAMP StringLiteral using 
multiple alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:68:4: 
Decision can match input such as LPAREN CharSetName CharSetLiteral using 
multiple alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:68:4: 
Decision can match input such as LPAREN KW_NULL LESSTHANOREQUALTO using 
multiple alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:68:4: 
Decision can match input such as LPAREN LPAREN StringLiteral using multiple 
alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:68:4: 
Decision can match input such as LPAREN KW_NULL LESSTHAN using multiple 
alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:68:4: 
Decision can match input such as LPAREN KW_CASE KW_EXISTS using multiple 
alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:68:4: 
Decision can match input such as LPAREN KW_NULL GREATERTHANOREQUALTO using 
multiple alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:68:4: 
Decision can match input such as LPAREN KW_DATE StringLiteral using multiple 
alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:68:4: 
Decision can match input such as LPAREN KW_NULL GREATERTHAN using multiple 
alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:68:4: 
Decision can match input such as LPAREN KW_NULL BITWISEXOR using multiple 
alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:68:4: 
Decision can match input such as LPAREN KW_CASE KW_ARRAY using multiple 
alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:68:4: 
Decision can match input such as LPAREN KW_NULL KW_BETWEEN using multiple 
alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:68:4: 
Decision can match input such as LPAREN KW_CASE KW_STRUCT using multiple 
alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:115:5: 
Decision can match input such as KW_CLUSTER KW_BY LPAREN using multiple 
alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:127:5: 
Decision can match input such as KW_PARTITION KW_BY LPAREN using multiple 
alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:138:5: 
Decision can match input such as KW_DISTRIBUTE KW_BY LPAREN using multiple 
alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:149:5: 
Decision can match input such as KW_SORT KW_BY LPAREN using multiple 
alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:166:7: 
Decision can match input such as STAR using multiple alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:194:5: 
Decision can match input such as KW_ARRAY using multiple alternatives: 2, 6

As a result, alternative(s) 6 were disabled for that input
warning(200): IdentifiersParser.g:194:5: 
Decision can match input such as KW_UNIONTYPE using multiple alternatives: 5, 
6

As a result, alternative(s) 6 were disabled for that input
warning(200): IdentifiersParser.g:194:5: 
Decision can match input such as 

[jira] [Commented] (HIVE-9555) assorted ORC refactorings for LLAP on trunk

2015-03-10 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355689#comment-14355689
 ] 

Sergey Shelukhin commented on HIVE-9555:


Please ignore the one in b/ directory.

 assorted ORC refactorings for LLAP on trunk
 ---

 Key: HIVE-9555
 URL: https://issues.apache.org/jira/browse/HIVE-9555
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-9555.01.patch, HIVE-9555.02.patch, 
 HIVE-9555.03.patch, HIVE-9555.04.patch, HIVE-9555.05.patch, 
 HIVE-9555.06.patch, HIVE-9555.patch


 To minimize conflicts and given that ORC is being developed rapidly on trunk, 
 I would like to refactor some parts of ORC in advance based on the changes 
 in LLAP branch. Mostly it concerns making parts of ORC code (esp. SARG, but 
 also some internal methods) more modular and easier to use from alternative 
 codepaths. There's also significant change to how data reading is handled - 
 BufferChunk inherits from DiskRange; the reader receives a list of 
 DiskRange-s (as before), but instead of making a list of buffer chunks it 
 replaces ranges with buffer chunks in the original (linked) list. 





[jira] [Updated] (HIVE-9916) Fix TestSparkSessionManagerImpl [Spark Branch]

2015-03-10 Thread Chao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao updated HIVE-9916:
---
Attachment: HIVE-9916.1-spark.patch

 Fix TestSparkSessionManagerImpl [Spark Branch]
 --

 Key: HIVE-9916
 URL: https://issues.apache.org/jira/browse/HIVE-9916
 Project: Hive
  Issue Type: Bug
  Components: spark-branch
Affects Versions: spark-branch
Reporter: Chao
Assignee: Chao
 Attachments: HIVE-9916.1-spark.patch


 Looks like the wrong patch was committed in HIVE-9872, and therefore 
 TestSparkSessionManagerImpl will still fail. This JIRA should fix it.





[jira] [Updated] (HIVE-9555) assorted ORC refactorings for LLAP on trunk

2015-03-10 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-9555:
---
Attachment: HIVE-9555.06.patch

CR feedback, test fixes, some code changes from recent LLAP changes

 assorted ORC refactorings for LLAP on trunk
 ---

 Key: HIVE-9555
 URL: https://issues.apache.org/jira/browse/HIVE-9555
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-9555.01.patch, HIVE-9555.02.patch, 
 HIVE-9555.03.patch, HIVE-9555.04.patch, HIVE-9555.05.patch, 
 HIVE-9555.06.patch, HIVE-9555.patch


 To minimize conflicts and given that ORC is being developed rapidly on trunk, 
 I would like to refactor some parts of ORC in advance based on the changes 
 in LLAP branch. Mostly it concerns making parts of ORC code (esp. SARG, but 
 also some internal methods) more modular and easier to use from alternative 
 codepaths. There's also significant change to how data reading is handled - 
 BufferChunk inherits from DiskRange; the reader receives a list of 
 DiskRange-s (as before), but instead of making a list of buffer chunks it 
 replaces ranges with buffer chunks in the original (linked) list. 





[jira] [Commented] (HIVE-9858) Create cbrt (cube root) UDF

2015-03-10 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355628#comment-14355628
 ] 

Jason Dere commented on HIVE-9858:
--

+1

 Create cbrt (cube root) UDF
 ---

 Key: HIVE-9858
 URL: https://issues.apache.org/jira/browse/HIVE-9858
 Project: Hive
  Issue Type: Improvement
  Components: UDF
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
 Attachments: HIVE-9858.1.patch, HIVE-9858.1.patch, HIVE-9858.2.patch


 returns the cube root of a double value
 cbrt(double a) : double
 For example:
 {code}
 select cbrt(87860583272930481.0);
 OK
 444561.0
 {code}
 I noticed that Math.pow(a, 1.0/3.0) and the Hive power UDF return 
 444560.965 for the example above, whereas Math.cbrt returns 444561.0.
 This is why Hive should have a cbrt function.
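The precision difference motivating the UDF can be reproduced in plain Java, using the perfect cube from the example above (the class name `CbrtDemo` is illustrative only):

```java
// For the perfect cube 444561^3, Math.cbrt is exact while
// Math.pow(x, 1.0/3.0) falls just short of the true root.
public class CbrtDemo {
    public static void main(String[] args) {
        double x = 87860583272930481.0; // 444561^3
        System.out.println(Math.cbrt(x));           // 444561.0
        System.out.println(Math.pow(x, 1.0 / 3.0)); // slightly below 444561.0
    }
}
```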





[jira] [Commented] (HIVE-9555) assorted ORC refactorings for LLAP on trunk

2015-03-10 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355665#comment-14355665
 ] 

Prasanth Jayachandran commented on HIVE-9555:
-

I think there is something wrong with the way you generate the patch, or there 
is a stray directory. I can see DiskRange.java repeated twice in the diff.

 assorted ORC refactorings for LLAP on trunk
 ---

 Key: HIVE-9555
 URL: https://issues.apache.org/jira/browse/HIVE-9555
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-9555.01.patch, HIVE-9555.02.patch, 
 HIVE-9555.03.patch, HIVE-9555.04.patch, HIVE-9555.05.patch, 
 HIVE-9555.06.patch, HIVE-9555.patch


 To minimize conflicts and given that ORC is being developed rapidly on trunk, 
 I would like to refactor some parts of ORC in advance based on the changes 
 in LLAP branch. Mostly it concerns making parts of ORC code (esp. SARG, but 
 also some internal methods) more modular and easier to use from alternative 
 codepaths. There's also significant change to how data reading is handled - 
 BufferChunk inherits from DiskRange; the reader receives a list of 
 DiskRange-s (as before), but instead of making a list of buffer chunks it 
 replaces ranges with buffer chunks in the original (linked) list. 
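The in-place replacement scheme described above can be sketched with a `ListIterator`. The class names here are simplified stand-ins for the ORC types, not the real API; the "read" is faked with an empty buffer:

```java
import java.util.LinkedList;
import java.util.ListIterator;

// Hypothetical stand-ins for ORC's DiskRange/BufferChunk hierarchy.
class DiskRange {
    final long offset, end;
    DiskRange(long offset, long end) { this.offset = offset; this.end = end; }
}

class BufferChunk extends DiskRange {
    final byte[] data;
    BufferChunk(long offset, long end, byte[] data) { super(offset, end); this.data = data; }
}

public class RangeListDemo {
    // Replace each plain DiskRange with the BufferChunk read for it, in place,
    // preserving list order -- mirroring the linked-list replacement described above.
    static void materialize(LinkedList<DiskRange> ranges) {
        ListIterator<DiskRange> it = ranges.listIterator();
        while (it.hasNext()) {
            DiskRange r = it.next();
            if (!(r instanceof BufferChunk)) {
                byte[] data = new byte[(int) (r.end - r.offset)]; // stand-in for a disk read
                it.set(new BufferChunk(r.offset, r.end, data));   // swap node in place
            }
        }
    }

    public static void main(String[] args) {
        LinkedList<DiskRange> ranges = new LinkedList<>();
        ranges.add(new DiskRange(0, 10));
        ranges.add(new DiskRange(10, 25));
        materialize(ranges);
        for (DiskRange r : ranges) System.out.println(r instanceof BufferChunk);
    }
}
```

The design point is that the caller's list is mutated rather than copied, so ranges already materialized as buffer chunks are left untouched on a second pass.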



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9905) Investigate ways to improve NDV calculations during stats aggregation [hbase-metastore branch]

2015-03-10 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355686#comment-14355686
 ] 

Prasanth Jayachandran commented on HIVE-9905:
-

We need a way to store the bit vectors per partition in metastore to have a 
more accurate NDV value. HIVE-9689 should get us there.
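A minimal sketch of why per-partition bit vectors aggregate better than per-partition NDV numbers: Flajolet-Martin-style sketches merge by OR-ing, so the merged sketch is exactly the sketch of the union of the partitions, whereas summing per-partition NDV counts double-counts values shared across partitions. This is toy code, not the metastore's actual sketch format; the hash is an illustrative assumption:

```java
import java.util.BitSet;

public class NdvSketch {
    // Flajolet-Martin style: for each value, set the bit at the number of
    // trailing zeros of its hash. (Toy multiplicative hash, for illustration only.)
    static void add(BitSet sketch, long value) {
        long h = value * 0x9E3779B97F4A7C15L;                    // assumed mixing constant
        sketch.set(Long.numberOfTrailingZeros(h | Long.MIN_VALUE)); // force nonzero input
    }

    // Per-partition sketches merge losslessly by OR -- the key property.
    static BitSet merge(BitSet a, BitSet b) {
        BitSet m = (BitSet) a.clone();
        m.or(b);
        return m;
    }

    public static void main(String[] args) {
        BitSet p1 = new BitSet(), p2 = new BitSet();
        for (long v = 0; v < 1000; v++) add(p1, v);
        for (long v = 500; v < 1500; v++) add(p2, v); // overlaps p1 by 500 values
        // The merged sketch equals the sketch of the union, by construction.
        BitSet union = new BitSet();
        for (long v = 0; v < 1500; v++) add(union, v);
        System.out.println(merge(p1, p2).equals(union));
    }
}
```

Storing only a final NDV number per partition destroys this property, which is why persisting the bit vectors themselves gives more accurate aggregated NDV.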

 Investigate ways to improve NDV calculations during stats aggregation 
 [hbase-metastore branch]
 --

 Key: HIVE-9905
 URL: https://issues.apache.org/jira/browse/HIVE-9905
 Project: Hive
  Issue Type: Sub-task
  Components: Metastore
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-9910) LLAP: Update usage of APIs changed by TEZ-2175 and TEZ-2187

2015-03-10 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth resolved HIVE-9910.
--
Resolution: Fixed

Already committed.

 LLAP: Update usage of APIs changed by TEZ-2175 and TEZ-2187
 ---

 Key: HIVE-9910
 URL: https://issues.apache.org/jira/browse/HIVE-9910
 Project: Hive
  Issue Type: Sub-task
Reporter: Siddharth Seth
Assignee: Siddharth Seth
 Fix For: llap

 Attachments: HIVE-9910.1.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9910) LLAP: Update usage of APIs changed by TEZ-2175 and TEZ-2187

2015-03-10 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-9910:
-
Attachment: HIVE-9910.1.patch

Trivial patch.

 LLAP: Update usage of APIs changed by TEZ-2175 and TEZ-2187
 ---

 Key: HIVE-9910
 URL: https://issues.apache.org/jira/browse/HIVE-9910
 Project: Hive
  Issue Type: Sub-task
Reporter: Siddharth Seth
Assignee: Siddharth Seth
 Fix For: llap

 Attachments: HIVE-9910.1.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9903) Update calcite version

2015-03-10 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-9903:
---
Attachment: HIVE-9903.1.patch

Fixed test failures.

 Update calcite version
 --

 Key: HIVE-9903
 URL: https://issues.apache.org/jira/browse/HIVE-9903
 Project: Hive
  Issue Type: Task
  Components: CBO, Logical Optimizer
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-9903.1.patch, HIVE-9903.patch


 Calcite-1.1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9915) Allow specifying file format for managed tables

2015-03-10 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-9915:
-
Attachment: HIVE-9915.1.patch

 Allow specifying file format for managed tables
 ---

 Key: HIVE-9915
 URL: https://issues.apache.org/jira/browse/HIVE-9915
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Attachments: HIVE-9915.1.patch


 We already allow setting a system-wide default format. In some cases, though, it's 
 useful to specify this only for managed tables, or to distinguish 
 external and managed tables via two variables. You might want to set a more 
 efficient (than text) format for managed tables, but leave external tables as text 
 (as they are often log files, etc.).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-6617) Reduce ambiguity in grammar

2015-03-10 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-6617:
--
Attachment: HIVE-6617.26.patch

test backward compatibility again.

 Reduce ambiguity in grammar
 ---

 Key: HIVE-6617
 URL: https://issues.apache.org/jira/browse/HIVE-6617
 Project: Hive
  Issue Type: Task
Reporter: Ashutosh Chauhan
Assignee: Pengcheng Xiong
 Attachments: HIVE-6617.01.patch, HIVE-6617.02.patch, 
 HIVE-6617.03.patch, HIVE-6617.04.patch, HIVE-6617.05.patch, 
 HIVE-6617.06.patch, HIVE-6617.07.patch, HIVE-6617.08.patch, 
 HIVE-6617.09.patch, HIVE-6617.10.patch, HIVE-6617.11.patch, 
 HIVE-6617.12.patch, HIVE-6617.13.patch, HIVE-6617.14.patch, 
 HIVE-6617.15.patch, HIVE-6617.16.patch, HIVE-6617.17.patch, 
 HIVE-6617.18.patch, HIVE-6617.19.patch, HIVE-6617.20.patch, 
 HIVE-6617.21.patch, HIVE-6617.22.patch, HIVE-6617.23.patch, 
 HIVE-6617.24.patch, HIVE-6617.25.patch, HIVE-6617.26.patch, parser.png


 CLEAR LIBRARY CACHE
 As of today, antlr reports 214 warnings. Need to bring down this number, 
 ideally to 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9858) Create cbrt (cube root) UDF

2015-03-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14354453#comment-14354453
 ] 

Hive QA commented on HIVE-9858:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12703570/HIVE-9858.2.patch

{color:green}SUCCESS:{color} +1 7615 tests passed

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2986/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2986/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2986/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12703570 - PreCommit-HIVE-TRUNK-Build

 Create cbrt (cube root) UDF
 ---

 Key: HIVE-9858
 URL: https://issues.apache.org/jira/browse/HIVE-9858
 Project: Hive
  Issue Type: Improvement
  Components: UDF
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
 Attachments: HIVE-9858.1.patch, HIVE-9858.1.patch, HIVE-9858.2.patch


 returns the cube root of a double value
 cbrt(double a) : double
 For example:
 {code}
 select cbrt(87860583272930481.0);
 OK
 444561.0
 {code}
 I noticed that Math.pow(a, 1.0/3.0) and the Hive power UDF return 
 444560.965 for the example above.
 However, Math.cbrt returns 444561.0.
 This is why we should have a cbrt function in Hive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9828) Semantic analyzer does not capture view parent entity for tables referred in view with union all

2015-03-10 Thread Prasad Mujumdar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasad Mujumdar updated HIVE-9828:
--
Attachment: HIVE-9828.1-npf.patch

 Semantic analyzer does not capture view parent entity for tables referred in 
 view with union all 
 -

 Key: HIVE-9828
 URL: https://issues.apache.org/jira/browse/HIVE-9828
 Project: Hive
  Issue Type: Bug
  Components: Parser
Affects Versions: 1.1.0
Reporter: Prasad Mujumdar
 Attachments: HIVE-9828.1-npf.patch


 The Hive compiler adds tables used in a view definition to the input entity list, 
 with the view as the parent entity for each table.
 In the case of a view with a union all query, this is not being done properly. For 
 example,
 {noformat}
 create view view1 as select t.id from (select tab1.id from db.tab1 union all 
 select tab2.id from db.tab2 ) t;
 {noformat}
 This query will capture tab1 and tab2 as read entities without view1 as the parent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)

2015-03-10 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-3454:
---
Attachment: (was: HIVE-3454.4.patch)

 Problem with CAST(BIGINT as TIMESTAMP)
 --

 Key: HIVE-3454
 URL: https://issues.apache.org/jira/browse/HIVE-3454
 Project: Hive
  Issue Type: Bug
  Components: Types, UDF
Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 
 0.13.1
Reporter: Ryan Harris
Assignee: Aihua Xu
  Labels: newbie, newdev, patch
 Attachments: HIVE-3454.1.patch.txt, HIVE-3454.2.patch, 
 HIVE-3454.3.patch, HIVE-3454.patch


 Ran into an issue while working with timestamp conversion.
 CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current 
 time from the BIGINT returned by unix_timestamp()
 Instead, however, a 1970-01-16 timestamp is returned.
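The 1970-01-16 result is consistent with the epoch-seconds value being interpreted as milliseconds: a `unix_timestamp()`-scale number of milliseconds is only about two weeks after the epoch. A quick sanity check of the arithmetic (the sample value is an assumption; any 2015-era epoch-seconds value behaves the same way):

```java
public class EpochUnits {
    public static void main(String[] args) {
        long epochSeconds = 1426000000L; // roughly March 2015, expressed in seconds
        long msPerDay = 86_400_000L;
        // Misread as milliseconds: only ~16 days after 1970-01-01,
        // matching the mid-January-1970 timestamp reported above.
        System.out.println("as millis:  day " + epochSeconds / msPerDay + " of 1970");
        // Read correctly as seconds: thousands of days after the epoch.
        System.out.println("as seconds: day " + (epochSeconds * 1000) / msPerDay);
    }
}
```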



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-6617) Reduce ambiguity in grammar

2015-03-10 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6617:
---
Affects Version/s: 1.1.0
   0.14.0
   1.0.0

 Reduce ambiguity in grammar
 ---

 Key: HIVE-6617
 URL: https://issues.apache.org/jira/browse/HIVE-6617
 Project: Hive
  Issue Type: Task
  Components: Parser
Affects Versions: 0.14.0, 1.0.0, 1.1.0
Reporter: Ashutosh Chauhan
Assignee: Pengcheng Xiong
 Fix For: 1.2.0

 Attachments: HIVE-6617.01.patch, HIVE-6617.02.patch, 
 HIVE-6617.03.patch, HIVE-6617.04.patch, HIVE-6617.05.patch, 
 HIVE-6617.06.patch, HIVE-6617.07.patch, HIVE-6617.08.patch, 
 HIVE-6617.09.patch, HIVE-6617.10.patch, HIVE-6617.11.patch, 
 HIVE-6617.12.patch, HIVE-6617.13.patch, HIVE-6617.14.patch, 
 HIVE-6617.15.patch, HIVE-6617.16.patch, HIVE-6617.17.patch, 
 HIVE-6617.18.patch, HIVE-6617.19.patch, HIVE-6617.20.patch, 
 HIVE-6617.21.patch, HIVE-6617.22.patch, HIVE-6617.23.patch, 
 HIVE-6617.24.patch, HIVE-6617.25.patch, HIVE-6617.26.patch, parser.png


 CLEAR LIBRARY CACHE
 As of today, antlr reports 214 warnings. Need to bring down this number, 
 ideally to 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-6617) Reduce ambiguity in grammar

2015-03-10 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan resolved HIVE-6617.

Resolution: Fixed

 Reduce ambiguity in grammar
 ---

 Key: HIVE-6617
 URL: https://issues.apache.org/jira/browse/HIVE-6617
 Project: Hive
  Issue Type: Task
  Components: Parser
Affects Versions: 0.14.0, 1.0.0, 1.1.0
Reporter: Ashutosh Chauhan
Assignee: Pengcheng Xiong
 Fix For: 1.2.0

 Attachments: HIVE-6617.01.patch, HIVE-6617.02.patch, 
 HIVE-6617.03.patch, HIVE-6617.04.patch, HIVE-6617.05.patch, 
 HIVE-6617.06.patch, HIVE-6617.07.patch, HIVE-6617.08.patch, 
 HIVE-6617.09.patch, HIVE-6617.10.patch, HIVE-6617.11.patch, 
 HIVE-6617.12.patch, HIVE-6617.13.patch, HIVE-6617.14.patch, 
 HIVE-6617.15.patch, HIVE-6617.16.patch, HIVE-6617.17.patch, 
 HIVE-6617.18.patch, HIVE-6617.19.patch, HIVE-6617.20.patch, 
 HIVE-6617.21.patch, HIVE-6617.22.patch, HIVE-6617.23.patch, 
 HIVE-6617.24.patch, HIVE-6617.25.patch, HIVE-6617.26.patch, parser.png


 CLEAR LIBRARY CACHE
 As of today, antlr reports 214 warnings. Need to bring down this number, 
 ideally to 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HIVE-6617) Reduce ambiguity in grammar

2015-03-10 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan reopened HIVE-6617:


 Reduce ambiguity in grammar
 ---

 Key: HIVE-6617
 URL: https://issues.apache.org/jira/browse/HIVE-6617
 Project: Hive
  Issue Type: Task
  Components: Parser
Affects Versions: 0.14.0, 1.0.0, 1.1.0
Reporter: Ashutosh Chauhan
Assignee: Pengcheng Xiong
 Fix For: 1.2.0

 Attachments: HIVE-6617.01.patch, HIVE-6617.02.patch, 
 HIVE-6617.03.patch, HIVE-6617.04.patch, HIVE-6617.05.patch, 
 HIVE-6617.06.patch, HIVE-6617.07.patch, HIVE-6617.08.patch, 
 HIVE-6617.09.patch, HIVE-6617.10.patch, HIVE-6617.11.patch, 
 HIVE-6617.12.patch, HIVE-6617.13.patch, HIVE-6617.14.patch, 
 HIVE-6617.15.patch, HIVE-6617.16.patch, HIVE-6617.17.patch, 
 HIVE-6617.18.patch, HIVE-6617.19.patch, HIVE-6617.20.patch, 
 HIVE-6617.21.patch, HIVE-6617.22.patch, HIVE-6617.23.patch, 
 HIVE-6617.24.patch, HIVE-6617.25.patch, HIVE-6617.26.patch, parser.png


 CLEAR LIBRARY CACHE
 As of today, antlr reports 214 warnings. Need to bring down this number, 
 ideally to 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HIVE-6617) Reduce ambiguity in grammar

2015-03-10 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong reopened HIVE-6617:
---

 Reduce ambiguity in grammar
 ---

 Key: HIVE-6617
 URL: https://issues.apache.org/jira/browse/HIVE-6617
 Project: Hive
  Issue Type: Task
  Components: Parser
Affects Versions: 0.14.0, 1.0.0, 1.1.0
Reporter: Ashutosh Chauhan
Assignee: Pengcheng Xiong
 Fix For: 1.2.0

 Attachments: HIVE-6617.01.patch, HIVE-6617.02.patch, 
 HIVE-6617.03.patch, HIVE-6617.04.patch, HIVE-6617.05.patch, 
 HIVE-6617.06.patch, HIVE-6617.07.patch, HIVE-6617.08.patch, 
 HIVE-6617.09.patch, HIVE-6617.10.patch, HIVE-6617.11.patch, 
 HIVE-6617.12.patch, HIVE-6617.13.patch, HIVE-6617.14.patch, 
 HIVE-6617.15.patch, HIVE-6617.16.patch, HIVE-6617.17.patch, 
 HIVE-6617.18.patch, HIVE-6617.19.patch, HIVE-6617.20.patch, 
 HIVE-6617.21.patch, HIVE-6617.22.patch, HIVE-6617.23.patch, 
 HIVE-6617.24.patch, HIVE-6617.25.patch, HIVE-6617.26.patch, parser.png


 CLEAR LIBRARY CACHE
 As of today, antlr reports 214 warnings. Need to bring down this number, 
 ideally to 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-6617) Reduce ambiguity in grammar

2015-03-10 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355822#comment-14355822
 ] 

Pengcheng Xiong commented on HIVE-6617:
---

The 74 reserved keywords are:
ALL, ALTER, ARRAY, AS, AUTHORIZATION, BETWEEN, BIGINT, BINARY, BOOLEAN, BOTH, 
BY, CREATE, CUBE, CURRENT_DATE, CURRENT_TIMESTAMP, CURSOR, DATE, DECIMAL, 
DELETE, DESCRIBE, DOUBLE, DROP, EXISTS, EXTERNAL, FALSE, FETCH, FLOAT, FOR, 
FULL, GRANT, GROUP, GROUPING, IMPORT, IN, INNER, INSERT, INT, INTERSECT, INTO, 
IS, LATERAL, LEFT, LIKE, LOCAL, NONE, NULL, OF, ORDER, OUT, OUTER, PARTITION, 
PERCENT, PROCEDURE, RANGE, READS, REVOKE, RIGHT, ROLLUP, ROW, ROWS, SET, 
SMALLINT, TABLE, TIMESTAMP, TO, TRIGGER, TRUE, TRUNCATE, UNION, UPDATE, USER, 
USING, VALUES, WITH

 Reduce ambiguity in grammar
 ---

 Key: HIVE-6617
 URL: https://issues.apache.org/jira/browse/HIVE-6617
 Project: Hive
  Issue Type: Task
  Components: Parser
Affects Versions: 0.14.0, 1.0.0, 1.1.0
Reporter: Ashutosh Chauhan
Assignee: Pengcheng Xiong
 Fix For: 1.2.0

 Attachments: HIVE-6617.01.patch, HIVE-6617.02.patch, 
 HIVE-6617.03.patch, HIVE-6617.04.patch, HIVE-6617.05.patch, 
 HIVE-6617.06.patch, HIVE-6617.07.patch, HIVE-6617.08.patch, 
 HIVE-6617.09.patch, HIVE-6617.10.patch, HIVE-6617.11.patch, 
 HIVE-6617.12.patch, HIVE-6617.13.patch, HIVE-6617.14.patch, 
 HIVE-6617.15.patch, HIVE-6617.16.patch, HIVE-6617.17.patch, 
 HIVE-6617.18.patch, HIVE-6617.19.patch, HIVE-6617.20.patch, 
 HIVE-6617.21.patch, HIVE-6617.22.patch, HIVE-6617.23.patch, 
 HIVE-6617.24.patch, HIVE-6617.25.patch, HIVE-6617.26.patch, parser.png


 CLEAR LIBRARY CACHE
 As of today, antlr reports 214 warnings. Need to bring down this number, 
 ideally to 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9918) Spark branch build is failing due to unknown url

2015-03-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-9918:
--
Component/s: spark-branch
 Spark

 Spark branch build is failing due to unknown url
 

 Key: HIVE-9918
 URL: https://issues.apache.org/jira/browse/HIVE-9918
 Project: Hive
  Issue Type: Bug
  Components: Spark, spark-branch
Reporter: Sergio Peña
Assignee: Sergio Peña
Priority: Blocker
 Attachments: HIVE-9918.1.patch


 Spark branch is failing due to a URL that does not exist anymore. This 
 URL contains all the Spark jars used to build.
 The Spark jar versions are not in the official Maven repository.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9918) Spark branch build is failing due to unknown url

2015-03-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-9918:
--
Attachment: HIVE-9918.1.patch

 Spark branch build is failing due to unknown url
 

 Key: HIVE-9918
 URL: https://issues.apache.org/jira/browse/HIVE-9918
 Project: Hive
  Issue Type: Bug
Reporter: Sergio Peña
Assignee: Sergio Peña
 Attachments: HIVE-9918.1.patch


 Spark branch is failing due to a URL that does not exist anymore. This 
 URL contains all the Spark jars used to build.
 The Spark jar versions are not in the official Maven repository.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9918) Spark branch build is failing due to unknown url

2015-03-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-9918:
--
Priority: Blocker  (was: Major)

 Spark branch build is failing due to unknown url
 

 Key: HIVE-9918
 URL: https://issues.apache.org/jira/browse/HIVE-9918
 Project: Hive
  Issue Type: Bug
Reporter: Sergio Peña
Assignee: Sergio Peña
Priority: Blocker
 Attachments: HIVE-9918.1.patch


 Spark branch is failing due to a URL that does not exist anymore. This 
 URL contains all the Spark jars used to build.
 The Spark jar versions are not in the official Maven repository.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9918) Spark branch build is failing due to unknown url

2015-03-10 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355839#comment-14355839
 ] 

Xuefu Zhang commented on HIVE-9918:
---

+1 pending on test.

 Spark branch build is failing due to unknown url
 

 Key: HIVE-9918
 URL: https://issues.apache.org/jira/browse/HIVE-9918
 Project: Hive
  Issue Type: Bug
  Components: Spark, spark-branch
Reporter: Sergio Peña
Assignee: Sergio Peña
Priority: Blocker
 Attachments: HIVE-9918.1.patch


 Spark branch is failing due to a URL that does not exist anymore. This 
 URL contains all the Spark jars used to build.
 The Spark jar versions are not in the official Maven repository.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9813) Hive JDBC - DatabaseMetaData.getColumns method cannot find classes added with add jar command

2015-03-10 Thread Yongzhi Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355852#comment-14355852
 ] 

Yongzhi Chen commented on HIVE-9813:


The failure should not be related to the patch. 

 Hive JDBC - DatabaseMetaData.getColumns method cannot find classes added with 
 add jar command
 ---

 Key: HIVE-9813
 URL: https://issues.apache.org/jira/browse/HIVE-9813
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: Yongzhi Chen
Assignee: Yongzhi Chen
 Attachments: HIVE-9813.1.patch


 Execute following JDBC client program:
 {code}
 import java.sql.*;
 public class TestAddJar {
     private static Connection makeConnection(String connString, String classPath) throws ClassNotFoundException, SQLException
     {
         System.out.println("Current Connection info: " + connString);
         Class.forName(classPath);
         System.out.println("Current driver info: " + classPath);
         return DriverManager.getConnection(connString);
     }
     public static void main(String[] args)
     {
         if (2 != args.length)
         {
             System.out.println("Two arguments needed: connection string, path to jar to be added (include jar name)");
             System.out.println("Example: java -jar TestApp.jar jdbc:hive2://192.168.111.111 /tmp/json-serde-1.3-jar-with-dependencies.jar");
             return;
         }
         Connection conn;
         try
         {
             conn = makeConnection(args[0], "org.apache.hive.jdbc.HiveDriver");
             System.out.println("---");
             System.out.println("DONE");
             System.out.println("---");
             System.out.println("Execute query: add jar " + args[1] + ";");
             Statement stmt = conn.createStatement();
             int c = stmt.executeUpdate("add jar " + args[1]);
             System.out.println("Returned value is: [" + c + "]\n");
             System.out.println("---");
             final String createTableQry = "Create table if not exists json_test(id int, content string) " +
                     "row format serde 'org.openx.data.jsonserde.JsonSerDe'";
             System.out.println("Execute query: " + createTableQry + ";");
             stmt.execute(createTableQry);
             System.out.println("---");
             System.out.println("getColumn() Call---\n");
             DatabaseMetaData md = conn.getMetaData();
             System.out.println("Test get all column in a schema:");
             ResultSet rs = md.getColumns("Hive", "default", "json_test", null);
             while (rs.next()) {
                 System.out.println(rs.getString(1));
             }
             conn.close();
         }
         catch (ClassNotFoundException e)
         {
             e.printStackTrace();
         }
         catch (SQLException e)
         {
             e.printStackTrace();
         }
     }
 }
 {code}
 Get an exception, and from the metastore log:
 7:41:30.316 PM  ERROR  hive.log
 error in initSerDe: java.lang.ClassNotFoundException Class 
 org.openx.data.jsonserde.JsonSerDe not found
 java.lang.ClassNotFoundException: Class org.openx.data.jsonserde.JsonSerDe 
 not found
 at 
 org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1803)
 at 
 org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:183)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_fields(HiveMetaStore.java:2487)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_schema(HiveMetaStore.java:2542)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
 at com.sun.proxy.$Proxy5.get_schema(Unknown Source)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_schema.getResult(ThriftHiveMetastore.java:6425)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_schema.getResult(ThriftHiveMetastore.java:6409)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
 at 
 org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
 at 
 

[jira] [Updated] (HIVE-9906) Add timeout mechanism in RawStoreProxy

2015-03-10 Thread Dong Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Chen updated HIVE-9906:

Attachment: HIVE-9906.patch

Uploaded a patch.
If the Deadline timer is not started or registered in RetryingHMSHandler, we 
will start it here. Otherwise, keep the original logic.

 Add timeout mechanism in RawStoreProxy
 --

 Key: HIVE-9906
 URL: https://issues.apache.org/jira/browse/HIVE-9906
 Project: Hive
  Issue Type: Sub-task
  Components: Metastore
Reporter: Dong Chen
Assignee: Dong Chen
 Attachments: HIVE-9906.patch


 In HIVE-9253, we added a timeout mechanism in HMS. We start the timer in 
 RetryingHMSHandler.invoke, and then -> RawStoreProxy.invoke -> 
 ObjectStore.xxxMethod. The timer is stopped after the methods complete.
 It was found that the methods of ObjectStore might be invoked directly in 
 o.a.h.h.ql.txn.compactor.CompactorThread, but not through HMSHandler. This 
 will cause the timeout checking to throw an exception. We need to fix this bug here.
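A minimal sketch of the fix's shape, start the deadline only when no caller higher up the stack has already registered one, so direct callers like the compactor thread can own the timer themselves. Method names and the per-thread state are illustrative assumptions, not the metastore's actual API:

```java
public class DeadlineSketch {
    // Per-thread deadline, mirroring a timer started in RetryingHMSHandler.invoke.
    private static final ThreadLocal<Long> DEADLINE = new ThreadLocal<>();

    // Returns true if this caller started the timer (and so must clear it).
    static boolean startIfAbsent(long timeoutMs) {
        if (DEADLINE.get() != null) return false; // already started upstream
        DEADLINE.set(System.currentTimeMillis() + timeoutMs);
        return true;
    }

    static void check() {
        Long d = DEADLINE.get();
        if (d == null) throw new IllegalStateException("timer never started");
        if (System.currentTimeMillis() > d) throw new RuntimeException("operation timed out");
    }

    static void clear() { DEADLINE.remove(); }

    public static void main(String[] args) {
        // Direct caller (e.g. the CompactorThread path): starts and clears the timer itself.
        boolean started = startIfAbsent(10_000);
        check(); // within the deadline: no exception
        if (started) clear();
        System.out.println("ok");
    }
}
```

The `startIfAbsent` return value keeps ownership clear: whoever started the timer clears it, so a nested call through RawStoreProxy neither restarts nor prematurely clears a deadline registered by the handler above it.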



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-6617) Reduce ambiguity in grammar

2015-03-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14354555#comment-14354555
 ] 

Hive QA commented on HIVE-6617:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12703608/HIVE-6617.26.patch

{color:green}SUCCESS:{color} +1 7613 tests passed

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2987/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2987/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2987/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12703608 - PreCommit-HIVE-TRUNK-Build

 Reduce ambiguity in grammar
 ---

 Key: HIVE-6617
 URL: https://issues.apache.org/jira/browse/HIVE-6617
 Project: Hive
  Issue Type: Task
Reporter: Ashutosh Chauhan
Assignee: Pengcheng Xiong
 Attachments: HIVE-6617.01.patch, HIVE-6617.02.patch, 
 HIVE-6617.03.patch, HIVE-6617.04.patch, HIVE-6617.05.patch, 
 HIVE-6617.06.patch, HIVE-6617.07.patch, HIVE-6617.08.patch, 
 HIVE-6617.09.patch, HIVE-6617.10.patch, HIVE-6617.11.patch, 
 HIVE-6617.12.patch, HIVE-6617.13.patch, HIVE-6617.14.patch, 
 HIVE-6617.15.patch, HIVE-6617.16.patch, HIVE-6617.17.patch, 
 HIVE-6617.18.patch, HIVE-6617.19.patch, HIVE-6617.20.patch, 
 HIVE-6617.21.patch, HIVE-6617.22.patch, HIVE-6617.23.patch, 
 HIVE-6617.24.patch, HIVE-6617.25.patch, HIVE-6617.26.patch, parser.png


 CLEAR LIBRARY CACHE
 As of today, antlr reports 214 warnings. Need to bring down this number, 
 ideally to 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9896) \N un-recognized in AVRO format Hive tables

2015-03-10 Thread Madhan Sundararajan Devaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Madhan Sundararajan Devaki updated HIVE-9896:
-
Description: 
We Sqooped (1.4.5) data from many RDBMS into HDFS in text format with options 
--null-non-string '\N' --null-string '\N'.
When we load these into Hive tables in text format the \N is properly 
recognized as NULL and we are able to use SQL clauses such as IS NULL and IS 
NOT NULL against columns.
However, when we convert the text files into AVRO (1.7.6) with SNAPPY 
compression and try to query using the above SQL clauses, the query does not 
return results as expected.
Further, we have to use column_name = '\N' or column_name <> '\N' as a 
workaround.

  was:
We Sqooped (1.4.5) data from many RDBMS into HDFS in text format with options 
--null-non-string '\N' --null-string '\\N'.
When we load these into Hive tables in text format the \N is properly 
recognized as NULL and we are able to use SQL clauses such as IS NULL and IS 
NOT NULL against columns.
However, when we convert the text files into AVRO (1.7.6) with SNAPPY 
compression and try to query using the above SQL clauses, the query does not 
return results as expected.
Further, we have to use column_name = '\N' or column_name <> '\N' as a 
workaround.


 \N un-recognized in AVRO format Hive tables
 ---

 Key: HIVE-9896
 URL: https://issues.apache.org/jira/browse/HIVE-9896
 Project: Hive
  Issue Type: Bug
  Components: Database/Schema, File Formats, Hive
Affects Versions: 0.13.0
 Environment: CDH5.2.1, RHEL6.5, Java 7
Reporter: Madhan Sundararajan Devaki

 We Sqooped (1.4.5) data from many RDBMS into HDFS in text format with options 
 --null-non-string '\N' --null-string '\N'.
 When we load these into Hive tables in text format the \N is properly 
 recognized as NULL and we are able to use SQL clauses such as IS NULL and IS 
 NOT NULL against columns.
 However, when we convert the text files into AVRO (1.7.6) with SNAPPY 
 compression and try to query using the above SQL clauses, the query does not 
 return results as expected.
 Further, we have to use column_name = '\N' or column_name <> '\N' as a 
 workaround.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9897) Issue a warning when using an existing table/view name as an alias in a with statement.

2015-03-10 Thread Raunak Jhawar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14354546#comment-14354546
 ] 

Raunak Jhawar commented on HIVE-9897:
-

Preferring the CTE should be the right way. This is how this issue is handled in 
popular RDBMSs such as MS SQL Server. Thoughts, please.

 Issue a warning when using an existing table/view name as an alias in a with 
 statement. 
 

 Key: HIVE-9897
 URL: https://issues.apache.org/jira/browse/HIVE-9897
 Project: Hive
  Issue Type: Improvement
  Components: Hive
Affects Versions: 0.13.1
 Environment: cdh5.3.0
Reporter: Mario Konschake
Priority: Minor

 Consider the following query:
 {code:sql}
 WITH
 table_a AS (
 SELECT
 'johndoe' AS name
 FROM
 my_table
 )
 SELECT
 DISTINCT name
 FROM
 table_a;
 {code}
 Observation: 
 If a table or a view with name `table_a` exists it is used instead of the one 
 defined in the WITH statement.
 Expectation:
 As the expectation is ambiguous (using the alias in the WITH statement vs. 
 using the existing table), issuing a warning when using an existing name in a 
 WITH statement is recommended.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9678) create timediff UDF

2015-03-10 Thread Raunak Jhawar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14354573#comment-14354573
 ] 

Raunak Jhawar commented on HIVE-9678:
-

Workaround:

Convert both date-time values to Unix timestamps, then convert the difference
of the two timestamps back into a standard date-time object and extract the
time part from it.
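
A minimal Python sketch of that workaround (the function name and the
millisecond formatting are mine; assumes the first argument is not earlier than
the second):

{code}
```python
from datetime import datetime

def timediff(end: str, start: str) -> str:
    """Sketch of a MySQL-style TIMEDIFF via timestamp subtraction."""
    def parse(s: str) -> datetime:
        # Accept timestamps with or without fractional seconds
        for fmt in ("%Y-%m-%d %H:%M:%S.%f", "%Y-%m-%d %H:%M:%S"):
            try:
                return datetime.strptime(s, fmt)
            except ValueError:
                continue
        raise ValueError(f"unparseable timestamp: {s}")

    delta = parse(end) - parse(start)
    total = delta.days * 86400 + delta.seconds   # whole seconds in the difference
    h, rem = divmod(total, 3600)
    m, s = divmod(rem, 60)
    out = f"{h:02d}:{m:02d}:{s:02d}"
    if delta.microseconds:                       # keep milliseconds if present
        out += f".{delta.microseconds // 1000:03d}"
    return out

print(timediff('2015-02-12 05:09:07.140', '2015-02-12 01:18:20'))  # 03:50:47.140
```
{code}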

 create timediff UDF
 ---

 Key: HIVE-9678
 URL: https://issues.apache.org/jira/browse/HIVE-9678
 Project: Hive
  Issue Type: Improvement
  Components: UDF
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
 Attachments: HIVE-9678.1.patch, HIVE-9678.2.patch, HIVE-9678.3.patch, 
 HIVE-9678.4.patch, HIVE-9678.4.patch


 MySQL has a very useful timediff function; we should have it in Hive:
 {code}
 select timediff('2015-02-12 05:09:07.140', '2015-02-12 01:18:20');
 OK
 03:50:47.140
 {code}





[jira] [Commented] (HIVE-9893) HiveServer2 java.lang.OutOfMemoryError: Java heap space

2015-03-10 Thread Rupert Bailey (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355975#comment-14355975
 ] 

Rupert Bailey commented on HIVE-9893:
-

Could this be a duplicate of:
https://issues.apache.org/jira/browse/HIVE-7353
?

 HiveServer2 java.lang.OutOfMemoryError: Java heap space
 ---

 Key: HIVE-9893
 URL: https://issues.apache.org/jira/browse/HIVE-9893
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.14.0
 Environment: Vmware Pseudo cluster 
 HDP 2.2 Ambari vanilla install 
 Centos 6.5
Reporter: Rupert Bailey
  Labels: Hadoop, Hive, Tez, Thrift

 Everything runs but dies after a few days with the Java heap space error, even
 with no activity on the cluster. It failed most recently after 5 days.
 I tried to fix it after noticing the SLF4J library duplication, wondering if
 conflicting libraries were causing the error, but it still fails.
 Nagios reports HiveServer2 Flapping
 HiveServer2.log output:
 22488-2015-03-07 22:19:08,359 INFO  [HiveServer2-Handler-Pool: Thread-13139]: 
 thrift.ThriftCLIService (ThriftCLIService.java:OpenSession(232)) - Client 
 protocol version: HIVE_CLI_SERVICE_PROTOCOL_V6
 22489-2015-03-07 22:19:19,056 ERROR [HiveServer2-Handler-Pool: Thread-13139]: 
 thrift.ProcessFunction (ProcessFunction.java:process(41)) - Internal error 
 processing OpenSession
 22490:java.lang.OutOfMemoryError: Java heap space
 22491-2015-03-07 22:19:22,515 INFO  [Thread-6]: server.HiveServer2 
 (HiveServer2.java:stop(299)) - Shutting down HiveServer2
 22492-2015-03-07 22:19:22,516 INFO  [Thread-6]: thrift.ThriftCLIService 
 (ThriftCLIService.java:stop(137)) - Thrift server has stopped
 22493-2015-03-07 22:19:22,516 INFO  [Thread-6]: service.AbstractService 
 (AbstractService.java:stop(125)) - Service:ThriftBinaryCLIService is stopped.
 22494-2015-03-07 22:19:27,078 INFO  [Thread-6]: service.AbstractService 
 (AbstractService.java:stop(125)) - Service:OperationManager is stopped.
 22495-2015-03-07 22:19:27,078 INFO  [Thread-6]: service.AbstractService 
 (AbstractService.java:stop(125)) - Service:SessionManager is stopped.
 22496:2015-03-07 22:19:36,096 WARN  [Thread-0]: util.ShutdownHookManager 
 (ShutdownHookManager.java:run(56)) - ShutdownHook 'ClientFinalizer' failed, 
 java.lang.OutOfMemoryError: Java heap space
 22497:java.lang.OutOfMemoryError: Java heap space





[jira] [Commented] (HIVE-9918) Spark branch build is failing due to unknown url

2015-03-10 Thread Chao (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355986#comment-14355986
 ] 

Chao commented on HIVE-9918:


Should this apply to the Spark branch? Trunk is still using 1.2.0.

 Spark branch build is failing due to unknown url
 

 Key: HIVE-9918
 URL: https://issues.apache.org/jira/browse/HIVE-9918
 Project: Hive
  Issue Type: Bug
  Components: Spark, spark-branch
Reporter: Sergio Peña
Assignee: Sergio Peña
Priority: Blocker
 Attachments: HIVE-9918.1.patch


 The Spark branch is failing due to a URL that does not exist anymore. This
 URL contains all the Spark jars used to build.
 The Spark jar versions are not in the official Maven repository.





[jira] [Updated] (HIVE-9918) Spark branch build is failing due to unknown url

2015-03-10 Thread Chao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao updated HIVE-9918:
---
Attachment: HIVE-9918.1-spark.patch

Reattaching the same patch for spark branch.

 Spark branch build is failing due to unknown url
 

 Key: HIVE-9918
 URL: https://issues.apache.org/jira/browse/HIVE-9918
 Project: Hive
  Issue Type: Bug
  Components: Spark, spark-branch
Reporter: Sergio Peña
Assignee: Sergio Peña
Priority: Blocker
 Attachments: HIVE-9918.1-spark.patch, HIVE-9918.1.patch


 The Spark branch is failing due to a URL that does not exist anymore. This
 URL contains all the Spark jars used to build.
 The Spark jar versions are not in the official Maven repository.


