[jira] [Commented] (HIVE-7704) Create tez task for fast file merging

2014-08-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14098226#comment-14098226
 ] 

Hive QA commented on HIVE-7704:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12662009/HIVE-7704.4.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/329/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/329/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-329/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Tests exited with: IllegalArgumentException: No propertifies found in file: 
mainProperties for property: spark.query.files
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12662009

 Create tez task for fast file merging
 -

 Key: HIVE-7704
 URL: https://issues.apache.org/jira/browse/HIVE-7704
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Prasanth J
Assignee: Prasanth J
 Attachments: HIVE-7704.1.patch, HIVE-7704.2.patch, HIVE-7704.3.patch, 
 HIVE-7704.4.patch


 Currently tez falls back to an MR task for the merge file task. It would be 
 beneficial to convert merge file tasks to tez tasks to take advantage of the 
 performance gains from tez. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7738) tez select sum(decimal) from union all of decimal and null throws NPE

2014-08-15 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-7738:
--

Description: 
If this query is run using the tez engine, hive throws an NPE:
{code}
select sum(a) from (
  select cast(1.1 as decimal) a from dual
  union all
  select cast(null as decimal) a from dual
) t;
{code}
hive> select sum(a) from (
    > select cast(1.1 as decimal) a from dual
    > union all
    > select cast(null as decimal) a from dual
    > ) t;
Query ID = apivovarov_20140814200909_438385b2-4147-47bc-98a0-a01567bbb5c5
Total jobs = 1
Launching Job 1 out of 1


Status: Running (application id: application_1407388228332_5616)

Map 1: -/-   Map 4: -/-   Reducer 3: 0/1
Map 1: 0/1   Map 4: 0/1   Reducer 3: 0/1
Map 1: 0/1   Map 4: 0/1   Reducer 3: 0/1
Map 1: 0/1   Map 4: 1/1   Reducer 3: 0/1
Map 1: 0/1   Map 4: 1/1   Reducer 3: 0/1
Map 1: 0/1   Map 4: 1/1   Reducer 3: 0/1
Map 1: 0/1   Map 4: 1/1   Reducer 3: 0/1
Map 1: 0/1   Map 4: 1/1   Reducer 3: 0/1
Status: Failed
Vertex failed, vertexName=Map 1, vertexId=vertex_1407388228332_5616_1_02, 
diagnostics=[Task failed, taskId=task_1407388228332_5616_1_02_00, 
diagnostics=[AttemptID:attempt_1407388228332_5616_1_02_00_0 Info:Error: 
java.lang.RuntimeException: java.lang.RuntimeException: Map operator 
initialization failed
at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:188)
at 
org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:307)
at 
org.apache.hadoop.mapred.YarnTezDagChild$5.run(YarnTezDagChild.java:564)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
at 
org.apache.hadoop.mapred.YarnTezDagChild.main(YarnTezDagChild.java:553)
Caused by: java.lang.RuntimeException: Map operator initialization failed
at 
org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:145)
at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:164)
... 6 more
Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableConstantHiveDecimalObjectInspector.precision(WritableConstantHiveDecimalObjectInspector.java:61)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFSum$GenericUDAFSumHiveDecimal.init(GenericUDAFSum.java:106)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.initializeOp(GroupByOperator.java:362)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:460)
at 
org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:416)
at 
org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:67)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:460)
at 
org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:416)
at 
org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:67)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:460)
at 
org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:416)
at 
org.apache.hadoop.hive.ql.exec.TableScanOperator.initializeOp(TableScanOperator.java:189)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
at 
org.apache.hadoop.hive.ql.exec.MapOperator.initializeOp(MapOperator.java:425)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
at 
org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:121)
... 7 more

Container released by application, 
AttemptID:attempt_1407388228332_5616_1_02_00_1 Info:Error: 
java.lang.RuntimeException: java.lang.RuntimeException: Map operator 
initialization failed
at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:188)
at 
org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:307)
at 
org.apache.hadoop.mapred.YarnTezDagChild$5.run(YarnTezDagChild.java:564)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
at 
org.apache.hadoop.mapred.YarnTezDagChild.main(YarnTezDagChild.java:553)
Caused by: 
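The stack trace above bottoms out in WritableConstantHiveDecimalObjectInspector.precision(), which points at a constant inspector holding a null value for the cast(null as decimal) branch of the UNION ALL. Below is a minimal, self-contained Java sketch of that failure mode and a null-safe guard; the class and method names are illustrative stand-ins, not Hive's actual code:

```java
import java.math.BigDecimal;

// Illustrative stand-in for a constant-decimal object inspector whose stored
// value is null for the `cast(null as decimal)` branch of the UNION ALL.
class ConstantDecimalInspector {
    private final BigDecimal value; // null for the NULL constant

    ConstantDecimalInspector(BigDecimal value) {
        this.value = value;
    }

    // Unguarded, as the NPE in the trace suggests: dereferences a null value.
    int precisionUnsafe() {
        return value.precision();
    }

    // Null-safe variant: fall back to a caller-supplied default precision.
    int precisionSafe(int defaultPrecision) {
        return value == null ? defaultPrecision : value.precision();
    }

    public static void main(String[] args) {
        ConstantDecimalInspector nullConst = new ConstantDecimalInspector(null);
        try {
            nullConst.precisionUnsafe(); // throws NullPointerException
        } catch (NullPointerException e) {
            System.out.println("NPE, as in the stack trace above");
        }
        System.out.println(nullConst.precisionSafe(38)); // prints 38
    }
}
```

A fix along these lines would make the null branch contribute a default precision/scale instead of crashing vertex initialization.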

[jira] [Commented] (HIVE-7737) Hive logs full exception for table not found

2014-08-15 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14098233#comment-14098233
 ] 

Ashutosh Chauhan commented on HIVE-7737:


+1

 Hive logs full exception for table not found
 

 Key: HIVE-7737
 URL: https://issues.apache.org/jira/browse/HIVE-7737
 Project: Hive
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
Priority: Trivial
 Attachments: HIVE-7737.patch


 Table not found is generally a user error; logging the full call stack is 
 annoying and unnecessary. 





[jira] [Updated] (HIVE-7735) Implement Char, Varchar in ParquetSerDe

2014-08-15 Thread Mohit Sabharwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohit Sabharwal updated HIVE-7735:
--

Attachment: HIVE-7735.1.patch

 Implement Char, Varchar in ParquetSerDe
 ---

 Key: HIVE-7735
 URL: https://issues.apache.org/jira/browse/HIVE-7735
 Project: Hive
  Issue Type: Sub-task
  Components: Serializers/Deserializers
Reporter: Mohit Sabharwal
Assignee: Mohit Sabharwal
  Labels: Parquet
 Attachments: HIVE-7735.1.patch, HIVE-7735.patch


 This JIRA is to implement CHAR and VARCHAR support in Parquet SerDe.
 Both are represented in Parquet as PrimitiveType binary and OriginalType UTF8.
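For reference, the representation described above corresponds to a Parquet schema entry along these lines (a sketch; `hive_schema` and the column name are illustrative):

```
message hive_schema {
  optional binary product_code (UTF8);
}
```

The column is physically a `binary` primitive carrying the `UTF8` original-type annotation that marks it as a string; CHAR/VARCHAR length semantics would be enforced by the SerDe, not by Parquet itself.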





[jira] [Commented] (HIVE-7735) Implement Char, Varchar in ParquetSerDe

2014-08-15 Thread Mohit Sabharwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14098238#comment-14098238
 ] 

Mohit Sabharwal commented on HIVE-7735:
---

Attaching patch after rebase.

 Implement Char, Varchar in ParquetSerDe
 ---

 Key: HIVE-7735
 URL: https://issues.apache.org/jira/browse/HIVE-7735
 Project: Hive
  Issue Type: Sub-task
  Components: Serializers/Deserializers
Reporter: Mohit Sabharwal
Assignee: Mohit Sabharwal
  Labels: Parquet
 Attachments: HIVE-7735.1.patch, HIVE-7735.patch


 This JIRA is to implement CHAR and VARCHAR support in Parquet SerDe.
 Both are represented in Parquet as PrimitiveType binary and OriginalType UTF8.





[jira] [Updated] (HIVE-7169) HiveServer2 in Http Mode should have a configurable IdleMaxTime timeout

2014-08-15 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-7169:


Status: Open  (was: Patch Available)

 HiveServer2 in Http Mode should have a configurable IdleMaxTime timeout
 ---

 Key: HIVE-7169
 URL: https://issues.apache.org/jira/browse/HIVE-7169
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Attachments: HIVE-7169.1.patch, HIVE-7169.2.patch, HIVE-7169.3.patch


 Currently, HiveServer2 uses the Jetty server to start the HTTP server. The 
 connector used for the Thrift HTTP CLI service keeps the default maximum 
 idle time as its timeout, as seen in 
 http://grepcode.com/file/repo1.maven.org/maven2/org.eclipse.jetty/jetty-server/7.0.0.v20091005/org/eclipse/jetty/server/AbstractConnector.java#AbstractConnector.0_maxIdleTime.
 This should be configurable via 
 connector.setMaxIdleTime(maxIdleTime);
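A stdlib-only sketch of making such a timeout configurable rather than hard-coded follows; the property key and default value below are illustrative assumptions, not necessarily the names the patch uses:

```java
import java.util.Properties;

// Sketch: resolve a configurable max-idle timeout with a fallback default,
// in the spirit of HIVE-7169. The property key and default are illustrative,
// not Hive's actual configuration names.
class IdleTimeoutConfig {
    // Fallback when the user sets nothing.
    static final int DEFAULT_MAX_IDLE_MS = 200_000;

    static int resolveMaxIdleMs(Properties conf) {
        String v = conf.getProperty("hive.server2.thrift.http.max.idle.time");
        return v == null ? DEFAULT_MAX_IDLE_MS : Integer.parseInt(v);
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(resolveMaxIdleMs(conf)); // prints the default, 200000
        conf.setProperty("hive.server2.thrift.http.max.idle.time", "30000");
        System.out.println(resolveMaxIdleMs(conf)); // prints 30000
        // HiveServer2 would then apply the resolved value to the Jetty connector:
        // connector.setMaxIdleTime(resolveMaxIdleMs(conf));
    }
}
```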





[jira] [Updated] (HIVE-7169) HiveServer2 in Http Mode should have a configurable IdleMaxTime timeout

2014-08-15 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-7169:


Attachment: HIVE-7169.4.patch

 HiveServer2 in Http Mode should have a configurable IdleMaxTime timeout
 ---

 Key: HIVE-7169
 URL: https://issues.apache.org/jira/browse/HIVE-7169
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Attachments: HIVE-7169.1.patch, HIVE-7169.2.patch, HIVE-7169.3.patch, 
 HIVE-7169.4.patch


 Currently, HiveServer2 uses the Jetty server to start the HTTP server. The 
 connector used for the Thrift HTTP CLI service keeps the default maximum 
 idle time as its timeout, as seen in 
 http://grepcode.com/file/repo1.maven.org/maven2/org.eclipse.jetty/jetty-server/7.0.0.v20091005/org/eclipse/jetty/server/AbstractConnector.java#AbstractConnector.0_maxIdleTime.
 This should be configurable via 
 connector.setMaxIdleTime(maxIdleTime);





[jira] [Updated] (HIVE-7169) HiveServer2 in Http Mode should have a configurable IdleMaxTime timeout

2014-08-15 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-7169:


Status: Patch Available  (was: Open)

 HiveServer2 in Http Mode should have a configurable IdleMaxTime timeout
 ---

 Key: HIVE-7169
 URL: https://issues.apache.org/jira/browse/HIVE-7169
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Attachments: HIVE-7169.1.patch, HIVE-7169.2.patch, HIVE-7169.3.patch, 
 HIVE-7169.4.patch


 Currently, HiveServer2 uses the Jetty server to start the HTTP server. The 
 connector used for the Thrift HTTP CLI service keeps the default maximum 
 idle time as its timeout, as seen in 
 http://grepcode.com/file/repo1.maven.org/maven2/org.eclipse.jetty/jetty-server/7.0.0.v20091005/org/eclipse/jetty/server/AbstractConnector.java#AbstractConnector.0_maxIdleTime.
 This should be configurable via 
 connector.setMaxIdleTime(maxIdleTime);





[jira] [Commented] (HIVE-7735) Implement Char, Varchar in ParquetSerDe

2014-08-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14098245#comment-14098245
 ] 

Hive QA commented on HIVE-7735:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12662012/HIVE-7735.1.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/330/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/330/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-330/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Tests exited with: IllegalArgumentException: No propertifies found in file: 
mainProperties for property: spark.query.files
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12662012

 Implement Char, Varchar in ParquetSerDe
 ---

 Key: HIVE-7735
 URL: https://issues.apache.org/jira/browse/HIVE-7735
 Project: Hive
  Issue Type: Sub-task
  Components: Serializers/Deserializers
Reporter: Mohit Sabharwal
Assignee: Mohit Sabharwal
  Labels: Parquet
 Attachments: HIVE-7735.1.patch, HIVE-7735.patch


 This JIRA is to implement CHAR and VARCHAR support in Parquet SerDe.
 Both are represented in Parquet as PrimitiveType binary and OriginalType UTF8.





[jira] [Commented] (HIVE-7169) HiveServer2 in Http Mode should have a configurable IdleMaxTime timeout

2014-08-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14098254#comment-14098254
 ] 

Hive QA commented on HIVE-7169:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12662013/HIVE-7169.4.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/331/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/331/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-331/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Tests exited with: IllegalArgumentException: No propertifies found in file: 
mainProperties for property: spark.query.files
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12662013

 HiveServer2 in Http Mode should have a configurable IdleMaxTime timeout
 ---

 Key: HIVE-7169
 URL: https://issues.apache.org/jira/browse/HIVE-7169
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Attachments: HIVE-7169.1.patch, HIVE-7169.2.patch, HIVE-7169.3.patch, 
 HIVE-7169.4.patch


 Currently, HiveServer2 uses the Jetty server to start the HTTP server. The 
 connector used for the Thrift HTTP CLI service keeps the default maximum 
 idle time as its timeout, as seen in 
 http://grepcode.com/file/repo1.maven.org/maven2/org.eclipse.jetty/jetty-server/7.0.0.v20091005/org/eclipse/jetty/server/AbstractConnector.java#AbstractConnector.0_maxIdleTime.
 This should be configurable via 
 connector.setMaxIdleTime(maxIdleTime);





[jira] [Updated] (HIVE-7738) tez select sum(decimal) from union all of decimal and null throws NPE

2014-08-15 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-7738:
--

Attachment: HIVE-7738.patch

 tez select sum(decimal) from union all of decimal and null throws NPE
 -

 Key: HIVE-7738
 URL: https://issues.apache.org/jira/browse/HIVE-7738
 Project: Hive
  Issue Type: Bug
  Components: Tez
Affects Versions: 0.13.1
Reporter: Alexander Pivovarov
 Attachments: HIVE-7738.patch


 If this query is run using the tez engine, hive throws an NPE:
 {code}
 select sum(a) from (
   select cast(1.1 as decimal) a from dual
   union all
   select cast(null as decimal) a from dual
 ) t;
 {code}
 hive> select sum(a) from (
     > select cast(1.1 as decimal) a from dual
     > union all
     > select cast(null as decimal) a from dual
     > ) t;
 Query ID = apivovarov_20140814200909_438385b2-4147-47bc-98a0-a01567bbb5c5
 Total jobs = 1
 Launching Job 1 out of 1
 Status: Running (application id: application_1407388228332_5616)
 Map 1: -/-   Map 4: -/-   Reducer 3: 0/1
 Map 1: 0/1   Map 4: 0/1   Reducer 3: 0/1
 Map 1: 0/1   Map 4: 0/1   Reducer 3: 0/1
 Map 1: 0/1   Map 4: 1/1   Reducer 3: 0/1
 Map 1: 0/1   Map 4: 1/1   Reducer 3: 0/1
 Map 1: 0/1   Map 4: 1/1   Reducer 3: 0/1
 Map 1: 0/1   Map 4: 1/1   Reducer 3: 0/1
 Map 1: 0/1   Map 4: 1/1   Reducer 3: 0/1
 Status: Failed
 Vertex failed, vertexName=Map 1, vertexId=vertex_1407388228332_5616_1_02, 
 diagnostics=[Task failed, taskId=task_1407388228332_5616_1_02_00, 
 diagnostics=[AttemptID:attempt_1407388228332_5616_1_02_00_0 Info:Error: 
 java.lang.RuntimeException: java.lang.RuntimeException: Map operator 
 initialization failed
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:188)
   at 
 org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:307)
   at 
 org.apache.hadoop.mapred.YarnTezDagChild$5.run(YarnTezDagChild.java:564)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
   at 
 org.apache.hadoop.mapred.YarnTezDagChild.main(YarnTezDagChild.java:553)
 Caused by: java.lang.RuntimeException: Map operator initialization failed
   at 
 org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:145)
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:164)
   ... 6 more
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableConstantHiveDecimalObjectInspector.precision(WritableConstantHiveDecimalObjectInspector.java:61)
   at 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDAFSum$GenericUDAFSumHiveDecimal.init(GenericUDAFSum.java:106)
   at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.initializeOp(GroupByOperator.java:362)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:460)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:416)
   at 
 org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:67)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:460)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:416)
   at 
 org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:67)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:460)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:416)
   at 
 org.apache.hadoop.hive.ql.exec.TableScanOperator.initializeOp(TableScanOperator.java:189)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at 
 org.apache.hadoop.hive.ql.exec.MapOperator.initializeOp(MapOperator.java:425)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at 
 org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:121)
   ... 7 more
 Container released by application, 
 AttemptID:attempt_1407388228332_5616_1_02_00_1 Info:Error: 
 java.lang.RuntimeException: java.lang.RuntimeException: Map operator 
 initialization failed
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:188)
   at 
 

[jira] [Updated] (HIVE-7738) tez select sum(decimal) from union all of decimal and null throws NPE

2014-08-15 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-7738:
--

Status: Patch Available  (was: Open)

 tez select sum(decimal) from union all of decimal and null throws NPE
 -

 Key: HIVE-7738
 URL: https://issues.apache.org/jira/browse/HIVE-7738
 Project: Hive
  Issue Type: Bug
  Components: Tez
Affects Versions: 0.13.1
Reporter: Alexander Pivovarov
 Attachments: HIVE-7738.patch


 If this query is run using the tez engine, hive throws an NPE:
 {code}
 select sum(a) from (
   select cast(1.1 as decimal) a from dual
   union all
   select cast(null as decimal) a from dual
 ) t;
 {code}
 hive> select sum(a) from (
     > select cast(1.1 as decimal) a from dual
     > union all
     > select cast(null as decimal) a from dual
     > ) t;
 Query ID = apivovarov_20140814200909_438385b2-4147-47bc-98a0-a01567bbb5c5
 Total jobs = 1
 Launching Job 1 out of 1
 Status: Running (application id: application_1407388228332_5616)
 Map 1: -/-   Map 4: -/-   Reducer 3: 0/1
 Map 1: 0/1   Map 4: 0/1   Reducer 3: 0/1
 Map 1: 0/1   Map 4: 0/1   Reducer 3: 0/1
 Map 1: 0/1   Map 4: 1/1   Reducer 3: 0/1
 Map 1: 0/1   Map 4: 1/1   Reducer 3: 0/1
 Map 1: 0/1   Map 4: 1/1   Reducer 3: 0/1
 Map 1: 0/1   Map 4: 1/1   Reducer 3: 0/1
 Map 1: 0/1   Map 4: 1/1   Reducer 3: 0/1
 Status: Failed
 Vertex failed, vertexName=Map 1, vertexId=vertex_1407388228332_5616_1_02, 
 diagnostics=[Task failed, taskId=task_1407388228332_5616_1_02_00, 
 diagnostics=[AttemptID:attempt_1407388228332_5616_1_02_00_0 Info:Error: 
 java.lang.RuntimeException: java.lang.RuntimeException: Map operator 
 initialization failed
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:188)
   at 
 org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:307)
   at 
 org.apache.hadoop.mapred.YarnTezDagChild$5.run(YarnTezDagChild.java:564)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
   at 
 org.apache.hadoop.mapred.YarnTezDagChild.main(YarnTezDagChild.java:553)
 Caused by: java.lang.RuntimeException: Map operator initialization failed
   at 
 org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:145)
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:164)
   ... 6 more
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableConstantHiveDecimalObjectInspector.precision(WritableConstantHiveDecimalObjectInspector.java:61)
   at 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDAFSum$GenericUDAFSumHiveDecimal.init(GenericUDAFSum.java:106)
   at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.initializeOp(GroupByOperator.java:362)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:460)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:416)
   at 
 org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:67)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:460)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:416)
   at 
 org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:67)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:460)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:416)
   at 
 org.apache.hadoop.hive.ql.exec.TableScanOperator.initializeOp(TableScanOperator.java:189)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at 
 org.apache.hadoop.hive.ql.exec.MapOperator.initializeOp(MapOperator.java:425)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at 
 org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:121)
   ... 7 more
 Container released by application, 
 AttemptID:attempt_1407388228332_5616_1_02_00_1 Info:Error: 
 java.lang.RuntimeException: java.lang.RuntimeException: Map operator 
 initialization failed
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:188)
   at 
 

[jira] [Updated] (HIVE-7738) tez select sum(decimal) from union all of decimal and null throws NPE

2014-08-15 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-7738:
--

Component/s: (was: Tez)
 Serializers/Deserializers

 tez select sum(decimal) from union all of decimal and null throws NPE
 -

 Key: HIVE-7738
 URL: https://issues.apache.org/jira/browse/HIVE-7738
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Affects Versions: 0.13.1
Reporter: Alexander Pivovarov
 Attachments: HIVE-7738.patch


 If this query is run using the tez engine, hive throws an NPE:
 {code}
 select sum(a) from (
   select cast(1.1 as decimal) a from dual
   union all
   select cast(null as decimal) a from dual
 ) t;
 {code}
 hive> select sum(a) from (
     > select cast(1.1 as decimal) a from dual
     > union all
     > select cast(null as decimal) a from dual
     > ) t;
 Query ID = apivovarov_20140814200909_438385b2-4147-47bc-98a0-a01567bbb5c5
 Total jobs = 1
 Launching Job 1 out of 1
 Status: Running (application id: application_1407388228332_5616)
 Map 1: -/-   Map 4: -/-   Reducer 3: 0/1
 Map 1: 0/1   Map 4: 0/1   Reducer 3: 0/1
 Map 1: 0/1   Map 4: 0/1   Reducer 3: 0/1
 Map 1: 0/1   Map 4: 1/1   Reducer 3: 0/1
 Map 1: 0/1   Map 4: 1/1   Reducer 3: 0/1
 Map 1: 0/1   Map 4: 1/1   Reducer 3: 0/1
 Map 1: 0/1   Map 4: 1/1   Reducer 3: 0/1
 Map 1: 0/1   Map 4: 1/1   Reducer 3: 0/1
 Status: Failed
 Vertex failed, vertexName=Map 1, vertexId=vertex_1407388228332_5616_1_02, 
 diagnostics=[Task failed, taskId=task_1407388228332_5616_1_02_00, 
 diagnostics=[AttemptID:attempt_1407388228332_5616_1_02_00_0 Info:Error: 
 java.lang.RuntimeException: java.lang.RuntimeException: Map operator 
 initialization failed
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:188)
   at 
 org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:307)
   at 
 org.apache.hadoop.mapred.YarnTezDagChild$5.run(YarnTezDagChild.java:564)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
   at 
 org.apache.hadoop.mapred.YarnTezDagChild.main(YarnTezDagChild.java:553)
 Caused by: java.lang.RuntimeException: Map operator initialization failed
   at 
 org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:145)
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:164)
   ... 6 more
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableConstantHiveDecimalObjectInspector.precision(WritableConstantHiveDecimalObjectInspector.java:61)
   at 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDAFSum$GenericUDAFSumHiveDecimal.init(GenericUDAFSum.java:106)
   at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.initializeOp(GroupByOperator.java:362)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:460)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:416)
   at 
 org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:67)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:460)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:416)
   at 
 org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:67)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:460)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:416)
   at 
 org.apache.hadoop.hive.ql.exec.TableScanOperator.initializeOp(TableScanOperator.java:189)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at 
 org.apache.hadoop.hive.ql.exec.MapOperator.initializeOp(MapOperator.java:425)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at 
 org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:121)
   ... 7 more
 Container released by application, 
 AttemptID:attempt_1407388228332_5616_1_02_00_1 Info:Error: 
 java.lang.RuntimeException: java.lang.RuntimeException: Map operator 
 initialization failed
   at 
 

[jira] [Commented] (HIVE-7738) tez select sum(decimal) from union all of decimal and null throws NPE

2014-08-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14098271#comment-14098271
 ] 

Hive QA commented on HIVE-7738:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12662024/HIVE-7738.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/332/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/332/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-332/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Tests exited with: IllegalArgumentException: No propertifies found in file: 
mainProperties for property: spark.query.files
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12662024

 tez select sum(decimal) from union all of decimal and null throws NPE
 -

 Key: HIVE-7738
 URL: https://issues.apache.org/jira/browse/HIVE-7738
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Affects Versions: 0.13.1
Reporter: Alexander Pivovarov
 Attachments: HIVE-7738.patch


 If this query is run using the tez engine, hive throws an NPE:
 {code}
 select sum(a) from (
   select cast(1.1 as decimal) a from dual
   union all
   select cast(null as decimal) a from dual
 ) t;
 {code}
 hive select sum(a) from (
select cast(1.1 as decimal) a from dual
union all
select cast(null as decimal) a from dual
  ) t;
 Query ID = apivovarov_20140814200909_438385b2-4147-47bc-98a0-a01567bbb5c5
 Total jobs = 1
 Launching Job 1 out of 1
 Status: Running (application id: application_1407388228332_5616)
 Map 1: -/-  Map 4: -/-  Reducer 3: 0/1
 Map 1: 0/1  Map 4: 0/1  Reducer 3: 0/1
 Map 1: 0/1  Map 4: 0/1  Reducer 3: 0/1
 Map 1: 0/1  Map 4: 1/1  Reducer 3: 0/1
 Map 1: 0/1  Map 4: 1/1  Reducer 3: 0/1
 Map 1: 0/1  Map 4: 1/1  Reducer 3: 0/1
 Map 1: 0/1  Map 4: 1/1  Reducer 3: 0/1
 Map 1: 0/1  Map 4: 1/1  Reducer 3: 0/1
 Status: Failed
 Vertex failed, vertexName=Map 1, vertexId=vertex_1407388228332_5616_1_02, 
 diagnostics=[Task failed, taskId=task_1407388228332_5616_1_02_00, 
 diagnostics=[AttemptID:attempt_1407388228332_5616_1_02_00_0 Info:Error: 
 java.lang.RuntimeException: java.lang.RuntimeException: Map operator 
 initialization failed
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:188)
   at 
 org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:307)
   at 
 org.apache.hadoop.mapred.YarnTezDagChild$5.run(YarnTezDagChild.java:564)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
   at 
 org.apache.hadoop.mapred.YarnTezDagChild.main(YarnTezDagChild.java:553)
 Caused by: java.lang.RuntimeException: Map operator initialization failed
   at 
 org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:145)
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:164)
   ... 6 more
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableConstantHiveDecimalObjectInspector.precision(WritableConstantHiveDecimalObjectInspector.java:61)
   at 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDAFSum$GenericUDAFSumHiveDecimal.init(GenericUDAFSum.java:106)
   at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.initializeOp(GroupByOperator.java:362)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:460)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:416)
   at 
 org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:67)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:460)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:416)
   at 
 org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:67)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:460)
   at 
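The failing call in the trace above is precision() on a constant decimal object inspector whose wrapped value is null. A minimal plain-Java sketch of that pattern and the obvious null guard (class and method names here are illustrative, not Hive's actual code; 38 is Hive's maximum decimal precision):

```java
public class NullConstantPrecisionDemo {
    // mirrors the failing line: no null check before dereferencing the constant
    static int precision(java.math.BigDecimal constant) {
        return constant.precision(); // throws NullPointerException for a null constant
    }

    // the obvious fix: fall back to a default precision for null constants
    static int safePrecision(java.math.BigDecimal constant, int defaultPrecision) {
        return constant == null ? defaultPrecision : constant.precision();
    }

    public static void main(String[] args) {
        System.out.println(safePrecision(null, 38));
        System.out.println(safePrecision(new java.math.BigDecimal("1.1"), 38));
        try {
            precision(null);
        } catch (NullPointerException e) {
            System.out.println("NullPointerException, as in the stack trace");
        }
    }
}
```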
 

[jira] [Commented] (HIVE-7739) TestSparkCliDriver should not use includeQueryFiles

2014-08-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14098272#comment-14098272
 ] 

Hive QA commented on HIVE-7739:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12661991/HIVE-7739.1-spark.patch

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 5879 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample_islocalmode_hook
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynpart_sort_opt_vectorization
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_fs_default_name2
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/44/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/44/console
Test logs: 
http://ec2-54-176-176-199.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-44/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12661991

 TestSparkCliDriver should not use includeQueryFiles
 ---

 Key: HIVE-7739
 URL: https://issues.apache.org/jira/browse/HIVE-7739
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-7739.1-spark.patch


 By using includesQueryFile, TestSparkCliDriver cannot be used with -Dqfile or 
 -Dqfile_regex. These options are very useful, so let's remove includesQueryFile.
 spark.query.files in testconfiguration.properties will still be used when run 
 via the pre-commit tests to generate -Dqfiles.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7735) Implement Char, Varchar in ParquetSerDe

2014-08-15 Thread Szehon Ho (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-7735:


Attachment: HIVE-7735.1.patch

 Implement Char, Varchar in ParquetSerDe
 ---

 Key: HIVE-7735
 URL: https://issues.apache.org/jira/browse/HIVE-7735
 Project: Hive
  Issue Type: Sub-task
  Components: Serializers/Deserializers
Reporter: Mohit Sabharwal
Assignee: Mohit Sabharwal
  Labels: Parquet
 Attachments: HIVE-7735.1.patch, HIVE-7735.1.patch, HIVE-7735.patch


 This JIRA is to implement CHAR and VARCHAR support in Parquet SerDe.
 Both are represented in Parquet as PrimitiveType binary and OriginalType UTF8.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7169) HiveServer2 in Http Mode should have a configurable IdleMaxTime timeout

2014-08-15 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14098314#comment-14098314
 ] 

Lefty Leverenz commented on HIVE-7169:
--

Thanks for fixing the capitalization, [~hsubramaniyan].

+1 for the parameter description.

 HiveServer2 in Http Mode should have a configurable IdleMaxTime timeout
 ---

 Key: HIVE-7169
 URL: https://issues.apache.org/jira/browse/HIVE-7169
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Attachments: HIVE-7169.1.patch, HIVE-7169.2.patch, HIVE-7169.3.patch, 
 HIVE-7169.4.patch


 Currently, HiveServer2 uses Jetty to start the HTTP server. The connector used 
 for this Thrift HTTP CLI service keeps the default maximum idle time as its 
 timeout, as mentioned in 
 http://grepcode.com/file/repo1.maven.org/maven2/org.eclipse.jetty/jetty-server/7.0.0.v20091005/org/eclipse/jetty/server/AbstractConnector.java#AbstractConnector.0_maxIdleTime.
 This should be manually configurable using 
 connector.setMaxIdleTime(maxIdleTime);
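A minimal sketch of the requested behavior, without a Jetty dependency: read a configurable idle timeout with a fallback to the connector default. The property name and the 200000 ms default are assumptions for illustration (the default is taken from the Jetty 7 source linked above), not necessarily the final Hive config key.

```java
public class IdleTimeoutConfig {
    // Jetty 7 AbstractConnector default maxIdleTime, per the linked source (assumption)
    static final int DEFAULT_MAX_IDLE_TIME_MS = 200_000;

    // hypothetical property name, shown only to illustrate the pattern
    static int maxIdleTime(java.util.Properties conf) {
        String v = conf.getProperty("hive.server2.thrift.http.max.idle.time");
        return v == null ? DEFAULT_MAX_IDLE_TIME_MS : Integer.parseInt(v);
    }

    public static void main(String[] args) {
        java.util.Properties p = new java.util.Properties();
        System.out.println(maxIdleTime(p));                                    // default
        p.setProperty("hive.server2.thrift.http.max.idle.time", "60000");
        System.out.println(maxIdleTime(p));                                    // configured
    }
}
```

The resolved value would then be passed to connector.setMaxIdleTime(...) when building the HTTP connector.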



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6806) CREATE TABLE should support STORED AS AVRO

2014-08-15 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-6806:
-

Labels: Avro  (was: Avro TODOC14)

 CREATE TABLE should support STORED AS AVRO
 --

 Key: HIVE-6806
 URL: https://issues.apache.org/jira/browse/HIVE-6806
 Project: Hive
  Issue Type: New Feature
  Components: Serializers/Deserializers
Affects Versions: 0.12.0
Reporter: Jeremy Beard
Assignee: Ashish Kumar Singh
Priority: Minor
  Labels: Avro
 Fix For: 0.14.0

 Attachments: HIVE-6806.1.patch, HIVE-6806.2.patch, HIVE-6806.3.patch, 
 HIVE-6806.patch


 Avro is well established and widely used within Hive; however, creating 
 Avro-backed tables requires messily listing the SerDe, InputFormat and 
 OutputFormat classes.
 As with HIVE-5783 for Parquet, Hive would be easier to use if it had 
 native Avro support.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7446) Add support to ALTER TABLE .. ADD COLUMN to Avro backed tables

2014-08-15 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14098334#comment-14098334
 ] 

Lefty Leverenz commented on HIVE-7446:
--

Good release note and docs, thanks [~singhashish].  I added links back to this 
jira.

 Add support to ALTER TABLE .. ADD COLUMN to Avro backed tables
 --

 Key: HIVE-7446
 URL: https://issues.apache.org/jira/browse/HIVE-7446
 Project: Hive
  Issue Type: New Feature
Reporter: Ashish Kumar Singh
Assignee: Ashish Kumar Singh
 Fix For: 0.14.0

 Attachments: HIVE-7446.1.patch, HIVE-7446.patch


 HIVE-6806 adds native support for creating a Hive table stored as Avro. It 
 would be good to add ALTER TABLE .. ADD COLUMN support for Avro-backed 
 tables.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7446) Add support to ALTER TABLE .. ADD COLUMN to Avro backed tables

2014-08-15 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-7446:
-

Labels:   (was: TODOC14)

 Add support to ALTER TABLE .. ADD COLUMN to Avro backed tables
 --

 Key: HIVE-7446
 URL: https://issues.apache.org/jira/browse/HIVE-7446
 Project: Hive
  Issue Type: New Feature
Reporter: Ashish Kumar Singh
Assignee: Ashish Kumar Singh
 Fix For: 0.14.0

 Attachments: HIVE-7446.1.patch, HIVE-7446.patch


 HIVE-6806 adds native support for creating a Hive table stored as Avro. It 
 would be good to add ALTER TABLE .. ADD COLUMN support for Avro-backed 
 tables.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 24713: HIVE-7735 : Implement Char, Varchar in ParquetSerDe

2014-08-15 Thread Szehon Ho

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24713/#review50710
---


Looks good overall to me, one minor suggestion below.


ql/src/java/org/apache/hadoop/hive/ql/metadata/VirtualColumn.java
https://reviews.apache.org/r/24713/#comment88575

Can we put VCols in a set for more efficiency, and also can we use Guava's 
Iterables to make this logic cleaner?


- Szehon Ho


On Aug. 14, 2014, 10:53 p.m., Mohit Sabharwal wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/24713/
 ---
 
 (Updated Aug. 14, 2014, 10:53 p.m.)
 
 
 Review request for hive.
 
 
 Bugs: HIVE-7735
 https://issues.apache.org/jira/browse/HIVE-7735
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 HIVE-7735 : Implement Char, Varchar in ParquetSerDe
 
 - Since string, char and varchar are all represented as the same parquet
 type (primitive type binary, original type utf8), this patch plumbs the
 hive column types into ETypeConverter to distinguish between the three.
 
 - Removes Decimal related dead code in ArrayWritableObjectInspector,
 (decimal is supported in Parquet SerDe) 
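Since string, char and varchar all arrive from Parquet as the same binary/UTF8 bytes, the distinction has to come from the Hive column type, as the description says. A hedged standalone sketch of that dispatch (enum and method names are illustrative, not the actual ETypeConverter API):

```java
public class TypeDispatchDemo {
    enum HiveType { STRING, CHAR, VARCHAR } // illustrative stand-in for Hive's type info

    // picks the conversion based on the Hive column type, since the
    // Parquet physical type (binary/UTF8) is identical for all three
    static String convert(byte[] binary, HiveType hiveType, int maxLength) {
        String s = new String(binary, java.nio.charset.StandardCharsets.UTF_8);
        switch (hiveType) {
            case CHAR:    // fixed length: truncate or right-pad with spaces
                s = s.length() > maxLength ? s.substring(0, maxLength)
                                           : String.format("%-" + maxLength + "s", s);
                break;
            case VARCHAR: // bounded length: truncate only
                if (s.length() > maxLength) s = s.substring(0, maxLength);
                break;
            default:      // STRING: pass through unchanged
        }
        return s;
    }

    public static void main(String[] args) {
        byte[] b = "hive".getBytes(java.nio.charset.StandardCharsets.UTF_8);
        System.out.println("[" + convert(b, HiveType.CHAR, 6) + "]");
        System.out.println(convert(b, HiveType.VARCHAR, 2));
        System.out.println(convert(b, HiveType.STRING, 0));
    }
}
```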
 
 
 Diffs
 -
 
   data/files/parquet_types.txt 9d81c3c3130cb94ae2bc308d511b0e24a60d4b8e 
   
 ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ArrayWritableGroupConverter.java
  582a5dfdaccaa25d46bfb515248eeb4bb84bedc5 
   
 ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/DataWritableGroupConverter.java
  0e310fbfb748d5409ff3c0d8cd8327bec9988ecf 
   
 ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/DataWritableRecordConverter.java
  7762afea4dda8cb4be4756eef43abec566ea8444 
   
 ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java 
 67ce15187a33d58fda7ff5b629339bd89d0e5e54 
   
 ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveGroupConverter.java
  524a2937e39a4821a856c8e25b14633ade89ea49 
   
 ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveSchemaConverter.java
  99901f0f57328db6fb2a260f7b7d76ded6f39558 
   
 ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java
  d6be4bdfc1502cf79c184726d88eb0bd94fb2b02 
   
 ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ArrayWritableObjectInspector.java
  47bf69ce7cb6f474f9f48dd693a7915475a1d9cb 
   
 ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveSerDe.java 
 e3e327c7b657cdd397dd2b4dddf40187c65ce901 
   ql/src/java/org/apache/hadoop/hive/ql/metadata/VirtualColumn.java 
 0637d46f2f7162c8d617c761e817dcf396fc94fe 
   
 ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestHiveSchemaConverter.java 
 b87cf7449679a9b6da997010056e388fb3de9945 
   ql/src/test/queries/clientnegative/parquet_char.q 
 745a7867264e321c079d8146f60d14ae186bbc29 
   ql/src/test/queries/clientnegative/parquet_varchar.q 
 55825f76dc240c54ef451ceec12adee23f12b36c 
   ql/src/test/queries/clientpositive/parquet_types.q 
 cb0dcfdf2d637854a84b165f8565fcb683617696 
   ql/src/test/results/clientnegative/parquet_char.q.out 
 eeaf33b3cca7ccc116fcec4bf11786f22d59c27f 
   ql/src/test/results/clientnegative/parquet_timestamp.q.out 
 00973b7e1f6360ce830a8baa4b959491ccc87a9b 
   ql/src/test/results/clientnegative/parquet_varchar.q.out 
 c03a5b6bc991f12db66b7779c37b86f7a461ee1b 
   ql/src/test/results/clientpositive/parquet_types.q.out 
 dc6dc73479a8df3cd36bebfc8b5919893be33bcd 
   serde/src/java/org/apache/hadoop/hive/serde2/Deserializer.java 
 ade3b5f081eb71e5cf4e639aff8bff6447d68dfc 
   serde/src/java/org/apache/hadoop/hive/serde2/typeinfo/TypeInfo.java 
 e7f3f4837ab253a825a7210f56f595b2403e7385 
 
 Diff: https://reviews.apache.org/r/24713/diff/
 
 
 Testing
 ---
 
 - Added char, varchar types in parquet_types q-test.
 - Added unit test for char, varchar in TestHiveSchemaConverter
 - Removed char, varchar negative q-test files.
 
 
 Thanks,
 
 Mohit Sabharwal
 




[jira] [Commented] (HIVE-7735) Implement Char, Varchar in ParquetSerDe

2014-08-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14098345#comment-14098345
 ] 

Hive QA commented on HIVE-7735:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12662026/HIVE-7735.1.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 5808 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_types
org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataPrimitiveTypes
org.apache.hive.jdbc.miniHS2.TestHiveServer2.testConnection
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/334/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/334/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-334/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12662026

 Implement Char, Varchar in ParquetSerDe
 ---

 Key: HIVE-7735
 URL: https://issues.apache.org/jira/browse/HIVE-7735
 Project: Hive
  Issue Type: Sub-task
  Components: Serializers/Deserializers
Reporter: Mohit Sabharwal
Assignee: Mohit Sabharwal
  Labels: Parquet
 Attachments: HIVE-7735.1.patch, HIVE-7735.1.patch, HIVE-7735.patch


 This JIRA is to implement CHAR and VARCHAR support in Parquet SerDe.
 Both are represented in Parquet as PrimitiveType binary and OriginalType UTF8.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7528) Support cluster by and distributed by

2014-08-15 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14098360#comment-14098360
 ] 

Rui Li commented on HIVE-7528:
--

I've tried simple distribute/cluster by queries and they can run successfully.

 Support cluster by and distributed by
 -

 Key: HIVE-7528
 URL: https://issues.apache.org/jira/browse/HIVE-7528
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Rui Li

 clustered by = distributed by + sort by, so this is related to HIVE-7527. If 
 sort by is in place, the assumption is that we don't need to do anything 
 about distributed by or clustered by. Still, we need to confirm and verify.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-1434) Cassandra Storage Handler

2014-08-15 Thread Hari Sekhon (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14098458#comment-14098458
 ] 

Hari Sekhon commented on HIVE-1434:
---

Where is the new JIRA for this?

This seems like quite an important storage handler: native Apache Cassandra 
support would allow bulk parallel data transfers between Cassandra and 
Hive-on-HDFS clusters.

 Cassandra Storage Handler
 -

 Key: HIVE-1434
 URL: https://issues.apache.org/jira/browse/HIVE-1434
 Project: Hive
  Issue Type: New Feature
Affects Versions: 0.7.0
Reporter: Edward Capriolo
Assignee: Edward Capriolo
 Attachments: HIVE-1434-r1182878.patch, cas-handle.tar.gz, 
 cass_handler.diff, hive-1434-1.txt, hive-1434-2-patch.txt, 
 hive-1434-2011-02-26.patch.txt, hive-1434-2011-03-07.patch.txt, 
 hive-1434-2011-03-07.patch.txt, hive-1434-2011-03-14.patch.txt, 
 hive-1434-3-patch.txt, hive-1434-4-patch.txt, hive-1434-5.patch.txt, 
 hive-1434.2011-02-27.diff.txt, hive-cassandra.2011-02-25.txt, hive.diff


 Add a cassandra storage handler.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7694) SMB join on tables differing by number of sorted by columns with same join prefix fails

2014-08-15 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14098480#comment-14098480
 ] 

Suma Shivaprasad commented on HIVE-7694:


Review request - https://reviews.apache.org/r/24630/

 SMB join on tables differing by number of sorted by columns with same join 
 prefix fails
 ---

 Key: HIVE-7694
 URL: https://issues.apache.org/jira/browse/HIVE-7694
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.13.1
Reporter: Suma Shivaprasad
 Fix For: 0.14.0

 Attachments: HIVE-7694.1.patch, HIVE-7694.patch


 For example, if table T1, sorted by (a, b, c) and clustered by (a), is joined 
 with table T2, sorted by (a) and clustered by (a), the following exception is seen:
 {noformat}
 14/08/11 09:09:38 ERROR ql.Driver: FAILED: IndexOutOfBoundsException Index: 
 1, Size: 1
 java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
 at java.util.ArrayList.RangeCheck(ArrayList.java:547)
 at java.util.ArrayList.get(ArrayList.java:322)
 at 
 org.apache.hadoop.hive.ql.optimizer.AbstractSMBJoinProc.checkSortColsAndJoinCols(AbstractSMBJoinProc.java:378)
 at 
 org.apache.hadoop.hive.ql.optimizer.AbstractSMBJoinProc.isEligibleForBucketSortMergeJoin(AbstractSMBJoinProc.java:352)
 at 
 org.apache.hadoop.hive.ql.optimizer.AbstractSMBJoinProc.canConvertBucketMapJoinToSMBJoin(AbstractSMBJoinProc.java:119)
 at 
 org.apache.hadoop.hive.ql.optimizer.SortedMergeBucketMapjoinProc.process(SortedMergeBucketMapjoinProc.java:51)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:94)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:78)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:132)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:109)
 at 
 org.apache.hadoop.hive.ql.optimizer.SortedMergeBucketMapJoinOptimizer.transform(SortedMergeBucketMapJoinOptimizer.java:109)
 at org.apache.hadoop.hive.ql.optimizer.Optimizer.optimize(Optimizer.java:146)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:9305)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
 at 
 org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:64)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:393)
 {noformat}
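The IndexOutOfBoundsException above is the classic pattern of indexing one table's column list by the other table's (longer) list of sort columns. An illustrative standalone sketch, not Hive's actual checkSortColsAndJoinCols code, showing the failure and a prefix-only comparison:

```java
import java.util.List;

public class SortColsMismatchDemo {
    // buggy shape: iterates over T1's sort columns while indexing T2's shorter list
    static boolean sortPrefixesMatchUnsafe(List<String> t1Sort, List<String> t2Sort) {
        for (int i = 0; i < t1Sort.size(); i++) {
            if (!t1Sort.get(i).equals(t2Sort.get(i))) return false; // throws when t2 is shorter
        }
        return true;
    }

    // safe shape: compare only the shared prefix of the two sort-column lists
    static boolean sortPrefixesMatchSafe(List<String> t1Sort, List<String> t2Sort) {
        int common = Math.min(t1Sort.size(), t2Sort.size());
        for (int i = 0; i < common; i++) {
            if (!t1Sort.get(i).equals(t2Sort.get(i))) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        List<String> t1 = List.of("a", "b", "c"); // T1 sorted by (a, b, c)
        List<String> t2 = List.of("a");           // T2 sorted by (a)
        System.out.println(sortPrefixesMatchSafe(t1, t2));
        try {
            sortPrefixesMatchUnsafe(t1, t2);
        } catch (IndexOutOfBoundsException e) {
            System.out.println("IndexOutOfBoundsException, as reported");
        }
    }
}
```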



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7694) SMB join on tables differing by number of sorted by columns with same join prefix fails

2014-08-15 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14098481#comment-14098481
 ] 

Suma Shivaprasad commented on HIVE-7694:


Review request - https://reviews.apache.org/r/24630/

 SMB join on tables differing by number of sorted by columns with same join 
 prefix fails
 ---

 Key: HIVE-7694
 URL: https://issues.apache.org/jira/browse/HIVE-7694
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.13.1
Reporter: Suma Shivaprasad
 Fix For: 0.14.0

 Attachments: HIVE-7694.1.patch, HIVE-7694.patch


 For example, if table T1, sorted by (a, b, c) and clustered by (a), is joined 
 with table T2, sorted by (a) and clustered by (a), the following exception is seen:
 {noformat}
 14/08/11 09:09:38 ERROR ql.Driver: FAILED: IndexOutOfBoundsException Index: 
 1, Size: 1
 java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
 at java.util.ArrayList.RangeCheck(ArrayList.java:547)
 at java.util.ArrayList.get(ArrayList.java:322)
 at 
 org.apache.hadoop.hive.ql.optimizer.AbstractSMBJoinProc.checkSortColsAndJoinCols(AbstractSMBJoinProc.java:378)
 at 
 org.apache.hadoop.hive.ql.optimizer.AbstractSMBJoinProc.isEligibleForBucketSortMergeJoin(AbstractSMBJoinProc.java:352)
 at 
 org.apache.hadoop.hive.ql.optimizer.AbstractSMBJoinProc.canConvertBucketMapJoinToSMBJoin(AbstractSMBJoinProc.java:119)
 at 
 org.apache.hadoop.hive.ql.optimizer.SortedMergeBucketMapjoinProc.process(SortedMergeBucketMapjoinProc.java:51)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:94)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:78)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:132)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:109)
 at 
 org.apache.hadoop.hive.ql.optimizer.SortedMergeBucketMapJoinOptimizer.transform(SortedMergeBucketMapJoinOptimizer.java:109)
 at org.apache.hadoop.hive.ql.optimizer.Optimizer.optimize(Optimizer.java:146)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:9305)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
 at 
 org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:64)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:393)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-1434) Cassandra Storage Handler

2014-08-15 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14098623#comment-14098623
 ] 

Edward Capriolo commented on HIVE-1434:
---

There is going to be no JIRA. I am doing the code here: 
https://github.com/edwardcapriolo/hive-cassandra-ng/blob/master/src/main/java/io/teknek/hive/cassandra/CassandraSerde.java

Please do not share this link. I have not had time to commit the license file 
yet, and I would not want it to end up in 50 other people's GitHub repos again.

 Cassandra Storage Handler
 -

 Key: HIVE-1434
 URL: https://issues.apache.org/jira/browse/HIVE-1434
 Project: Hive
  Issue Type: New Feature
Affects Versions: 0.7.0
Reporter: Edward Capriolo
Assignee: Edward Capriolo
 Attachments: HIVE-1434-r1182878.patch, cas-handle.tar.gz, 
 cass_handler.diff, hive-1434-1.txt, hive-1434-2-patch.txt, 
 hive-1434-2011-02-26.patch.txt, hive-1434-2011-03-07.patch.txt, 
 hive-1434-2011-03-07.patch.txt, hive-1434-2011-03-14.patch.txt, 
 hive-1434-3-patch.txt, hive-1434-4-patch.txt, hive-1434-5.patch.txt, 
 hive-1434.2011-02-27.diff.txt, hive-cassandra.2011-02-25.txt, hive.diff


 Add a cassandra storage handler.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7740) qfile and qfile_regex should override includeFiles

2014-08-15 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-7740:
---

Attachment: HIVE-7740.patch

 qfile and qfile_regex should override includeFiles
 --

 Key: HIVE-7740
 URL: https://issues.apache.org/jira/browse/HIVE-7740
 Project: Hive
  Issue Type: Improvement
Reporter: Brock Noland
 Attachments: HIVE-7740.patch


 qfile and qfile_regex should override include files so they can be used by 
 devs to run tests speculatively.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-7740) qfile and qfile_regex should override includeFiles

2014-08-15 Thread Brock Noland (JIRA)
Brock Noland created HIVE-7740:
--

 Summary: qfile and qfile_regex should override includeFiles
 Key: HIVE-7740
 URL: https://issues.apache.org/jira/browse/HIVE-7740
 Project: Hive
  Issue Type: Improvement
Reporter: Brock Noland
 Attachments: HIVE-7740.patch

qfile and qfile_regex should override include files so they can be used by devs 
to run tests speculatively.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7740) qfile and qfile_regex should override includeFiles

2014-08-15 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-7740:
---

Assignee: Brock Noland
  Status: Patch Available  (was: Open)

 qfile and qfile_regex should override includeFiles
 --

 Key: HIVE-7740
 URL: https://issues.apache.org/jira/browse/HIVE-7740
 Project: Hive
  Issue Type: Improvement
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-7740.patch


 qfile and qfile_regex should override include files so they can be used by 
 devs to run tests speculatively.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7739) TestSparkCliDriver should not use includeQueryFiles

2014-08-15 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-7739:
---

Status: Open  (was: Patch Available)

Perhaps changing the test framework is not a bad idea. Personally, I was quite 
surprised that qfile* did not override the include files.

 TestSparkCliDriver should not use includeQueryFiles
 ---

 Key: HIVE-7739
 URL: https://issues.apache.org/jira/browse/HIVE-7739
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-7739.1-spark.patch


 By using includesQueryFile, TestSparkCliDriver cannot be used with -Dqfile or 
 -Dqfile_regex. These options are very useful, so let's remove includesQueryFile.
 spark.query.files in testconfiguration.properties will still be used when run 
 via the pre-commit tests to generate -Dqfiles.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7739) TestSparkCliDriver should not use includeQueryFiles

2014-08-15 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14098670#comment-14098670
 ] 

Brock Noland commented on HIVE-7739:


Linking to HIVE-7740

 TestSparkCliDriver should not use includeQueryFiles
 ---

 Key: HIVE-7739
 URL: https://issues.apache.org/jira/browse/HIVE-7739
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-7739.1-spark.patch


 By using includesQueryFile, TestSparkCliDriver cannot be used with -Dqfile or 
 -Dqfile_regex. These options are very useful, so let's remove includesQueryFile.
 spark.query.files in testconfiguration.properties will still be used when run 
 via the pre-commit tests to generate -Dqfiles.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 24609: Hive-7653: AvroSerDe does not support circular references in Schema

2014-08-15 Thread Sachin Goyal

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24609/#review50727
---


Can someone please review this small patch?
It would be really helpful.

- Sachin Goyal


On Aug. 12, 2014, 4:35 p.m., Sachin Goyal wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/24609/
 ---
 
 (Updated Aug. 12, 2014, 4:35 p.m.)
 
 
 Review request for hive.
 
 
 Bugs: HIVE-7653
 https://issues.apache.org/jira/browse/HIVE-7653
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 Hi,
 
 I have submitted a patch for the following issue:
 https://issues.apache.org/jira/browse/HIVE-7653
 
 But the build is failing due to some other issue.
 It's been failing for the past 70 builds or so, and I don't think it's related 
 to my change.
 Also, my local build of the same is passing.
 
 Can someone please help me override/fix this test-failure?
 
 Also, a code review of the above patch would be much appreciated.
 
 Thanks
 Sachin
 
 
 Diffs
 -
 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java 
 688b072 
   
 serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroObjectInspectorGenerator.java
  46cdb4f 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerializer.java 
 2bd48ca 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/InstanceCache.java 
 d848005 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/SchemaToTypeInfo.java 
 23e024f 
   serde/src/test/org/apache/hadoop/hive/serde2/avro/TestAvroSerializer.java 
 f8161da 
   serde/src/test/org/apache/hadoop/hive/serde2/avro/TestInstanceCache.java 
 1df88ee 
 
 Diff: https://reviews.apache.org/r/24609/diff/
 
 
 Testing
 ---
 
 All tests pass.
 Also added a new unit-test for the patch.
 
 
 Thanks,
 
 Sachin Goyal
 




[jira] [Updated] (HIVE-7525) Research to find out if it's possible to submit Spark jobs concurrently using shared SparkContext

2014-08-15 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-7525:
---

Issue Type: Sub-task  (was: Task)
Parent: HIVE-7292

 Research to find out if it's possible to submit Spark jobs concurrently using 
 shared SparkContext
 -

 Key: HIVE-7525
 URL: https://issues.apache.org/jira/browse/HIVE-7525
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Chao

 Refer to HIVE-7503 and SPARK-2688. Find out if it's possible to submit 
 multiple Spark jobs concurrently using a shared SparkContext. SparkClient's 
 code can be modified for this test. Here is the process:
 1. Transform rdd1 to rdd2 using some transformation.
 2. Call rdd2.cache() to persist it in memory.
 3. In two threads, call accordingly:
 Thread a: rdd2 -> rdd3; rdd3.foreach()
 Thread b: rdd2 -> rdd4; rdd4.foreach()
 It would also be good to look into the monitoring and error reporting aspects.
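The threading pattern in steps 1-3 above can be sketched with plain Java and no Spark dependency: a shared, cached intermediate result is consumed by two concurrent jobs. `CachedDataset` and `runTwoJobs` are hypothetical stand-ins for the real cached RDD and SparkContext-submitted jobs; an actual experiment would use `JavaSparkContext` and `RDD` instead.

```java
import java.util.List;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;

// Stand-in for a cached RDD: the transformation runs once and the result is memoized.
class CachedDataset<T> {
    private final Callable<List<T>> compute;
    private volatile List<T> cached;
    final AtomicInteger computeCount = new AtomicInteger();

    CachedDataset(Callable<List<T>> compute) { this.compute = compute; }

    synchronized List<T> get() throws Exception {
        if (cached == null) {
            computeCount.incrementAndGet();
            cached = compute.call();
        }
        return cached;
    }
}

public class ConcurrentJobsSketch {
    // Steps 1-3 of the test plan: rdd1 -> rdd2 (cached), then two threads
    // derive rdd3 and rdd4 from the shared rdd2 concurrently.
    public static int[] runTwoJobs(List<Integer> rdd1) throws Exception {
        CachedDataset<Integer> rdd2 = new CachedDataset<>(() ->
            rdd1.stream().map(x -> x * 2).collect(Collectors.toList()));

        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<Integer> threadA = pool.submit(() -> rdd2.get().stream().mapToInt(x -> x + 1).sum());
        Future<Integer> threadB = pool.submit(() -> rdd2.get().stream().mapToInt(x -> x * x).sum());
        int a = threadA.get();
        int b = threadB.get();
        pool.shutdown();
        return new int[] { a, b, rdd2.computeCount.get() };
    }

    public static void main(String[] args) throws Exception {
        int[] r = runTwoJobs(java.util.Arrays.asList(1, 2, 3));
        System.out.println(r[0] + " " + r[1] + " " + r[2]);
    }
}
```

This only exercises the sharing and threading shape; whether Spark's scheduler actually serves both jobs concurrently from one SparkContext is exactly what the JIRA asks to verify.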



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7729) Enable q-tests for ANALYZE TABLE feature.

2014-08-15 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-7729:
---

Issue Type: Sub-task  (was: Bug)
Parent: HIVE-7292

 Enable q-tests for ANALYZE TABLE feature.
 -

 Key: HIVE-7729
 URL: https://issues.apache.org/jira/browse/HIVE-7729
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li

 Enable q-tests for the ANALYZE TABLE feature, since the automated test 
 environment is ready.





[jira] [Updated] (HIVE-7728) Enable q-tests for TABLESAMPLE feature.

2014-08-15 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-7728:
---

Issue Type: Sub-task  (was: Bug)
Parent: HIVE-7292

 Enable q-tests for TABLESAMPLE feature.
 ---

 Key: HIVE-7728
 URL: https://issues.apache.org/jira/browse/HIVE-7728
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li

 Enable q-tests for the TABLESAMPLE feature, since the automated test 
 environment is ready.





[jira] [Commented] (HIVE-7740) qfile and qfile_regex should override includeFiles

2014-08-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14098745#comment-14098745
 ] 

Hive QA commented on HIVE-7740:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12662083/HIVE-7740.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5808 tests executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataPrimitiveTypes
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/336/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/336/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-336/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12662083

 qfile and qfile_regex should override includeFiles
 --

 Key: HIVE-7740
 URL: https://issues.apache.org/jira/browse/HIVE-7740
 Project: Hive
  Issue Type: Improvement
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-7740.patch


 qfile and qfile_regex should override include files so they can be used by 
 devs to run tests speculatively.





[jira] [Commented] (HIVE-7740) qfile and qfile_regex should override includeFiles

2014-08-15 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14098764#comment-14098764
 ] 

Szehon Ho commented on HIVE-7740:
-

+1, thanks Brock

 qfile and qfile_regex should override includeFiles
 --

 Key: HIVE-7740
 URL: https://issues.apache.org/jira/browse/HIVE-7740
 Project: Hive
  Issue Type: Improvement
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-7740.patch


 qfile and qfile_regex should override include files so they can be used by 
 devs to run tests speculatively.





[jira] [Commented] (HIVE-7680) Do not throw SQLException for HiveStatement getMoreResults and setEscapeProcessing(false)

2014-08-15 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14098799#comment-14098799
 ] 

Thejas M Nair commented on HIVE-7680:
-

Thanks for the extensive research and experiments [~apivovarov]!
I spent some more time reading up on this. Returning -1 instead of 0 for 
getUpdateCount might be a better behavior; it does look better than what we 
have. But the really correct behavior (when statement.execute indicates it is 
not a ResultSet) seems to be returning 0 the first time and returning -1 on 
subsequent calls.

This would be easy to implement using another variable in HiveStatement.

Other related changes that could potentially be made along with this are:
* getMoreResults returning an appropriate value instead of throwing an 
exception (return the value of stmtHandle.isHasResultSet() the first time it 
is called, then false for subsequent calls)
* getResultSet returning the ResultSet only the first time it is called
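The single extra variable described above could look like the following sketch. `UpdateCountSketch` is a hypothetical illustration of just the counting logic, not the actual HiveStatement code:

```java
// For a non-ResultSet statement, getUpdateCount() returns 0 on the first
// call (statement succeeded, row count unknown) and -1 afterwards (no more
// results), matching the JDBC contract discussed in HIVE-7680.
public class UpdateCountSketch {
    private final boolean hasResultSet;       // would come from stmtHandle.isHasResultSet()
    private boolean updateCountConsumed = false;

    public UpdateCountSketch(boolean hasResultSet) {
        this.hasResultSet = hasResultSet;
    }

    public int getUpdateCount() {
        if (hasResultSet) {
            return -1;                        // current result is a ResultSet
        }
        if (!updateCountConsumed) {
            updateCountConsumed = true;
            return 0;                         // first call after execute()
        }
        return -1;                            // subsequent calls
    }

    public static void main(String[] args) {
        UpdateCountSketch stmt = new UpdateCountSketch(false);
        System.out.println(stmt.getUpdateCount()); // 0
        System.out.println(stmt.getUpdateCount()); // -1
    }
}
```

The flag would be reset whenever a new statement is executed, which is why a per-statement member variable in HiveStatement is sufficient.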




 Do not throw SQLException for HiveStatement getMoreResults and 
 setEscapeProcessing(false)
 -

 Key: HIVE-7680
 URL: https://issues.apache.org/jira/browse/HIVE-7680
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.13.1
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
Priority: Minor
 Attachments: HIVE-7680.patch


 1. Some JDBC clients (e.g. SQL Workbench) call the method 
 setEscapeProcessing(false).
 It looks like setEscapeProcessing(false) should do nothing, so let's do 
 nothing instead of throwing SQLException.
 2. getMoreResults is needed in case a Statement returns several ResultSets.
 Hive does not support multiple ResultSets, so this method can safely always 
 return false.
 3. getUpdateCount. Currently this method always returns 0. Hive cannot tell 
 us how many rows were inserted. According to the JDBC spec it should return 
 -1 if the current result is a ResultSet object or there are no more results.
 If this method returns 0, then after executing an insert statement the JDBC 
 client shows that 0 rows were inserted, which is not true.
 If this method returns -1, then the JDBC client runs insert statements and 
 shows that they were executed successfully with no results returned.
 I think the latter behaviour is more correct.
 4. Some methods in the Statement class should throw 
 SQLFeatureNotSupportedException if they are not supported. The current 
 implementation throws SQLException instead, which indicates a database 
 access error.





[jira] [Comment Edited] (HIVE-7680) Do not throw SQLException for HiveStatement getMoreResults and setEscapeProcessing(false)

2014-08-15 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14098799#comment-14098799
 ] 

Thejas M Nair edited comment on HIVE-7680 at 8/15/14 5:39 PM:
--

Thanks for the extensive research and experiments [~apivovarov]!
I spent some more time reading up on this. Returning -1 instead of 0 for 
getUpdateCount might be a better behavior; it does look better than what we 
have. But the really correct behavior (when statement.execute indicates it is 
not a ResultSet) seems to be returning 0 the first time and returning -1 on 
subsequent calls. According to what I read, statements that don't update rows, 
such as create table, are expected to return 0.

This would be easy to implement using another variable in HiveStatement.

Other related changes that could potentially be made along with this are:
* getMoreResults returning an appropriate value instead of throwing an 
exception (return the value of stmtHandle.isHasResultSet() the first time it 
is called, then false for subsequent calls)
* getResultSet returning the ResultSet only the first time it is called





was (Author: thejas):
Thanks for the extensive research and experiments [~apivovarov]!
I spent some more time reading up on this. Returning -1 instead of 0 for 
getUpdateCount might be a better behavior; it does look better than what we 
have. But the really correct behavior (when statement.execute indicates it is 
not a ResultSet) seems to be returning 0 the first time and returning -1 on 
subsequent calls.

This would be easy to implement using another variable in HiveStatement.

Other related changes that could potentially be made along with this are:
* getMoreResults returning an appropriate value instead of throwing an 
exception (return the value of stmtHandle.isHasResultSet() the first time it 
is called, then false for subsequent calls)
* getResultSet returning the ResultSet only the first time it is called




 Do not throw SQLException for HiveStatement getMoreResults and 
 setEscapeProcessing(false)
 -

 Key: HIVE-7680
 URL: https://issues.apache.org/jira/browse/HIVE-7680
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.13.1
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
Priority: Minor
 Attachments: HIVE-7680.patch


 1. Some JDBC clients (e.g. SQL Workbench) call the method 
 setEscapeProcessing(false).
 It looks like setEscapeProcessing(false) should do nothing, so let's do 
 nothing instead of throwing SQLException.
 2. getMoreResults is needed in case a Statement returns several ResultSets.
 Hive does not support multiple ResultSets, so this method can safely always 
 return false.
 3. getUpdateCount. Currently this method always returns 0. Hive cannot tell 
 us how many rows were inserted. According to the JDBC spec it should return 
 -1 if the current result is a ResultSet object or there are no more results.
 If this method returns 0, then after executing an insert statement the JDBC 
 client shows that 0 rows were inserted, which is not true.
 If this method returns -1, then the JDBC client runs insert statements and 
 shows that they were executed successfully with no results returned.
 I think the latter behaviour is more correct.
 4. Some methods in the Statement class should throw 
 SQLFeatureNotSupportedException if they are not supported. The current 
 implementation throws SQLException instead, which indicates a database 
 access error.





[jira] [Comment Edited] (HIVE-7680) Do not throw SQLException for HiveStatement getMoreResults and setEscapeProcessing(false)

2014-08-15 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14098799#comment-14098799
 ] 

Thejas M Nair edited comment on HIVE-7680 at 8/15/14 5:40 PM:
--

Thanks for the extensive research and experiments [~apivovarov]!
I spent some more time reading up on this. Returning -1 instead of 0 for 
getUpdateCount might be a better behavior; it does look better than what we 
have. But the more correct behavior (when statement.execute indicates it is 
not a ResultSet) seems to be returning 0 the first time and returning -1 on 
subsequent calls. According to what I read, statements that don't update rows, 
such as create table, are expected to return 0.

This would be easy to implement using another variable in HiveStatement.

Other related changes that could potentially be made along with this are:
* getMoreResults returning an appropriate value instead of throwing an 
exception (return the value of stmtHandle.isHasResultSet() the first time it 
is called, then false for subsequent calls)
* getResultSet returning the ResultSet only the first time it is called





was (Author: thejas):
Thanks for the extensive research and experiments [~apivovarov]!
I spent some more time reading up on this. Returning -1 instead of 0 for 
getUpdateCount might be a better behavior; it does look better than what we 
have. But the really correct behavior (when statement.execute indicates it is 
not a ResultSet) seems to be returning 0 the first time and returning -1 on 
subsequent calls. According to what I read, statements that don't update rows, 
such as create table, are expected to return 0.

This would be easy to implement using another variable in HiveStatement.

Other related changes that could potentially be made along with this are:
* getMoreResults returning an appropriate value instead of throwing an 
exception (return the value of stmtHandle.isHasResultSet() the first time it 
is called, then false for subsequent calls)
* getResultSet returning the ResultSet only the first time it is called




 Do not throw SQLException for HiveStatement getMoreResults and 
 setEscapeProcessing(false)
 -

 Key: HIVE-7680
 URL: https://issues.apache.org/jira/browse/HIVE-7680
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.13.1
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
Priority: Minor
 Attachments: HIVE-7680.patch


 1. Some JDBC clients (e.g. SQL Workbench) call the method 
 setEscapeProcessing(false).
 It looks like setEscapeProcessing(false) should do nothing, so let's do 
 nothing instead of throwing SQLException.
 2. getMoreResults is needed in case a Statement returns several ResultSets.
 Hive does not support multiple ResultSets, so this method can safely always 
 return false.
 3. getUpdateCount. Currently this method always returns 0. Hive cannot tell 
 us how many rows were inserted. According to the JDBC spec it should return 
 -1 if the current result is a ResultSet object or there are no more results.
 If this method returns 0, then after executing an insert statement the JDBC 
 client shows that 0 rows were inserted, which is not true.
 If this method returns -1, then the JDBC client runs insert statements and 
 shows that they were executed successfully with no results returned.
 I think the latter behaviour is more correct.
 4. Some methods in the Statement class should throw 
 SQLFeatureNotSupportedException if they are not supported. The current 
 implementation throws SQLException instead, which indicates a database 
 access error.





[jira] [Commented] (HIVE-7680) Do not throw SQLException for HiveStatement getMoreResults and setEscapeProcessing(false)

2014-08-15 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14098812#comment-14098812
 ] 

Thejas M Nair commented on HIVE-7680:
-

[~navis] [~prasadm] Do you guys have any opinion on this?

[~apivovarov] What do the other tools show as the number of rows updated if -1 
is returned the first time? Do they just not print it in that case? I agree 
that returning 0 the first time still has the problem of potentially confusing 
users.



 Do not throw SQLException for HiveStatement getMoreResults and 
 setEscapeProcessing(false)
 -

 Key: HIVE-7680
 URL: https://issues.apache.org/jira/browse/HIVE-7680
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.13.1
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
Priority: Minor
 Attachments: HIVE-7680.patch


 1. Some JDBC clients (e.g. SQL Workbench) call the method 
 setEscapeProcessing(false).
 It looks like setEscapeProcessing(false) should do nothing, so let's do 
 nothing instead of throwing SQLException.
 2. getMoreResults is needed in case a Statement returns several ResultSets.
 Hive does not support multiple ResultSets, so this method can safely always 
 return false.
 3. getUpdateCount. Currently this method always returns 0. Hive cannot tell 
 us how many rows were inserted. According to the JDBC spec it should return 
 -1 if the current result is a ResultSet object or there are no more results.
 If this method returns 0, then after executing an insert statement the JDBC 
 client shows that 0 rows were inserted, which is not true.
 If this method returns -1, then the JDBC client runs insert statements and 
 shows that they were executed successfully with no results returned.
 I think the latter behaviour is more correct.
 4. Some methods in the Statement class should throw 
 SQLFeatureNotSupportedException if they are not supported. The current 
 implementation throws SQLException instead, which indicates a database 
 access error.





[jira] [Updated] (HIVE-7373) Hive should not remove trailing zeros for decimal numbers

2014-08-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-7373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-7373:
--

Attachment: HIVE-7373.5.patch

 Hive should not remove trailing zeros for decimal numbers
 -

 Key: HIVE-7373
 URL: https://issues.apache.org/jira/browse/HIVE-7373
 Project: Hive
  Issue Type: Bug
  Components: Types
Affects Versions: 0.13.0, 0.13.1
Reporter: Xuefu Zhang
Assignee: Sergio Peña
 Attachments: HIVE-7373.1.patch, HIVE-7373.2.patch, HIVE-7373.3.patch, 
 HIVE-7373.4.patch, HIVE-7373.5.patch


 Currently Hive blindly removes trailing zeros of a decimal input number as a 
 sort of standardization. This is questionable in theory and problematic in 
 practice.
 1. In a decimal context, the number 3.140 has a different semantic meaning 
 from the number 3.14. Removing trailing zeroes loses that meaning.
 2. In an extreme case, 0.0 has (p, s) of (1, 1). Hive removes the trailing 
 zero, and the number becomes 0, which has (p, s) of (1, 0). Thus, for a 
 decimal column of (1, 1), input such as 0.0, 0.00, and so on becomes NULL 
 because the column doesn't allow a decimal number with an integer part.
 Therefore, I propose Hive preserve the trailing zeroes (up to what the scale 
 allows). With this, in the above example, 0.0, 0.00, and 0.000 will be 
 represented as 0.0 (precision=1, scale=1) internally.
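The precision/scale loss described above can be reproduced with plain java.math.BigDecimal, used here only as an analogue of Hive's decimal handling (this is not Hive's HiveDecimal code; `TrailingZerosDemo` is a hypothetical helper):

```java
import java.math.BigDecimal;

// Shows how stripping trailing zeros changes a decimal's (precision, scale):
// 3.140 carries (precision=4, scale=3), but after stripTrailingZeros() it
// becomes 3.14 with (precision=3, scale=2), losing the "three places" info.
public class TrailingZerosDemo {
    public static int[] precisionScale(String literal, boolean strip) {
        BigDecimal d = new BigDecimal(literal);
        if (strip) {
            d = d.stripTrailingZeros();
        }
        return new int[] { d.precision(), d.scale() };
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(precisionScale("3.140", false)));
        System.out.println(java.util.Arrays.toString(precisionScale("3.140", true)));
    }
}
```

The same effect explains the (1, 1) column case: once 0.0 is normalized to scale 0, it no longer fits a column declared with scale equal to precision.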





[jira] [Updated] (HIVE-7373) Hive should not remove trailing zeros for decimal numbers

2014-08-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-7373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-7373:
--

Status: Patch Available  (was: Open)

Attached a new patch fixing the avro_decimal*.q trailing spaces issue.

 Hive should not remove trailing zeros for decimal numbers
 -

 Key: HIVE-7373
 URL: https://issues.apache.org/jira/browse/HIVE-7373
 Project: Hive
  Issue Type: Bug
  Components: Types
Affects Versions: 0.13.1, 0.13.0
Reporter: Xuefu Zhang
Assignee: Sergio Peña
 Attachments: HIVE-7373.1.patch, HIVE-7373.2.patch, HIVE-7373.3.patch, 
 HIVE-7373.4.patch, HIVE-7373.5.patch


 Currently Hive blindly removes trailing zeros of a decimal input number as a 
 sort of standardization. This is questionable in theory and problematic in 
 practice.
 1. In a decimal context, the number 3.140 has a different semantic meaning 
 from the number 3.14. Removing trailing zeroes loses that meaning.
 2. In an extreme case, 0.0 has (p, s) of (1, 1). Hive removes the trailing 
 zero, and the number becomes 0, which has (p, s) of (1, 0). Thus, for a 
 decimal column of (1, 1), input such as 0.0, 0.00, and so on becomes NULL 
 because the column doesn't allow a decimal number with an integer part.
 Therefore, I propose Hive preserve the trailing zeroes (up to what the scale 
 allows). With this, in the above example, 0.0, 0.00, and 0.000 will be 
 represented as 0.0 (precision=1, scale=1) internally.





[jira] [Updated] (HIVE-7373) Hive should not remove trailing zeros for decimal numbers

2014-08-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-7373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-7373:
--

Status: Open  (was: Patch Available)

Canceling the patch to reattach one with the trailing spaces issue fixed.

 Hive should not remove trailing zeros for decimal numbers
 -

 Key: HIVE-7373
 URL: https://issues.apache.org/jira/browse/HIVE-7373
 Project: Hive
  Issue Type: Bug
  Components: Types
Affects Versions: 0.13.1, 0.13.0
Reporter: Xuefu Zhang
Assignee: Sergio Peña
 Attachments: HIVE-7373.1.patch, HIVE-7373.2.patch, HIVE-7373.3.patch, 
 HIVE-7373.4.patch, HIVE-7373.5.patch


 Currently Hive blindly removes trailing zeros of a decimal input number as a 
 sort of standardization. This is questionable in theory and problematic in 
 practice.
 1. In a decimal context, the number 3.140 has a different semantic meaning 
 from the number 3.14. Removing trailing zeroes loses that meaning.
 2. In an extreme case, 0.0 has (p, s) of (1, 1). Hive removes the trailing 
 zero, and the number becomes 0, which has (p, s) of (1, 0). Thus, for a 
 decimal column of (1, 1), input such as 0.0, 0.00, and so on becomes NULL 
 because the column doesn't allow a decimal number with an integer part.
 Therefore, I propose Hive preserve the trailing zeroes (up to what the scale 
 allows). With this, in the above example, 0.0, 0.00, and 0.000 will be 
 represented as 0.0 (precision=1, scale=1) internally.





Re: Review Request 24467: HIVE-7373: Hive should not remove trailing zeros for decimal numbers

2014-08-15 Thread Sergio Pena

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24467/
---

(Updated Aug. 15, 2014, 6:01 p.m.)


Review request for hive.


Bugs: HIVE-7373
https://issues.apache.org/jira/browse/HIVE-7373


Repository: hive-git


Description
---

Removes trim() call from HiveDecimal normalize/enforcePrecisionScale methods. 
This change affects the Decimal128 getHiveDecimalString() method; so a new 
'actualScale' variable is used that stores the actual scale of a value passed 
to Decimal128.

The rest of the changes are added to fix decimal query tests to match the new 
HiveDecimal value.


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/common/type/Decimal128.java d4cc32d 
  common/src/java/org/apache/hadoop/hive/common/type/HiveDecimal.java ad09015 
  common/src/test/org/apache/hadoop/hive/common/type/TestDecimal128.java 46236a5 
  common/src/test/org/apache/hadoop/hive/common/type/TestHiveDecimal.java 1384a45 
  data/files/kv10.txt PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/RecordReaderImpl.java f5023bb 
  ql/src/test/org/apache/hadoop/hive/ql/exec/vector/expressions/TestVectorTypeCasts.java 2a871c5 
  ql/src/test/org/apache/hadoop/hive/ql/io/sarg/TestSearchArgumentImpl.java b1524f7 
  ql/src/test/org/apache/hadoop/hive/ql/udf/generic/TestGenericUDFOPDivide.java 4c5b3a5 
  ql/src/test/queries/clientpositive/decimal_trailing.q PRE-CREATION 
  ql/src/test/queries/clientpositive/literal_decimal.q 08b21dc 
  ql/src/test/results/clientpositive/avro_decimal.q.out 1868de3 
  ql/src/test/results/clientpositive/avro_decimal_native.q.out bc87a7d 
  ql/src/test/results/clientpositive/char_pad_convert.q.out 1f81426 
  ql/src/test/results/clientpositive/decimal_2.q.out 794bad0 
  ql/src/test/results/clientpositive/decimal_3.q.out 524fa62 
  ql/src/test/results/clientpositive/decimal_4.q.out 7444e83 
  ql/src/test/results/clientpositive/decimal_5.q.out 52dae22 
  ql/src/test/results/clientpositive/decimal_6.q.out 4338b52 
  ql/src/test/results/clientpositive/decimal_precision.q.out ea08b73 
  ql/src/test/results/clientpositive/decimal_trailing.q.out PRE-CREATION 
  ql/src/test/results/clientpositive/decimal_udf.q.out 02a0caa 
  ql/src/test/results/clientpositive/literal_decimal.q.out 2f2df6a 
  ql/src/test/results/clientpositive/orc_predicate_pushdown.q.out 890cb2c 
  ql/src/test/results/clientpositive/parquet_decimal.q.out b2d542f 
  ql/src/test/results/clientpositive/parquet_decimal1.q.out 9ff0950 
  ql/src/test/results/clientpositive/serde_regex.q.out e231a09 
  ql/src/test/results/clientpositive/tez/mapjoin_decimal.q.out 9abaa46 
  ql/src/test/results/clientpositive/tez/vector_data_types.q.out 4954825 
  ql/src/test/results/clientpositive/tez/vector_decimal_aggregate.q.out 437e830 
  ql/src/test/results/clientpositive/udf_case.q.out 6c186bd 
  ql/src/test/results/clientpositive/udf_when.q.out cbb1210 
  ql/src/test/results/clientpositive/vector_between_in.q.out bbd23d2 
  ql/src/test/results/clientpositive/vector_data_types.q.out 007f4e8 
  ql/src/test/results/clientpositive/vector_decimal_aggregate.q.out 2c4d552 
  ql/src/test/results/clientpositive/vector_decimal_cast.q.out a508732 
  ql/src/test/results/clientpositive/vector_decimal_expressions.q.out 094eb8e 
  ql/src/test/results/clientpositive/vector_decimal_mapjoin.q.out 3327c90 
  ql/src/test/results/clientpositive/vector_decimal_math_funcs.q.out d60d855 
  ql/src/test/results/clientpositive/windowing_decimal.q.out 88d11af 
  ql/src/test/results/clientpositive/windowing_navfn.q.out 95d7942 
  ql/src/test/results/clientpositive/windowing_rank.q.out 9976fdb 
  serde/src/java/org/apache/hadoop/hive/serde2/binarysortable/BinarySortableSerDe.java 523ad7d 

Diff: https://reviews.apache.org/r/24467/diff/


Testing
---


Thanks,

Sergio Pena



[jira] [Commented] (HIVE-7373) Hive should not remove trailing zeros for decimal numbers

2014-08-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-7373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14098847#comment-14098847
 ] 

Sergio Peña commented on HIVE-7373:
---

[~brocknoland] I thought I had fixed that issue with the .4 patch.
I attached the new one; I had to check out the two files again, and then do 
the changes manually instead, with -Dtest.output.overwrite=true.

 Hive should not remove trailing zeros for decimal numbers
 -

 Key: HIVE-7373
 URL: https://issues.apache.org/jira/browse/HIVE-7373
 Project: Hive
  Issue Type: Bug
  Components: Types
Affects Versions: 0.13.0, 0.13.1
Reporter: Xuefu Zhang
Assignee: Sergio Peña
 Attachments: HIVE-7373.1.patch, HIVE-7373.2.patch, HIVE-7373.3.patch, 
 HIVE-7373.4.patch, HIVE-7373.5.patch


 Currently Hive blindly removes trailing zeros of a decimal input number as a 
 sort of standardization. This is questionable in theory and problematic in 
 practice.
 1. In a decimal context, the number 3.140 has a different semantic meaning 
 from the number 3.14. Removing trailing zeroes loses that meaning.
 2. In an extreme case, 0.0 has (p, s) of (1, 1). Hive removes the trailing 
 zero, and the number becomes 0, which has (p, s) of (1, 0). Thus, for a 
 decimal column of (1, 1), input such as 0.0, 0.00, and so on becomes NULL 
 because the column doesn't allow a decimal number with an integer part.
 Therefore, I propose Hive preserve the trailing zeroes (up to what the scale 
 allows). With this, in the above example, 0.0, 0.00, and 0.000 will be 
 represented as 0.0 (precision=1, scale=1) internally.





HiveQA fails due to some spark qfiles issue

2014-08-15 Thread Sergey Shelukhin
For several of my patches, HiveQA failed with: Tests exited with:
IllegalArgumentException: No propertifies found in file: mainProperties for
property: spark.query.files
I tried for some time to find a commit to blame, but couldn't find one :)

-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.


[jira] [Updated] (HIVE-7737) Hive logs full exception for table not found

2014-08-15 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-7737:
---

Attachment: HIVE-7737.01.patch

try again to see if HiveQA is un-broken

 Hive logs full exception for table not found
 

 Key: HIVE-7737
 URL: https://issues.apache.org/jira/browse/HIVE-7737
 Project: Hive
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
Priority: Trivial
 Attachments: HIVE-7737.01.patch, HIVE-7737.patch


 Table not found is generally a user error; the call stack is annoying and 
 unnecessary. 





[jira] [Updated] (HIVE-7620) Hive metastore fails to start in secure mode due to java.lang.NoSuchFieldError: SASL_PROPS error

2014-08-15 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-7620:


   Resolution: Fixed
Fix Version/s: 0.14.0
   Status: Resolved  (was: Patch Available)

Patch committed to trunk. Thanks for the review Jason!


 Hive metastore fails to start in secure mode due to 
 java.lang.NoSuchFieldError: SASL_PROPS error
 --

 Key: HIVE-7620
 URL: https://issues.apache.org/jira/browse/HIVE-7620
 Project: Hive
  Issue Type: Bug
  Components: Metastore
 Environment: Hadoop 2.5-snapshot with kerberos authentication on
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.14.0

 Attachments: HIVE-7620.1.patch, HIVE-7620.2.patch, HIVE-7620.3.patch


 When Hive metastore is started in a Hadoop 2.5 cluster, it fails to start 
 with following error
 {code}
 14/07/31 17:45:58 [main]: ERROR metastore.HiveMetaStore: Metastore Thrift 
 Server threw an exception...
 java.lang.NoSuchFieldError: SASL_PROPS
   at 
 org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S.getHadoopSaslProperties(HadoopThriftAuthBridge20S.java:126)
   at 
 org.apache.hadoop.hive.metastore.MetaStoreUtils.getMetaStoreSaslProperties(MetaStoreUtils.java:1483)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:5225)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:5152)
 {code}
 Changes in HADOOP-10451 to remove SaslRpcServer.SASL_PROPS are causing this 
 error.





Re: HiveQA fails due to some spark qfiles issue

2014-08-15 Thread Szehon Ho
Yeah, I saw that too, and it should be fixed as of build 336. Brock and I
were doing some work on the spark branch tests, and I guess the wrong branch's
build properties got changed at some point? Patches in that window need to be
uploaded again though, unfortunately.


On Fri, Aug 15, 2014 at 11:14 AM, Sergey Shelukhin ser...@hortonworks.com
wrote:

 For several of my patches, HiveQA failed with: Tests exited with:
 IllegalArgumentException: No propertifies found in file: mainProperties for
 property: spark.query.files
 I tried for some time to find a commit to blame, but couldn't find one :)




[jira] [Updated] (HIVE-7169) HiveServer2 in Http Mode should have a configurable IdleMaxTime timeout

2014-08-15 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-7169:


   Resolution: Fixed
Fix Version/s: 0.14.0
   Status: Resolved  (was: Patch Available)

Patch committed to trunk. Thanks for the contribution [~hsubramaniyan]

 HiveServer2 in Http Mode should have a configurable IdleMaxTime timeout
 ---

 Key: HIVE-7169
 URL: https://issues.apache.org/jira/browse/HIVE-7169
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Fix For: 0.14.0

 Attachments: HIVE-7169.1.patch, HIVE-7169.2.patch, HIVE-7169.3.patch, 
 HIVE-7169.4.patch


 Currently, HiveServer2 uses a Jetty server to start the HTTP server. The 
 connector used for this Thrift HTTP CLI service uses the default maximum idle 
 time mentioned in 
 http://grepcode.com/file/repo1.maven.org/maven2/org.eclipse.jetty/jetty-server/7.0.0.v20091005/org/eclipse/jetty/server/AbstractConnector.java#AbstractConnector.0_maxIdleTime.
 This should be manually configurable using 
 connector.setMaxIdleTime(maxIdleTime);
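The proposed fix can be sketched as follows, using a stand-in connector class rather than the real Jetty API (the ConnectorSketch class, the helper, and the default value below are all illustrative; the actual change would call setMaxIdleTime on the Jetty connector):

```java
// Stand-in for the Jetty connector (hypothetical; real code would use the
// org.eclipse.jetty.server connector). Shows reading a configurable idle
// timeout instead of relying on Jetty's hard-coded default.
class ConnectorSketch {
    private int maxIdleTime = 200000; // illustrative default, in ms

    void setMaxIdleTime(int ms) { maxIdleTime = ms; }
    int getMaxIdleTime() { return maxIdleTime; }
}

class HttpServerSetup {
    static ConnectorSketch configure(java.util.Properties conf) {
        ConnectorSketch connector = new ConnectorSketch();
        // Property name matches the parameter this issue introduces;
        // the fallback value here is illustrative, not the real default.
        int idle = Integer.parseInt(
            conf.getProperty("hive.server2.thrift.http.max.idle.time", "1800000"));
        connector.setMaxIdleTime(idle);
        return connector;
    }
}
```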



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7704) Create tez task for fast file merging

2014-08-15 Thread Prasanth J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth J updated HIVE-7704:
-

Attachment: HIVE-7704.4.patch

Reuploading the same patch as Hive QA did not run.

 Create tez task for fast file merging
 -

 Key: HIVE-7704
 URL: https://issues.apache.org/jira/browse/HIVE-7704
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Prasanth J
Assignee: Prasanth J
 Attachments: HIVE-7704.1.patch, HIVE-7704.2.patch, HIVE-7704.3.patch, 
 HIVE-7704.4.patch, HIVE-7704.4.patch


 Currently tez falls back to an MR task for the merge file task. It will be 
 beneficial to convert the merge file tasks to tez tasks to make use of the 
 performance gains from tez. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7700) authorization api - HivePrivilegeObject for permanent function should have database name set

2014-08-15 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14098932#comment-14098932
 ] 

Jason Dere commented on HIVE-7700:
--

+1

 authorization api - HivePrivilegeObject for permanent function should have 
 database name set
 

 Key: HIVE-7700
 URL: https://issues.apache.org/jira/browse/HIVE-7700
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-7700.1.patch, HIVE-7700.2.patch, HIVE-7700.3.patch, 
 HIVE-7700.4.patch


 The HivePrivilegeObject for a permanent function should have the database name 
 set, and the function name should be without the db part.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7513) Add ROW__ID VirtualColumn

2014-08-15 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-7513:
-

Status: Open  (was: Patch Available)

 Add ROW__ID VirtualColumn
 -

 Key: HIVE-7513
 URL: https://issues.apache.org/jira/browse/HIVE-7513
 Project: Hive
  Issue Type: Sub-task
  Components: Query Processor
Affects Versions: 0.13.1
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Attachments: HIVE-7513.10.patch, HIVE-7513.11.patch, 
 HIVE-7513.12.patch, HIVE-7513.3.patch, HIVE-7513.4.patch, HIVE-7513.5.patch, 
 HIVE-7513.8.patch, HIVE-7513.9.patch, HIVE-7513.codeOnly.txt


 In order to support Update/Delete we need to read rowId from AcidInputFormat 
 and pass that along through the operator pipeline (built from the WHERE 
 clause of the SQL Statement) so that it can be written to the delta file by 
 the update/delete (sink) operators.
 The parser will add this column to the projection list to make sure it's 
 passed along.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7513) Add ROW__ID VirtualColumn

2014-08-15 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-7513:
-

Status: Patch Available  (was: Open)

 Add ROW__ID VirtualColumn
 -

 Key: HIVE-7513
 URL: https://issues.apache.org/jira/browse/HIVE-7513
 Project: Hive
  Issue Type: Sub-task
  Components: Query Processor
Affects Versions: 0.13.1
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Attachments: HIVE-7513.10.patch, HIVE-7513.11.patch, 
 HIVE-7513.12.patch, HIVE-7513.3.patch, HIVE-7513.4.patch, HIVE-7513.5.patch, 
 HIVE-7513.8.patch, HIVE-7513.9.patch, HIVE-7513.codeOnly.txt


 In order to support Update/Delete we need to read rowId from AcidInputFormat 
 and pass that along through the operator pipeline (built from the WHERE 
 clause of the SQL Statement) so that it can be written to the delta file by 
 the update/delete (sink) operators.
 The parser will add this column to the projection list to make sure it's 
 passed along.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-7741) Don't synchronize WriterImpl.addRow() when dynamic.partition is enabled

2014-08-15 Thread Mostafa Mokhtar (JIRA)
Mostafa Mokhtar created HIVE-7741:
-

 Summary: Don't synchronize WriterImpl.addRow() when 
dynamic.partition is enabled
 Key: HIVE-7741
 URL: https://issues.apache.org/jira/browse/HIVE-7741
 Project: Hive
  Issue Type: Bug
  Components: File Formats
Affects Versions: 0.13.1
 Environment: Loading into orc
Reporter: Mostafa Mokhtar
Assignee: Prasanth J
 Fix For: 0.14.0


When loading into an un-partitioned ORC table, the 
WriterImpl$StructTreeWriter.write method is synchronized.

When hive.optimize.sort.dynamic.partition is enabled, the current thread will be 
the only writer, so the synchronization is not needed.

Also, checking memory usage per row is overkill; this can be done per 1K 
rows or so.

{code}
  public void addRow(Object row) throws IOException {
    synchronized (this) {
      treeWriter.write(row);
      rowsInStripe += 1;
      if (buildIndex) {
        rowsInIndex += 1;

        if (rowsInIndex >= rowIndexStride) {
          createRowIndexEntry();
        }
      }
    }
    memoryManager.addedRow();
  }
{code}
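The proposed change can be sketched with a hypothetical stand-in (this is not the actual ORC WriterImpl: the synchronized block is dropped under a single-writer assumption and the memory accounting is amortized over batches; MEMORY_CHECK_INTERVAL and the counter fields are illustrative):

```java
// Hypothetical single-writer variant of addRow(): no per-row lock, and the
// memory manager is consulted only once per MEMORY_CHECK_INTERVAL rows.
class SingleWriterSketch {
    static final int MEMORY_CHECK_INTERVAL = 1024; // "per 1K rows or so"
    final int rowIndexStride = 10000;
    final boolean buildIndex = true;
    long rowsInStripe = 0;
    long rowsInIndex = 0;
    long rowsSinceMemoryCheck = 0;
    long memoryChecks = 0;               // stand-in for memoryManager.addedRow()

    void addRow(Object row) {
        // single-writer assumption: the synchronized block is gone
        rowsInStripe++;
        if (buildIndex && ++rowsInIndex >= rowIndexStride) {
            rowsInIndex = 0;             // stand-in for createRowIndexEntry()
        }
        if (++rowsSinceMemoryCheck >= MEMORY_CHECK_INTERVAL) {
            rowsSinceMemoryCheck = 0;
            memoryChecks++;              // amortized memory accounting
        }
    }
}
```

With 10,000 rows this performs only 9 memory checks instead of 10,000, which is where the claimed load-time saving would come from.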

This can improve ORC load performance by 7% 

{code}
Stack Trace                                                                    Sample Count  Percentage(%)
WriterImpl.addRow(Object)                                                             5,852         65.782
  WriterImpl$StructTreeWriter.write(Object)                                           5,163         58.037
  MemoryManager.addedRow()                                                              666          7.487
    MemoryManager.notifyWriters()                                                       648          7.284
      WriterImpl.checkMemory(double)                                                    645          7.25
        WriterImpl.flushStripe()                                                        643          7.228
          WriterImpl$StructTreeWriter.writeStripe(OrcProto$StripeFooter$Builder, int)   584          6.565
{code}







--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7680) Do not throw SQLException for HiveStatement getMoreResults and setEscapeProcessing(false)

2014-08-15 Thread Alexander Pivovarov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14098954#comment-14098954
 ] 

Alexander Pivovarov commented on HIVE-7680:
---

Thejas

Below are messages for create/insert/select/drop from SQL Workbench, SQuirreL 
SQL and DbVisualizer using the fixed jdbc driver.
The fixed jdbc driver always returns:
getUpdateCount=-1
getMoreResults=false


SQL Workbench build 116


create table aa5 (int id);

Table 'aa5' created

Execution time: 0.12s
-

insert into table aa5 select 1 from dual;

INSERT INTO TABLE successful

Execution time: 10.22s
-

select * from aa5;

SELECT executed successfully

Execution time: 0.13s
-

drop table aa5;

Table 'aa5' dropped

Execution time: 0.89s

=
SQuirreL SQL 3.5.3
=

create table aa5 (int id);
Query 1 of 1, Rows read: 0, Elapsed time (seconds) - Total: 0.077, SQL query: 
0.077, Reading results: 0

insert into table aa5 select 1 from dual;
Query 1 of 1, Rows read: 0, Elapsed time (seconds) - Total: 10.557, SQL query: 
10.557, Reading results: 0

select * from aa5;
Query 1 of 1, Rows read: 1, Elapsed time (seconds) - Total: 0.096, SQL query: 
0.063, Reading results: 0.033

drop table aa5;
Query 1 of 1, Rows read: 0, Elapsed time (seconds) - Total: 0.12, SQL query: 
0.12, Reading results: 0


DbVisualizer 9.1.9


create table aa5 (int id);
11:42:41  [CREATE - 0 row(s), 0.091 secs]  Command processed. No rows were 
affected
... 1 statement(s) executed, 0 row(s) affected, exec/fetch time: 0.091/0.000 
sec  [0 successful, 1 warnings, 0 errors]

insert into table aa5 select 1 from dual;
11:43:56  [INSERT - 0 row(s), 9.758 secs]  Command processed. No rows were 
affected
... 1 statement(s) executed, 0 row(s) affected, exec/fetch time: 9.758/0.000 
sec  [0 successful, 1 warnings, 0 errors]

select * from aa5;
11:44:23  [SELECT - 1 row(s), 0.069 secs]  Result set fetched
... 1 statement(s) executed, 1 row(s) affected, exec/fetch time: 0.069/0.029 
sec  [1 successful, 0 warnings, 0 errors]

select * from aa5 where 1=0
11:57:12  [SELECT - 0 row(s), 10.022 secs]  Empty result set fetched
... 1 statement(s) executed, 0 row(s) affected, exec/fetch time: 10.022/0.009 
sec  [0 successful, 1 warnings, 0 errors]

drop table aa5;
11:44:40  [DROP - 0 row(s), 0.095 secs]  Command processed. No rows were 
affected
... 1 statement(s) executed, 0 row(s) affected, exec/fetch time: 0.095/0.000 
sec  [0 successful, 1 warnings, 0 errors]

Note: DbVisualizer always increases the warnings counter on an empty result 
(e.g. for select .. where 1=0), so it's ok that it shows 1 warning on 
create/insert/drop statements



 Do not throw SQLException for HiveStatement getMoreResults and 
 setEscapeProcessing(false)
 -

 Key: HIVE-7680
 URL: https://issues.apache.org/jira/browse/HIVE-7680
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.13.1
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
Priority: Minor
 Attachments: HIVE-7680.patch


 1. Some JDBC clients call the method setEscapeProcessing(false) (e.g. SQL 
 Workbench). It looks like setEscapeProcessing(false) should do nothing, so 
 let's do nothing instead of throwing SQLException.
 2. getMoreResults is needed in case a Statement returns several ResultSets.
 Hive does not support multiple ResultSets, so this method can safely always 
 return false.
 3. getUpdateCount: currently this method always returns 0. Hive cannot tell 
 us how many rows were inserted. According to the JDBC spec it should return -1 
 if the current result is a ResultSet object or there are no more results.
 If this method returns 0, then after executing an insert statement the JDBC 
 client shows that 0 rows were inserted, which is not true.
 If this method returns -1, the JDBC client runs the insert statement and shows 
 that it was executed successfully and that no results were returned.
 I think the latter behaviour is more correct.
 4. Some methods in the Statement class should throw 
 SQLFeatureNotSupportedException if they are not supported. The current 
 implementation throws SQLException instead, which indicates a database access 
 error.
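A minimal sketch of the behavior proposed in points 2 and 3, using a simplified stand-in class (illustrative only, not the actual HiveStatement code):

```java
// Hypothetical stand-in showing the proposed return values:
// getMoreResults() is always false because Hive has no multiple result sets,
// and getUpdateCount() is always -1 because the server cannot report how many
// rows were affected.
class HiveStatementBehaviorSketch {
    boolean getMoreResults() {
        return false; // Hive never produces additional ResultSets
    }

    int getUpdateCount() {
        // per JDBC: -1 means the current result is a ResultSet object
        // or there are no more results
        return -1;
    }
}
```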



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HIVE-7738) tez select sum(decimal) from union all of decimal and null throws NPE

2014-08-15 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov reassigned HIVE-7738:
-

Assignee: Alexander Pivovarov

 tez select sum(decimal) from union all of decimal and null throws NPE
 -

 Key: HIVE-7738
 URL: https://issues.apache.org/jira/browse/HIVE-7738
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Affects Versions: 0.13.1
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
 Attachments: HIVE-7738.patch


 if you run this query using the tez engine, hive will throw an NPE
 {code}
 select sum(a) from (
   select cast(1.1 as decimal) a from dual
   union all
   select cast(null as decimal) a from dual
 ) t;
 {code}
 hive> select sum(a) from (
select cast(1.1 as decimal) a from dual
union all
select cast(null as decimal) a from dual
  ) t;
 Query ID = apivovarov_20140814200909_438385b2-4147-47bc-98a0-a01567bbb5c5
 Total jobs = 1
 Launching Job 1 out of 1
 Status: Running (application id: application_1407388228332_5616)
 Map 1: -/-Map 4: -/-  Reducer 3: 0/1  
 Map 1: 0/1Map 4: 0/1  Reducer 3: 0/1  
 Map 1: 0/1Map 4: 0/1  Reducer 3: 0/1  
 Map 1: 0/1Map 4: 1/1  Reducer 3: 0/1  
 Map 1: 0/1Map 4: 1/1  Reducer 3: 0/1  
 Map 1: 0/1Map 4: 1/1  Reducer 3: 0/1  
 Map 1: 0/1Map 4: 1/1  Reducer 3: 0/1  
 Map 1: 0/1Map 4: 1/1  Reducer 3: 0/1  
 Status: Failed
 Vertex failed, vertexName=Map 1, vertexId=vertex_1407388228332_5616_1_02, 
 diagnostics=[Task failed, taskId=task_1407388228332_5616_1_02_00, 
 diagnostics=[AttemptID:attempt_1407388228332_5616_1_02_00_0 Info:Error: 
 java.lang.RuntimeException: java.lang.RuntimeException: Map operator 
 initialization failed
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:188)
   at 
 org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:307)
   at 
 org.apache.hadoop.mapred.YarnTezDagChild$5.run(YarnTezDagChild.java:564)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
   at 
 org.apache.hadoop.mapred.YarnTezDagChild.main(YarnTezDagChild.java:553)
 Caused by: java.lang.RuntimeException: Map operator initialization failed
   at 
 org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:145)
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:164)
   ... 6 more
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableConstantHiveDecimalObjectInspector.precision(WritableConstantHiveDecimalObjectInspector.java:61)
   at 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDAFSum$GenericUDAFSumHiveDecimal.init(GenericUDAFSum.java:106)
   at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.initializeOp(GroupByOperator.java:362)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:460)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:416)
   at 
 org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:67)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:460)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:416)
   at 
 org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:67)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:460)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:416)
   at 
 org.apache.hadoop.hive.ql.exec.TableScanOperator.initializeOp(TableScanOperator.java:189)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at 
 org.apache.hadoop.hive.ql.exec.MapOperator.initializeOp(MapOperator.java:425)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at 
 org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:121)
   ... 7 more
 Container released by application, 
 AttemptID:attempt_1407388228332_5616_1_02_00_1 Info:Error: 
 java.lang.RuntimeException: java.lang.RuntimeException: Map operator 
 initialization failed
   at 
 
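One possible shape of a null-safe fix, sketched with a hypothetical stand-in class (the real change would live in WritableConstantHiveDecimalObjectInspector; all names and the fallback logic below are illustrative, and BigDecimal stands in for HiveDecimal):

```java
// Hypothetical sketch: guard the precision() call that NPEs in the stack
// trace above when the constant is null (e.g. cast(null as decimal)), by
// falling back to the declared type's precision.
class ConstantDecimalInspectorSketch {
    private final java.math.BigDecimal value; // may be null for a null constant
    private final int typePrecision;          // precision of the declared type

    ConstantDecimalInspectorSketch(java.math.BigDecimal value, int typePrecision) {
        this.value = value;
        this.typePrecision = typePrecision;
    }

    int precision() {
        // null-safe: previously value.precision() threw NullPointerException
        return value == null ? typePrecision : value.precision();
    }
}
```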

[jira] [Updated] (HIVE-7169) HiveServer2 in Http Mode should have a configurable IdleMaxTime timeout

2014-08-15 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-7169:
-

Labels: TODOC14  (was: )

 HiveServer2 in Http Mode should have a configurable IdleMaxTime timeout
 ---

 Key: HIVE-7169
 URL: https://issues.apache.org/jira/browse/HIVE-7169
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
  Labels: TODOC14
 Fix For: 0.14.0

 Attachments: HIVE-7169.1.patch, HIVE-7169.2.patch, HIVE-7169.3.patch, 
 HIVE-7169.4.patch


 Currently, HiveServer2 uses a Jetty server to start the HTTP server. The 
 connector used for this Thrift HTTP CLI service uses the default maximum idle 
 time mentioned in 
 http://grepcode.com/file/repo1.maven.org/maven2/org.eclipse.jetty/jetty-server/7.0.0.v20091005/org/eclipse/jetty/server/AbstractConnector.java#AbstractConnector.0_maxIdleTime.
 This should be manually configurable using 
 connector.setMaxIdleTime(maxIdleTime);



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7169) HiveServer2 in Http Mode should have a configurable IdleMaxTime timeout

2014-08-15 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14098971#comment-14098971
 ] 

Lefty Leverenz commented on HIVE-7169:
--

This adds configuration parameter *hive.server2.thrift.http.max.idle.time* so 
it needs to be documented in the wiki by the time 0.14.0 is released.

* [Configuration Properties -- HiveServer2 | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-HiveServer2]

 HiveServer2 in Http Mode should have a configurable IdleMaxTime timeout
 ---

 Key: HIVE-7169
 URL: https://issues.apache.org/jira/browse/HIVE-7169
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
  Labels: TODOC14
 Fix For: 0.14.0

 Attachments: HIVE-7169.1.patch, HIVE-7169.2.patch, HIVE-7169.3.patch, 
 HIVE-7169.4.patch


 Currently, HiveServer2 uses a Jetty server to start the HTTP server. The 
 connector used for this Thrift HTTP CLI service uses the default maximum idle 
 time mentioned in 
 http://grepcode.com/file/repo1.maven.org/maven2/org.eclipse.jetty/jetty-server/7.0.0.v20091005/org/eclipse/jetty/server/AbstractConnector.java#AbstractConnector.0_maxIdleTime.
 This should be manually configurable using 
 connector.setMaxIdleTime(maxIdleTime);



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7738) tez select sum(decimal) from union all of decimal and null throws NPE

2014-08-15 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14098974#comment-14098974
 ] 

Gopal V commented on HIVE-7738:
---

[~apivovarov]: can you re-attach the same patch for the QA to run?

 tez select sum(decimal) from union all of decimal and null throws NPE
 -

 Key: HIVE-7738
 URL: https://issues.apache.org/jira/browse/HIVE-7738
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Affects Versions: 0.13.1
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
 Attachments: HIVE-7738.patch


 if you run this query using the tez engine, hive will throw an NPE
 {code}
 select sum(a) from (
   select cast(1.1 as decimal) a from dual
   union all
   select cast(null as decimal) a from dual
 ) t;
 {code}
 hive> select sum(a) from (
select cast(1.1 as decimal) a from dual
union all
select cast(null as decimal) a from dual
  ) t;
 Query ID = apivovarov_20140814200909_438385b2-4147-47bc-98a0-a01567bbb5c5
 Total jobs = 1
 Launching Job 1 out of 1
 Status: Running (application id: application_1407388228332_5616)
 Map 1: -/-Map 4: -/-  Reducer 3: 0/1  
 Map 1: 0/1Map 4: 0/1  Reducer 3: 0/1  
 Map 1: 0/1Map 4: 0/1  Reducer 3: 0/1  
 Map 1: 0/1Map 4: 1/1  Reducer 3: 0/1  
 Map 1: 0/1Map 4: 1/1  Reducer 3: 0/1  
 Map 1: 0/1Map 4: 1/1  Reducer 3: 0/1  
 Map 1: 0/1Map 4: 1/1  Reducer 3: 0/1  
 Map 1: 0/1Map 4: 1/1  Reducer 3: 0/1  
 Status: Failed
 Vertex failed, vertexName=Map 1, vertexId=vertex_1407388228332_5616_1_02, 
 diagnostics=[Task failed, taskId=task_1407388228332_5616_1_02_00, 
 diagnostics=[AttemptID:attempt_1407388228332_5616_1_02_00_0 Info:Error: 
 java.lang.RuntimeException: java.lang.RuntimeException: Map operator 
 initialization failed
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:188)
   at 
 org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:307)
   at 
 org.apache.hadoop.mapred.YarnTezDagChild$5.run(YarnTezDagChild.java:564)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
   at 
 org.apache.hadoop.mapred.YarnTezDagChild.main(YarnTezDagChild.java:553)
 Caused by: java.lang.RuntimeException: Map operator initialization failed
   at 
 org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:145)
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:164)
   ... 6 more
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableConstantHiveDecimalObjectInspector.precision(WritableConstantHiveDecimalObjectInspector.java:61)
   at 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDAFSum$GenericUDAFSumHiveDecimal.init(GenericUDAFSum.java:106)
   at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.initializeOp(GroupByOperator.java:362)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:460)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:416)
   at 
 org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:67)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:460)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:416)
   at 
 org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:67)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:460)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:416)
   at 
 org.apache.hadoop.hive.ql.exec.TableScanOperator.initializeOp(TableScanOperator.java:189)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at 
 org.apache.hadoop.hive.ql.exec.MapOperator.initializeOp(MapOperator.java:425)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at 
 org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:121)
   ... 7 more
 Container released by application, 
 AttemptID:attempt_1407388228332_5616_1_02_00_1 Info:Error: 
 java.lang.RuntimeException: java.lang.RuntimeException: Map operator 
 initialization failed
   at 
 

[jira] [Commented] (HIVE-7373) Hive should not remove trailing zeros for decimal numbers

2014-08-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14098982#comment-14098982
 ] 

Hive QA commented on HIVE-7373:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12662102/HIVE-7373.5.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5810 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_decimal_trailing
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/338/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/338/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-338/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12662102

 Hive should not remove trailing zeros for decimal numbers
 -

 Key: HIVE-7373
 URL: https://issues.apache.org/jira/browse/HIVE-7373
 Project: Hive
  Issue Type: Bug
  Components: Types
Affects Versions: 0.13.0, 0.13.1
Reporter: Xuefu Zhang
Assignee: Sergio Peña
 Attachments: HIVE-7373.1.patch, HIVE-7373.2.patch, HIVE-7373.3.patch, 
 HIVE-7373.4.patch, HIVE-7373.5.patch


 Currently Hive blindly removes trailing zeros of a decimal input number as a 
 sort of standardization. This is questionable in theory and problematic in 
 practice.
 1. In decimal context, the number 3.140 has a different semantic meaning from 
 the number 3.14. Removing trailing zeroes makes that meaning lost.
 2. In an extreme case, 0.0 has (p, s) of (1, 1). Hive removes trailing zeros, 
 and then the number becomes 0, which has (p, s) of (1, 0). Thus, for a 
 decimal column of (1, 1), input such as 0.0, 0.00, and so on becomes NULL 
 because the column doesn't allow a decimal number with an integer part.
 Therefore, I propose Hive preserve the trailing zeroes (up to what the scale 
 allows). With this, in the above example, 0.0, 0.00, and 0. will be 
 represented as 0.0 (precision=1, scale=1) internally.
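The precision/scale loss can be illustrated with java.math.BigDecimal, which shares the (precision, scale) semantics described above (this is an illustration only, not Hive's HiveDecimal code):

```java
import java.math.BigDecimal;

// Stripping trailing zeros changes a value's (precision, scale) pair, which
// is exactly the information the issue wants Hive to preserve.
class TrailingZeros {
    static int[] precisionScale(BigDecimal d) {
        return new int[] { d.precision(), d.scale() };
    }

    public static void main(String[] args) {
        BigDecimal d = new BigDecimal("3.140");       // (precision=4, scale=3)
        BigDecimal stripped = d.stripTrailingZeros(); // (precision=3, scale=2)
        System.out.println(java.util.Arrays.toString(precisionScale(d)));
        System.out.println(java.util.Arrays.toString(precisionScale(stripped)));
    }
}
```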



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7738) tez select sum(decimal) from union all of decimal and null throws NPE

2014-08-15 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-7738:
--

Attachment: HIVE-7738.patch

for QA

 tez select sum(decimal) from union all of decimal and null throws NPE
 -

 Key: HIVE-7738
 URL: https://issues.apache.org/jira/browse/HIVE-7738
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Affects Versions: 0.13.1
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
 Attachments: HIVE-7738.patch, HIVE-7738.patch


 if you run this query using the tez engine, hive will throw an NPE
 {code}
 select sum(a) from (
   select cast(1.1 as decimal) a from dual
   union all
   select cast(null as decimal) a from dual
 ) t;
 {code}
 hive> select sum(a) from (
select cast(1.1 as decimal) a from dual
union all
select cast(null as decimal) a from dual
  ) t;
 Query ID = apivovarov_20140814200909_438385b2-4147-47bc-98a0-a01567bbb5c5
 Total jobs = 1
 Launching Job 1 out of 1
 Status: Running (application id: application_1407388228332_5616)
 Map 1: -/-Map 4: -/-  Reducer 3: 0/1  
 Map 1: 0/1Map 4: 0/1  Reducer 3: 0/1  
 Map 1: 0/1Map 4: 0/1  Reducer 3: 0/1  
 Map 1: 0/1Map 4: 1/1  Reducer 3: 0/1  
 Map 1: 0/1Map 4: 1/1  Reducer 3: 0/1  
 Map 1: 0/1Map 4: 1/1  Reducer 3: 0/1  
 Map 1: 0/1Map 4: 1/1  Reducer 3: 0/1  
 Map 1: 0/1Map 4: 1/1  Reducer 3: 0/1  
 Status: Failed
 Vertex failed, vertexName=Map 1, vertexId=vertex_1407388228332_5616_1_02, 
 diagnostics=[Task failed, taskId=task_1407388228332_5616_1_02_00, 
 diagnostics=[AttemptID:attempt_1407388228332_5616_1_02_00_0 Info:Error: 
 java.lang.RuntimeException: java.lang.RuntimeException: Map operator 
 initialization failed
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:188)
   at 
 org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:307)
   at 
 org.apache.hadoop.mapred.YarnTezDagChild$5.run(YarnTezDagChild.java:564)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
   at 
 org.apache.hadoop.mapred.YarnTezDagChild.main(YarnTezDagChild.java:553)
 Caused by: java.lang.RuntimeException: Map operator initialization failed
   at 
 org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:145)
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:164)
   ... 6 more
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableConstantHiveDecimalObjectInspector.precision(WritableConstantHiveDecimalObjectInspector.java:61)
   at 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDAFSum$GenericUDAFSumHiveDecimal.init(GenericUDAFSum.java:106)
   at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.initializeOp(GroupByOperator.java:362)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:460)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:416)
   at 
 org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:67)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:460)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:416)
   at 
 org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:67)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:460)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:416)
   at 
 org.apache.hadoop.hive.ql.exec.TableScanOperator.initializeOp(TableScanOperator.java:189)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at 
 org.apache.hadoop.hive.ql.exec.MapOperator.initializeOp(MapOperator.java:425)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
   at 
 org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:121)
   ... 7 more
 Container released by application, 
 AttemptID:attempt_1407388228332_5616_1_02_00_1 Info:Error: 
 java.lang.RuntimeException: java.lang.RuntimeException: Map operator 
 initialization failed
   at 
 

[jira] [Updated] (HIVE-7617) optimize bytes mapjoin hash table read path wrt serialization, at least for common cases

2014-08-15 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-7617:
---

Attachment: HIVE-7617.04.patch

same patch, HiveQA was broken yday

 optimize bytes mapjoin hash table read path wrt serialization, at least for 
 common cases
 

 Key: HIVE-7617
 URL: https://issues.apache.org/jira/browse/HIVE-7617
 Project: Hive
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-7617.01.patch, HIVE-7617.02.patch, 
 HIVE-7617.03.patch, HIVE-7617.04.patch, HIVE-7617.patch, 
 HIVE-7617.prelim.patch, hashmap-wb-fixes.png


 BytesBytes hash table stores keys in a byte array for compact 
 representation; however, that means the straightforward implementation of 
 lookups serializes lookup keys to byte arrays, which is relatively expensive.
 We can either shortcut hashcode and compare for common types on the read path 
 (integral types, which would cover most real-world keys), or specialize the 
 hashtable and from BytesBytes... create LongBytes, StringBytes, or whatever. 
 The first option seems simpler for now.
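To make the read-path shortcut concrete, here is a minimal, hypothetical sketch (not the actual Hive hashtable code): if the hash of a long key is computed byte-for-byte the same way as the hash of its serialized form, a lookup can probe the table without ever allocating a byte[].

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of the "shortcut hashcode for integral keys" idea.
public class KeyHashShortcut {
    // Hash over a byte[] -- the slow path, which requires serializing the key first.
    static int hashBytes(byte[] b) {
        int h = 1;
        for (byte x : b) h = 31 * h + x;
        return h;
    }

    // Same hash computed straight from the long, byte-for-byte equivalent to
    // serializing big-endian and then calling hashBytes -- the fast path.
    static int hashLong(long v) {
        int h = 1;
        for (int shift = 56; shift >= 0; shift -= 8) {
            h = 31 * h + (byte) (v >>> shift);
        }
        return h;
    }

    public static void main(String[] args) {
        long key = 1234567890123L;
        byte[] serialized = ByteBuffer.allocate(8).putLong(key).array();
        // Both paths must agree, otherwise the probe would miss the bucket.
        if (hashBytes(serialized) != hashLong(key)) throw new AssertionError();
        System.out.println("fast-path hash matches serialized hash");
    }
}
```

The same trick extends to the compare step: the stored key bytes can be compared against the primitive value directly, so equal-length integral keys never touch a serializer on lookup.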



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7696) small changes to mapjoin hashtable

2014-08-15 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-7696:
---

Attachment: HIVE-7696.02.patch

Same patch; HiveQA was broken yesterday.

 small changes to mapjoin hashtable
 --

 Key: HIVE-7696
 URL: https://issues.apache.org/jira/browse/HIVE-7696
 Project: Hive
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-7696.01.patch, HIVE-7696.02.patch, HIVE-7696.patch


 Parts of HIVE-7617 patch that are not related to the core issue, based on 
 some profiling by [~mmokhtar]



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 24467: HIVE-7373: Hive should not remove trailing zeros for decimal numbers

2014-08-15 Thread Sergio Pena

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24467/
---

(Updated Aug. 15, 2014, 8:26 p.m.)


Review request for hive.


Changes
---

Fixed decimal_trailing.q

It started failing after I did a rebase on origin/trunk


Bugs: HIVE-7373
https://issues.apache.org/jira/browse/HIVE-7373


Repository: hive-git


Description
---

Removes trim() call from HiveDecimal normalize/enforcePrecisionScale methods. 
This change affects the Decimal128 getHiveDecimalString() method; so a new 
'actualScale' variable is used that stores the actual scale of a value passed 
to Decimal128.

The rest of the changes are added to fix decimal query tests to match the new 
HiveDecimal value.


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/common/type/Decimal128.java d4cc32d 
  common/src/java/org/apache/hadoop/hive/common/type/HiveDecimal.java ad09015 
  common/src/test/org/apache/hadoop/hive/common/type/TestDecimal128.java 
46236a5 
  common/src/test/org/apache/hadoop/hive/common/type/TestHiveDecimal.java 
1384a45 
  data/files/kv10.txt PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/RecordReaderImpl.java f5023bb 
  
ql/src/test/org/apache/hadoop/hive/ql/exec/vector/expressions/TestVectorTypeCasts.java
 2a871c5 
  ql/src/test/org/apache/hadoop/hive/ql/io/sarg/TestSearchArgumentImpl.java 
b1524f7 
  ql/src/test/org/apache/hadoop/hive/ql/udf/generic/TestGenericUDFOPDivide.java 
4c5b3a5 
  ql/src/test/queries/clientpositive/decimal_trailing.q PRE-CREATION 
  ql/src/test/queries/clientpositive/literal_decimal.q 08b21dc 
  ql/src/test/results/clientpositive/avro_decimal.q.out 88268ce 
  ql/src/test/results/clientpositive/avro_decimal_native.q.out c8ae0fb 
  ql/src/test/results/clientpositive/char_pad_convert.q.out 26102e4 
  ql/src/test/results/clientpositive/decimal_2.q.out 934590c 
  ql/src/test/results/clientpositive/decimal_3.q.out 8e9a30a 
  ql/src/test/results/clientpositive/decimal_4.q.out 50662af 
  ql/src/test/results/clientpositive/decimal_5.q.out 0f24b8a 
  ql/src/test/results/clientpositive/decimal_6.q.out c0cad1f 
  ql/src/test/results/clientpositive/decimal_precision.q.out f3f2cbc 
  ql/src/test/results/clientpositive/decimal_trailing.q.out PRE-CREATION 
  ql/src/test/results/clientpositive/decimal_udf.q.out 1d5fee9 
  ql/src/test/results/clientpositive/literal_decimal.q.out 2f2df6a 
  ql/src/test/results/clientpositive/orc_predicate_pushdown.q.out f25b442 
  ql/src/test/results/clientpositive/parquet_decimal.q.out cd87b92 
  ql/src/test/results/clientpositive/parquet_decimal1.q.out bd146f8 
  ql/src/test/results/clientpositive/serde_regex.q.out 65e7dec 
  ql/src/test/results/clientpositive/tez/mapjoin_decimal.q.out 07529b8 
  ql/src/test/results/clientpositive/tez/vector_data_types.q.out f577e13 
  ql/src/test/results/clientpositive/tez/vector_decimal_aggregate.q.out 437e830 
  ql/src/test/results/clientpositive/udf_case.q.out 6c186bd 
  ql/src/test/results/clientpositive/udf_when.q.out cbb1210 
  ql/src/test/results/clientpositive/vector_between_in.q.out bbd23d2 
  ql/src/test/results/clientpositive/vector_data_types.q.out a1183ad 
  ql/src/test/results/clientpositive/vector_decimal_aggregate.q.out 2c4d552 
  ql/src/test/results/clientpositive/vector_decimal_cast.q.out a508732 
  ql/src/test/results/clientpositive/vector_decimal_expressions.q.out 094eb8e 
  ql/src/test/results/clientpositive/vector_decimal_mapjoin.q.out 3327c90 
  ql/src/test/results/clientpositive/vector_decimal_math_funcs.q.out d60d855 
  ql/src/test/results/clientpositive/windowing_decimal.q.out 08dd6ab 
  ql/src/test/results/clientpositive/windowing_navfn.q.out f2f2cb4 
  ql/src/test/results/clientpositive/windowing_rank.q.out 6a74a8e 
  
serde/src/java/org/apache/hadoop/hive/serde2/binarysortable/BinarySortableSerDe.java
 523ad7d 

Diff: https://reviews.apache.org/r/24467/diff/


Testing
---


Thanks,

Sergio Pena



Re: Review Request 24713: HIVE-7735 : Implement Char, Varchar in ParquetSerDe

2014-08-15 Thread Mohit Sabharwal


 On Aug. 15, 2014, 8:21 a.m., Szehon Ho wrote:
  ql/src/java/org/apache/hadoop/hive/ql/metadata/VirtualColumn.java, line 152
  https://reviews.apache.org/r/24713/diff/1/?file=660935#file660935line152
 
  Can we put VCols in a set for more efficiency, and also can we use 
  Guava's Iterables to make this logic cleaner?

Changed to a set. However, I couldn't really see a way to make the logic cleaner 
using Iterables (like removeAll with a predicate), since we need the index from 
one list to remove the element in the other.
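The constraint being discussed (two index-aligned parallel lists, where membership in a set drives removal from both) can be sketched as follows; the names below are invented for illustration, not the actual VirtualColumn code:

```java
import java.util.*;

// Hypothetical sketch: column names and their types live in parallel lists,
// so an entry flagged by the set must be removed from both lists by index.
public class ParallelListFilter {
    static void removeVirtual(List<String> names, List<String> types, Set<String> virtual) {
        // Iterate backwards so removals don't shift the indices still to visit.
        for (int i = names.size() - 1; i >= 0; i--) {
            if (virtual.contains(names.get(i))) {
                names.remove(i);
                types.remove(i); // same index keeps the lists aligned
            }
        }
    }

    public static void main(String[] args) {
        List<String> names = new ArrayList<>(Arrays.asList("id", "BLOCK__OFFSET__INSIDE__FILE", "name"));
        List<String> types = new ArrayList<>(Arrays.asList("int", "bigint", "string"));
        removeVirtual(names, types, new HashSet<>(Collections.singleton("BLOCK__OFFSET__INSIDE__FILE")));
        System.out.println(names + " " + types); // [id, name] [int, string]
    }
}
```

A predicate-based removeAll on one list would lose the index needed to update the other, which is the difficulty mentioned above.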


- Mohit


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24713/#review50710
---


On Aug. 14, 2014, 10:53 p.m., Mohit Sabharwal wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/24713/
 ---
 
 (Updated Aug. 14, 2014, 10:53 p.m.)
 
 
 Review request for hive.
 
 
 Bugs: HIVE-7735
 https://issues.apache.org/jira/browse/HIVE-7735
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 HIVE-7735 : Implement Char, Varchar in ParquetSerDe
 
 - Since string, char and varchar are all represented as the same Parquet
 type (primitive type binary, original type UTF8), this patch plumbs the
 Hive column types into ETypeConverter to distinguish between the three.
 
 - Removes decimal-related dead code in ArrayWritableObjectInspector
 (decimal is supported in the Parquet SerDe).
 
 
 Diffs
 -
 
   data/files/parquet_types.txt 9d81c3c3130cb94ae2bc308d511b0e24a60d4b8e 
   
 ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ArrayWritableGroupConverter.java
  582a5dfdaccaa25d46bfb515248eeb4bb84bedc5 
   
 ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/DataWritableGroupConverter.java
  0e310fbfb748d5409ff3c0d8cd8327bec9988ecf 
   
 ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/DataWritableRecordConverter.java
  7762afea4dda8cb4be4756eef43abec566ea8444 
   
 ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java 
 67ce15187a33d58fda7ff5b629339bd89d0e5e54 
   
 ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveGroupConverter.java
  524a2937e39a4821a856c8e25b14633ade89ea49 
   
 ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveSchemaConverter.java
  99901f0f57328db6fb2a260f7b7d76ded6f39558 
   
 ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java
  d6be4bdfc1502cf79c184726d88eb0bd94fb2b02 
   
 ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ArrayWritableObjectInspector.java
  47bf69ce7cb6f474f9f48dd693a7915475a1d9cb 
   
 ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveSerDe.java 
 e3e327c7b657cdd397dd2b4dddf40187c65ce901 
   ql/src/java/org/apache/hadoop/hive/ql/metadata/VirtualColumn.java 
 0637d46f2f7162c8d617c761e817dcf396fc94fe 
   
 ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestHiveSchemaConverter.java 
 b87cf7449679a9b6da997010056e388fb3de9945 
   ql/src/test/queries/clientnegative/parquet_char.q 
 745a7867264e321c079d8146f60d14ae186bbc29 
   ql/src/test/queries/clientnegative/parquet_varchar.q 
 55825f76dc240c54ef451ceec12adee23f12b36c 
   ql/src/test/queries/clientpositive/parquet_types.q 
 cb0dcfdf2d637854a84b165f8565fcb683617696 
   ql/src/test/results/clientnegative/parquet_char.q.out 
 eeaf33b3cca7ccc116fcec4bf11786f22d59c27f 
   ql/src/test/results/clientnegative/parquet_timestamp.q.out 
 00973b7e1f6360ce830a8baa4b959491ccc87a9b 
   ql/src/test/results/clientnegative/parquet_varchar.q.out 
 c03a5b6bc991f12db66b7779c37b86f7a461ee1b 
   ql/src/test/results/clientpositive/parquet_types.q.out 
 dc6dc73479a8df3cd36bebfc8b5919893be33bcd 
   serde/src/java/org/apache/hadoop/hive/serde2/Deserializer.java 
 ade3b5f081eb71e5cf4e639aff8bff6447d68dfc 
   serde/src/java/org/apache/hadoop/hive/serde2/typeinfo/TypeInfo.java 
 e7f3f4837ab253a825a7210f56f595b2403e7385 
 
 Diff: https://reviews.apache.org/r/24713/diff/
 
 
 Testing
 ---
 
 - Added char, varchar types in parquet_types q-test.
 - Added unit test for char, varchar in TestHiveSchemaConverter
 - Removed char, varchar negative q-test files.
 
 
 Thanks,
 
 Mohit Sabharwal
 




[jira] [Updated] (HIVE-7373) Hive should not remove trailing zeros for decimal numbers

2014-08-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-7373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-7373:
--

Status: Patch Available  (was: Open)

 Hive should not remove trailing zeros for decimal numbers
 -

 Key: HIVE-7373
 URL: https://issues.apache.org/jira/browse/HIVE-7373
 Project: Hive
  Issue Type: Bug
  Components: Types
Affects Versions: 0.13.1, 0.13.0
Reporter: Xuefu Zhang
Assignee: Sergio Peña
 Attachments: HIVE-7373.1.patch, HIVE-7373.2.patch, HIVE-7373.3.patch, 
 HIVE-7373.4.patch, HIVE-7373.5.patch, HIVE-7373.6.patch


 Currently Hive blindly removes trailing zeros of a decimal input number as a 
 sort of standardization. This is questionable in theory and problematic in 
 practice.
 1. In decimal context, the number 3.14 has a different semantic meaning from 
 the number 3.140. Removing trailing zeros loses that meaning.
 2. In an extreme case, 0.0 has (p, s) of (1, 1). Hive removes trailing zeros, 
 and the number becomes 0, which has (p, s) of (1, 0). Thus, for a 
 decimal column of (1, 1), input such as 0.0, 0.00, and so on becomes NULL 
 because the column doesn't allow a decimal number with an integer part.
 Therefore, I propose Hive preserve the trailing zeros (up to what the scale 
 allows). With this, in the above example, 0.0, 0.00, and 0. will be 
 represented as 0.0 (precision=1, scale=1) internally.
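The precision/scale problem above can be reproduced with plain java.math.BigDecimal (shown purely for illustration; Hive's HiveDecimal is backed by BigDecimal): stripping trailing zeros changes the (precision, scale) pair, which is what pushes a DECIMAL(1,1) value out of its column type.

```java
import java.math.BigDecimal;

// Demonstrates why trailing-zero removal breaks a DECIMAL(1,1) column.
public class TrailingZeros {
    public static void main(String[] args) {
        BigDecimal d = new BigDecimal("0.0"); // precision 1, scale 1 -> fits DECIMAL(1,1)
        System.out.println(d.precision() + "," + d.scale()); // 1,1

        // After stripping, the value becomes plain 0: scale drops to 0 (on
        // Java 8+; earlier JDKs had a known bug leaving "0.0" unchanged),
        // so (1,0) no longer fits a DECIMAL(1,1) column -> NULL in Hive.
        BigDecimal stripped = d.stripTrailingZeros();
        System.out.println(stripped.precision() + "," + stripped.scale());

        // The semantic-meaning point: 3.140 carries more scale than 3.14.
        System.out.println(new BigDecimal("3.140").scale()); // 3
        System.out.println(new BigDecimal("3.140").stripTrailingZeros().scale()); // 2
    }
}
```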



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7373) Hive should not remove trailing zeros for decimal numbers

2014-08-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-7373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-7373:
--

Attachment: HIVE-7373.6.patch

 Hive should not remove trailing zeros for decimal numbers
 -

 Key: HIVE-7373
 URL: https://issues.apache.org/jira/browse/HIVE-7373
 Project: Hive
  Issue Type: Bug
  Components: Types
Affects Versions: 0.13.0, 0.13.1
Reporter: Xuefu Zhang
Assignee: Sergio Peña
 Attachments: HIVE-7373.1.patch, HIVE-7373.2.patch, HIVE-7373.3.patch, 
 HIVE-7373.4.patch, HIVE-7373.5.patch, HIVE-7373.6.patch


 Currently Hive blindly removes trailing zeros of a decimal input number as a 
 sort of standardization. This is questionable in theory and problematic in 
 practice.
 1. In decimal context, the number 3.14 has a different semantic meaning from 
 the number 3.140. Removing trailing zeros loses that meaning.
 2. In an extreme case, 0.0 has (p, s) of (1, 1). Hive removes trailing zeros, 
 and the number becomes 0, which has (p, s) of (1, 0). Thus, for a 
 decimal column of (1, 1), input such as 0.0, 0.00, and so on becomes NULL 
 because the column doesn't allow a decimal number with an integer part.
 Therefore, I propose Hive preserve the trailing zeros (up to what the scale 
 allows). With this, in the above example, 0.0, 0.00, and 0. will be 
 represented as 0.0 (precision=1, scale=1) internally.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7373) Hive should not remove trailing zeros for decimal numbers

2014-08-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-7373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-7373:
--

Status: Open  (was: Patch Available)

decimal_trailing.q failed after I rebased on origin/trunk 
(9be9bd7f663e421f9f50cb3b3bc054b7eb9ef647). 

I am canceling this patch to upload a new one.

 Hive should not remove trailing zeros for decimal numbers
 -

 Key: HIVE-7373
 URL: https://issues.apache.org/jira/browse/HIVE-7373
 Project: Hive
  Issue Type: Bug
  Components: Types
Affects Versions: 0.13.1, 0.13.0
Reporter: Xuefu Zhang
Assignee: Sergio Peña
 Attachments: HIVE-7373.1.patch, HIVE-7373.2.patch, HIVE-7373.3.patch, 
 HIVE-7373.4.patch, HIVE-7373.5.patch, HIVE-7373.6.patch


 Currently Hive blindly removes trailing zeros of a decimal input number as a 
 sort of standardization. This is questionable in theory and problematic in 
 practice.
 1. In decimal context, the number 3.14 has a different semantic meaning from 
 the number 3.140. Removing trailing zeros loses that meaning.
 2. In an extreme case, 0.0 has (p, s) of (1, 1). Hive removes trailing zeros, 
 and the number becomes 0, which has (p, s) of (1, 0). Thus, for a 
 decimal column of (1, 1), input such as 0.0, 0.00, and so on becomes NULL 
 because the column doesn't allow a decimal number with an integer part.
 Therefore, I propose Hive preserve the trailing zeros (up to what the scale 
 allows). With this, in the above example, 0.0, 0.00, and 0. will be 
 represented as 0.0 (precision=1, scale=1) internally.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 24713: HIVE-7735 : Implement Char, Varchar in ParquetSerDe

2014-08-15 Thread Mohit Sabharwal

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24713/
---

(Updated Aug. 15, 2014, 8:28 p.m.)


Review request for hive.


Changes
---

Incorporated feedback.


Bugs: HIVE-7735
https://issues.apache.org/jira/browse/HIVE-7735


Repository: hive-git


Description
---

HIVE-7735 : Implement Char, Varchar in ParquetSerDe

- Since string, char and varchar are all represented as the same Parquet
type (primitive type binary, original type UTF8), this patch plumbs the
Hive column types into ETypeConverter to distinguish between the three.

- Removes decimal-related dead code in ArrayWritableObjectInspector
(decimal is supported in the Parquet SerDe).
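A toy sketch of why the plumbing is needed (the helper below is invented for illustration, not the real HiveSchemaConverter API): all three string-like Hive types collapse to one Parquet type, so conversion back can only be disambiguated by carrying the Hive column type along.

```java
// Invented helper showing the Hive-to-Parquet type collapse for string types.
public class ParquetStringTypes {
    static String toParquetType(String hiveType) {
        String base = hiveType.replaceAll("\\(\\d+\\)$", ""); // drop the (n) length
        switch (base) {
            case "string":
            case "char":
            case "varchar":
                return "binary (UTF8)"; // identical for all three
            case "int":
                return "int32";
            default:
                return base;
        }
    }

    public static void main(String[] args) {
        System.out.println(toParquetType("string"));      // binary (UTF8)
        System.out.println(toParquetType("char(10)"));    // binary (UTF8)
        System.out.println(toParquetType("varchar(20)")); // binary (UTF8)
    }
}
```

Because the mapping is many-to-one, the reverse direction is ambiguous; that is why the converter needs the Hive TypeInfo, not just the Parquet schema.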


Diffs (updated)
-

  data/files/parquet_types.txt 9d81c3c3130cb94ae2bc308d511b0e24a60d4b8e 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ArrayWritableGroupConverter.java
 582a5dfdaccaa25d46bfb515248eeb4bb84bedc5 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/DataWritableGroupConverter.java
 0e310fbfb748d5409ff3c0d8cd8327bec9988ecf 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/DataWritableRecordConverter.java
 7762afea4dda8cb4be4756eef43abec566ea8444 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java 
67ce15187a33d58fda7ff5b629339bd89d0e5e54 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveGroupConverter.java
 524a2937e39a4821a856c8e25b14633ade89ea49 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveSchemaConverter.java
 99901f0f57328db6fb2a260f7b7d76ded6f39558 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java
 d6be4bdfc1502cf79c184726d88eb0bd94fb2b02 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ArrayWritableObjectInspector.java
 47bf69ce7cb6f474f9f48dd693a7915475a1d9cb 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveSerDe.java 
e3e327c7b657cdd397dd2b4dddf40187c65ce901 
  ql/src/java/org/apache/hadoop/hive/ql/metadata/VirtualColumn.java 
0637d46f2f7162c8d617c761e817dcf396fc94fe 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/Vectorizer.java 
ba4ac690ccc361e65f12220997f300067bbd0d6c 
  ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestHiveSchemaConverter.java 
b87cf7449679a9b6da997010056e388fb3de9945 
  ql/src/test/queries/clientnegative/parquet_char.q 
745a7867264e321c079d8146f60d14ae186bbc29 
  ql/src/test/queries/clientnegative/parquet_varchar.q 
55825f76dc240c54ef451ceec12adee23f12b36c 
  ql/src/test/queries/clientpositive/parquet_types.q 
cb0dcfdf2d637854a84b165f8565fcb683617696 
  ql/src/test/results/clientnegative/parquet_char.q.out 
8c9a52c63416eaeaf99cb51b9f386f886483f29c 
  ql/src/test/results/clientnegative/parquet_timestamp.q.out 
00973b7e1f6360ce830a8baa4b959491ccc87a9b 
  ql/src/test/results/clientnegative/parquet_varchar.q.out 
90f6db25960825472270532811b8a17d9774d412 
  ql/src/test/results/clientpositive/parquet_types.q.out 
3acb0520bab023238e19b728ffedc3344c7f1a06 
  serde/src/java/org/apache/hadoop/hive/serde2/Deserializer.java 
ade3b5f081eb71e5cf4e639aff8bff6447d68dfc 
  serde/src/java/org/apache/hadoop/hive/serde2/typeinfo/TypeInfo.java 
e7f3f4837ab253a825a7210f56f595b2403e7385 

Diff: https://reviews.apache.org/r/24713/diff/


Testing
---

- Added char, varchar types in parquet_types q-test.
- Added unit test for char, varchar in TestHiveSchemaConverter
- Removed char, varchar negative q-test files.


Thanks,

Mohit Sabharwal



[jira] [Updated] (HIVE-7735) Implement Char, Varchar in ParquetSerDe

2014-08-15 Thread Mohit Sabharwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohit Sabharwal updated HIVE-7735:
--

Attachment: HIVE-7735.2.patch

 Implement Char, Varchar in ParquetSerDe
 ---

 Key: HIVE-7735
 URL: https://issues.apache.org/jira/browse/HIVE-7735
 Project: Hive
  Issue Type: Sub-task
  Components: Serializers/Deserializers
Reporter: Mohit Sabharwal
Assignee: Mohit Sabharwal
  Labels: Parquet
 Attachments: HIVE-7735.1.patch, HIVE-7735.1.patch, HIVE-7735.2.patch, 
 HIVE-7735.patch


 This JIRA is to implement CHAR and VARCHAR support in Parquet SerDe.
 Both are represented in Parquet as PrimitiveType binary and OriginalType UTF8.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7735) Implement Char, Varchar in ParquetSerDe

2014-08-15 Thread Mohit Sabharwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14099086#comment-14099086
 ] 

Mohit Sabharwal commented on HIVE-7735:
---

Updated with review board feedback.

 Implement Char, Varchar in ParquetSerDe
 ---

 Key: HIVE-7735
 URL: https://issues.apache.org/jira/browse/HIVE-7735
 Project: Hive
  Issue Type: Sub-task
  Components: Serializers/Deserializers
Reporter: Mohit Sabharwal
Assignee: Mohit Sabharwal
  Labels: Parquet
 Attachments: HIVE-7735.1.patch, HIVE-7735.1.patch, HIVE-7735.2.patch, 
 HIVE-7735.patch


 This JIRA is to implement CHAR and VARCHAR support in Parquet SerDe.
 Both are represented in Parquet as PrimitiveType binary and OriginalType UTF8.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7737) Hive logs full exception for table not found

2014-08-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14099112#comment-14099112
 ] 

Hive QA commented on HIVE-7737:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12662106/HIVE-7737.01.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5808 tests executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.pig.TestOrcHCatLoader.testReadDataPrimitiveTypes
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/339/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/339/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-339/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12662106

 Hive logs full exception for table not found
 

 Key: HIVE-7737
 URL: https://issues.apache.org/jira/browse/HIVE-7737
 Project: Hive
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
Priority: Trivial
 Attachments: HIVE-7737.01.patch, HIVE-7737.patch


 Table not found is generally a user error; the call stack is annoying and 
 unnecessary. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7533) sql std auth - set authorization privileges for tables when created from hive cli

2014-08-15 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-7533:


Attachment: HIVE-7533.3.patch

HIVE-7533.3.patch - Updated patch file for newly added tests . 

 sql std auth - set authorization privileges for tables when created from hive 
 cli
 -

 Key: HIVE-7533
 URL: https://issues.apache.org/jira/browse/HIVE-7533
 Project: Hive
  Issue Type: Bug
  Components: Authorization, SQLStandardAuthorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-7533.1.patch, HIVE-7533.2.patch, HIVE-7533.3.patch


 As SQL standard authorization mode is not available from hive-cli, the 
 default permissions on a table for the table owner are not set when the 
 table is created from hive-cli.
 It should be possible to set the SQL standards based authorization as the 
 authorizer for hive-cli, which would update the configuration appropriately. 
 hive-cli data access is actually controlled by HDFS, not the authorization 
 policy. As a result, using sql std auth from hive-cli for authorization would 
 lead to a false sense of security. To avoid this, hive-cli users will have to 
 keep authorization disabled on hive-cli (in the case of sql std auth). But 
 this would affect only authorization checks, not configuration updates by the 
 authorizer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-7742) CBO: Predicate Push Down to Honor Hive Join Condition restrictions

2014-08-15 Thread Laljo John Pullokkaran (JIRA)
Laljo John Pullokkaran created HIVE-7742:


 Summary: CBO: Predicate Push Down to Honor Hive Join Condition 
restrictions
 Key: HIVE-7742
 URL: https://issues.apache.org/jira/browse/HIVE-7742
 Project: Hive
  Issue Type: Sub-task
Reporter: Laljo John Pullokkaran
Assignee: Laljo John Pullokkaran






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7742) CBO: Predicate Push Down to Honor Hive Join Condition restrictions

2014-08-15 Thread Laljo John Pullokkaran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laljo John Pullokkaran updated HIVE-7742:
-

Attachment: HIVE-7742.patch

 CBO: Predicate Push Down to Honor Hive Join Condition restrictions
 --

 Key: HIVE-7742
 URL: https://issues.apache.org/jira/browse/HIVE-7742
 Project: Hive
  Issue Type: Sub-task
Reporter: Laljo John Pullokkaran
Assignee: Laljo John Pullokkaran
 Attachments: HIVE-7742.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7742) CBO: Predicate Push Down to Honor Hive Join Condition restrictions

2014-08-15 Thread Laljo John Pullokkaran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laljo John Pullokkaran updated HIVE-7742:
-

Status: Patch Available  (was: Open)

 CBO: Predicate Push Down to Honor Hive Join Condition restrictions
 --

 Key: HIVE-7742
 URL: https://issues.apache.org/jira/browse/HIVE-7742
 Project: Hive
  Issue Type: Sub-task
Reporter: Laljo John Pullokkaran
Assignee: Laljo John Pullokkaran
 Attachments: HIVE-7742.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7656) Bring tez-branch up-to the API changes made by TEZ-1372

2014-08-15 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-7656:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)

 Bring tez-branch up-to the API changes made by TEZ-1372
 ---

 Key: HIVE-7656
 URL: https://issues.apache.org/jira/browse/HIVE-7656
 Project: Hive
  Issue Type: Sub-task
Affects Versions: tez-branch
Reporter: Gopal V
Assignee: Gopal V
 Fix For: tez-branch

 Attachments: HIVE-7656.1-tez.patch, HIVE-7656.2.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7068) Integrate AccumuloStorageHandler

2014-08-15 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14099180#comment-14099180
 ] 

Nick Dimiduk commented on HIVE-7068:


This is really cool, nice work fellas! It's a shame to see so many of the 
StorageHandler warts repeated here too, but that's how it is.

Does it make sense to try to share more code between the accumulo and hbase 
modules? Column mapping stuff looks pretty much identical to me, and maybe the 
hbase module could benefit from some of the comparator work? Nothing critical 
for this patch, but could be good for follow-on work.

I'm with [~navis] on this one: +1 for getting it committed and setting users 
loose to play!

 Integrate AccumuloStorageHandler
 

 Key: HIVE-7068
 URL: https://issues.apache.org/jira/browse/HIVE-7068
 Project: Hive
  Issue Type: New Feature
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: 0.14.0

 Attachments: HIVE-7068.1.patch, HIVE-7068.2.patch, HIVE-7068.3.patch


 [Accumulo|http://accumulo.apache.org] is a BigTable-clone which is similar to 
 HBase. Some [initial 
 work|https://github.com/bfemiano/accumulo-hive-storage-manager] has been done 
 to support querying an Accumulo table using Hive already. It is not a 
 complete solution; most notably, the current implementation lacks 
 support for INSERTs.
 I would like to polish up the AccumuloStorageHandler (presently based on 
 0.10), implement missing basic functionality and compare it to the 
 HBaseStorageHandler (to ensure that we follow the same general usage 
 patterns).
 I've also been in communication with [~bfem] (the initial author) who 
 expressed interest in working on this again. I hope to coordinate efforts 
 with him.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7068) Integrate AccumuloStorageHandler

2014-08-15 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14099196#comment-14099196
 ] 

Josh Elser commented on HIVE-7068:
--

[~ndimiduk], I agree with you completely. There's no reason that the column 
mapping stuff needs to be separated as it is now. I tried to make the 
ColumnMapping class hierarchy a bit cleaner over what was in the hbase-handler 
(it looked like there were already comments in the hbase-handler code saying 
that it would be good to clean it up in the future). I'd love to help converge 
these.

Many thanks for taking the time to look through it.

 Integrate AccumuloStorageHandler
 

 Key: HIVE-7068
 URL: https://issues.apache.org/jira/browse/HIVE-7068
 Project: Hive
  Issue Type: New Feature
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: 0.14.0

 Attachments: HIVE-7068.1.patch, HIVE-7068.2.patch, HIVE-7068.3.patch


 [Accumulo|http://accumulo.apache.org] is a BigTable-clone which is similar to 
 HBase. Some [initial 
 work|https://github.com/bfemiano/accumulo-hive-storage-manager] has been done 
 to support querying an Accumulo table using Hive already. It is not a 
 complete solution; most notably, the current implementation lacks 
 support for INSERTs.
 I would like to polish up the AccumuloStorageHandler (presently based on 
 0.10), implement missing basic functionality and compare it to the 
 HBaseStorageHandler (to ensure that we follow the same general usage 
 patterns).
 I've also been in communication with [~bfem] (the initial author) who 
 expressed interest in working on this again. I hope to coordinate efforts 
 with him.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7533) sql std auth - set authorization privileges for tables when created from hive cli

2014-08-15 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-7533:


   Resolution: Fixed
Fix Version/s: 0.14.0
   Status: Resolved  (was: Patch Available)

Patch committed to trunk. Thanks for the review Jason!


 sql std auth - set authorization privileges for tables when created from hive 
 cli
 -

 Key: HIVE-7533
 URL: https://issues.apache.org/jira/browse/HIVE-7533
 Project: Hive
  Issue Type: Bug
  Components: Authorization, SQLStandardAuthorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.14.0

 Attachments: HIVE-7533.1.patch, HIVE-7533.2.patch, HIVE-7533.3.patch


 As SQL standard authorization mode is not available from hive-cli, the 
 default permissions on a table for the table owner are not set when the 
 table is created from hive-cli.
 It should be possible to set the SQL standards based authorization as the 
 authorizer for hive-cli, which would update the configuration appropriately. 
 hive-cli data access is actually controlled by HDFS, not the authorization 
 policy. As a result, using sql std auth from hive-cli for authorization would 
 lead to a false sense of security. To avoid this, hive-cli users will have to 
 keep authorization disabled on hive-cli (in the case of sql std auth). But 
 this would affect only authorization checks, not configuration updates by the 
 authorizer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6093) table creation should fail when user does not have permissions on db

2014-08-15 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-6093:


Attachment: HIVE-6093.1.patch

 table creation should fail when user does not have permissions on db
 

 Key: HIVE-6093
 URL: https://issues.apache.org/jira/browse/HIVE-6093
 Project: Hive
  Issue Type: Bug
  Components: Authorization, HCatalog, Metastore
Affects Versions: 0.12.0, 0.13.0
Reporter: Thiruvel Thirumoolan
Assignee: Thiruvel Thirumoolan
Priority: Minor
  Labels: authorization, metastore, security
 Fix For: 0.14.0

 Attachments: HIVE-6093-1.patch, HIVE-6093.1.patch, HIVE-6093.1.patch, 
 HIVE-6093.patch


 It's possible to create a table under a database where the user does not have 
 write permission. It can be done by specifying a LOCATION where the user has 
 write access (say /tmp/foo). This should be restricted.
 HdfsAuthorizationProvider (which typically runs on client) checks the 
 database directory during table creation. But 
 StorageBasedAuthorizationProvider does not.
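The check this issue asks for can be sketched as follows. This is a local-filesystem analogy, not Hive's actual StorageBasedAuthorizationProvider API: `requireWritable` and the directory names are illustrative. The point is that write permission must be verified on the *database* directory before table creation, regardless of whether the user-supplied LOCATION (e.g. /tmp/foo) is writable.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class DbDirWriteCheck {
    // Fail table creation unless the database directory itself is writable.
    static void requireWritable(Path dbDir) throws IOException {
        if (!Files.isDirectory(dbDir) || !Files.isWritable(dbDir)) {
            throw new IOException(
                "User lacks write permission on database directory: " + dbDir);
        }
    }

    public static void main(String[] args) throws IOException {
        Path dbDir = Files.createTempDirectory("warehouse_db");
        requireWritable(dbDir);                  // passes: we own the temp dir
        // Only after the check succeeds is the table directory created.
        Path tableDir = Files.createDirectory(dbDir.resolve("my_table"));
        System.out.println(Files.exists(tableDir));
    }
}
```

A server-side authorizer would run the equivalent check against HDFS permissions in the metastore, so that a writable custom LOCATION cannot bypass it.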



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7704) Create tez task for fast file merging

2014-08-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14099221#comment-14099221
 ] 

Hive QA commented on HIVE-7704:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12662120/HIVE-7704.4.patch

{color:green}SUCCESS:{color} +1 5811 tests passed

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/341/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/341/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-341/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12662120

 Create tez task for fast file merging
 -

 Key: HIVE-7704
 URL: https://issues.apache.org/jira/browse/HIVE-7704
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Prasanth J
Assignee: Prasanth J
 Attachments: HIVE-7704.1.patch, HIVE-7704.2.patch, HIVE-7704.3.patch, 
 HIVE-7704.4.patch, HIVE-7704.4.patch


 Currently tez falls back to an MR task for the merge file task. It will be 
 beneficial to convert the merge file tasks to tez tasks to take advantage of 
 the performance gains from tez. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7696) small changes to mapjoin hashtable

2014-08-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14099229#comment-14099229
 ] 

Hive QA commented on HIVE-7696:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12662142/HIVE-7696.02.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/342/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/342/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-342/

Messages:
{noformat}
 This message was trimmed, see log for full details 
warning(200): IdentifiersParser.g:179:5: 
Decision can match input such as KW_STRUCT using multiple alternatives: 4, 6

As a result, alternative(s) 6 were disabled for that input
warning(200): IdentifiersParser.g:179:5: 
Decision can match input such as KW_ARRAY using multiple alternatives: 2, 6

As a result, alternative(s) 6 were disabled for that input
warning(200): IdentifiersParser.g:179:5: 
Decision can match input such as KW_UNIONTYPE using multiple alternatives: 5, 
6

As a result, alternative(s) 6 were disabled for that input
warning(200): IdentifiersParser.g:261:5: 
Decision can match input such as KW_NULL using multiple alternatives: 1, 8

As a result, alternative(s) 8 were disabled for that input
warning(200): IdentifiersParser.g:261:5: 
Decision can match input such as KW_TRUE using multiple alternatives: 3, 8

As a result, alternative(s) 8 were disabled for that input
warning(200): IdentifiersParser.g:261:5: 
Decision can match input such as KW_DATE StringLiteral using multiple 
alternatives: 2, 3

As a result, alternative(s) 3 were disabled for that input
warning(200): IdentifiersParser.g:261:5: 
Decision can match input such as KW_FALSE using multiple alternatives: 3, 8

As a result, alternative(s) 8 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_CLUSTER 
KW_BY using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_MAP LPAREN 
using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_INSERT 
KW_OVERWRITE using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_GROUP 
KW_BY using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_INSERT 
KW_INTO using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_LATERAL 
KW_VIEW using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_SORT KW_BY 
using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_UNION 
KW_ALL using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_ORDER 
KW_BY using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as KW_BETWEEN KW_MAP LPAREN using multiple 
alternatives: 8, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_DISTRIBUTE 
KW_BY using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:518:5: 
Decision can match input such as {AMPERSAND..BITWISEXOR, DIV..DIVIDE, 
EQUAL..EQUAL_NS, GREATERTHAN..GREATERTHANOREQUALTO, KW_AND, KW_ARRAY, 
KW_BETWEEN..KW_BOOLEAN, KW_CASE, KW_DOUBLE, KW_FLOAT, KW_IF, KW_IN, KW_INT, 
KW_LIKE, KW_MAP, KW_NOT, KW_OR, KW_REGEXP, KW_RLIKE, KW_SMALLINT, 
KW_STRING..KW_STRUCT, KW_TINYINT, KW_UNIONTYPE, KW_WHEN, 
LESSTHAN..LESSTHANOREQUALTO, MINUS..NOTEQUAL, PLUS, STAR, TILDE} using 
multiple alternatives: 1, 3

As a result, alternative(s) 3 

[jira] [Commented] (HIVE-7373) Hive should not remove trailing zeros for decimal numbers

2014-08-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14099236#comment-14099236
 ] 

Hive QA commented on HIVE-7373:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12662144/HIVE-7373.6.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/343/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/343/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-343/

Messages:
{noformat}
 This message was trimmed, see log for full details 

[jira] [Updated] (HIVE-7700) authorization api - HivePrivilegeObject for permanent function should have database name set

2014-08-15 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-7700:


Attachment: HIVE-7700.4.patch

 authorization api - HivePrivilegeObject for permanent function should have 
 database name set
 

 Key: HIVE-7700
 URL: https://issues.apache.org/jira/browse/HIVE-7700
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-7700.1.patch, HIVE-7700.2.patch, HIVE-7700.3.patch, 
 HIVE-7700.4.patch, HIVE-7700.4.patch


 The HivePrivilegeObject for a permanent function should have the database name 
 set, and the function name should not include the db part.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7738) tez select sum(decimal) from union all of decimal and null throws NPE

2014-08-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14099242#comment-14099242
 ] 

Hive QA commented on HIVE-7738:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12662139/HIVE-7738.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/344/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/344/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-344/

Messages:
{noformat}
 This message was trimmed, see log for full details 

[jira] [Commented] (HIVE-7617) optimize bytes mapjoin hash table read path wrt serialization, at least for common cases

2014-08-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14099248#comment-14099248
 ] 

Hive QA commented on HIVE-7617:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12662140/HIVE-7617.04.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/345/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/345/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-345/

Messages:
{noformat}
 This message was trimmed, see log for full details 

[jira] [Commented] (HIVE-7735) Implement Char, Varchar in ParquetSerDe

2014-08-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14099256#comment-14099256
 ] 

Hive QA commented on HIVE-7735:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12662151/HIVE-7735.2.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/346/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/346/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-346/

Messages:
{noformat}
 This message was trimmed, see log for full details 

[jira] [Commented] (HIVE-4329) HCatalog should use getHiveRecordWriter rather than getRecordWriter

2014-08-15 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14099257#comment-14099257
 ] 

Sushanth Sowmyan commented on HIVE-4329:


Hi,

I'm against the goal of this patch altogether: it effectively breaks one of the 
core reasons HCatalog exists, namely to be a generic wrapper for underlying 
mapreduce IF/OFs, for consumers that expect mapreduce IF/OFs. I apologize for 
not having spotted this jira earlier, since it seems a lot of work has gone 
into it. I understand that there is an impedance mismatch here between 
HiveOutputFormat and OutputFormat, and it is one we want to fix, but this fix 
goes in the opposite direction of the desired way of solving that impedance 
mismatch.

One of the longer term goals, for us, has been to try to evolve Hive's usage of 
StorageHandlers to a point where Hive stops using 
HiveRecordWriter/HiveOutputFormat altogether, so that there is no notion of an 
internal and external OutputFormat definition, so that third party 
mapreduce IF/OFs can directly be integrated into Hive, instead of having to 
change them to HiveOutputFormat/etc.

The primary issue discussed in this problem, that of FileRecordWriterContainer 
writing out a NullWritable, is something that's solvable, since 
FileRecordWritableContainer's key format is a WritableComparable, and if 
AvroContainerOutputFormat does not already care about the key anyway, we should 
be ignoring it. If it's simpler, I would also be in favour of a hack like the 
FileRecordWriterContainer emitting a LongWritable in that case if it detects 
it's wrapping an AvroContainerOutputFormat instead of rewiring HCatalog to make 
it based on HiveOutputFormat.

 HCatalog should use getHiveRecordWriter rather than getRecordWriter
 ---

 Key: HIVE-4329
 URL: https://issues.apache.org/jira/browse/HIVE-4329
 Project: Hive
  Issue Type: Bug
  Components: HCatalog, Serializers/Deserializers
Affects Versions: 0.14.0
 Environment: discovered in Pig, but it looks like the root cause 
 impacts all non-Hive users
Reporter: Sean Busbey
Assignee: David Chen
 Attachments: HIVE-4329.0.patch


 Attempting to write to a HCatalog defined table backed by the AvroSerde fails 
 with the following stacktrace:
 {code}
 java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be 
 cast to org.apache.hadoop.io.LongWritable
   at 
 org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat$1.write(AvroContainerOutputFormat.java:84)
   at 
 org.apache.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:253)
   at 
 org.apache.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:53)
   at 
 org.apache.hcatalog.pig.HCatBaseStorer.putNext(HCatBaseStorer.java:242)
   at org.apache.hcatalog.pig.HCatStorer.putNext(HCatStorer.java:52)
   at 
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
   at 
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
   at 
 org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:559)
   at 
 org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:85)
 {code}
 The proximal cause of this failure is that the AvroContainerOutputFormat's 
 signature mandates a LongWritable key and HCat's FileRecordWriterContainer 
 forces a NullWritable. I'm not sure of a general fix, other than redefining 
 HiveOutputFormat to mandate a WritableComparable.
 It looks like accepting WritableComparable is what's done in the other Hive 
 OutputFormats, and there's no reason AvroContainerOutputFormat couldn't also 
 be changed, since it ignores the key. That way, fixing things so that 
 FileRecordWriterContainer can always use NullWritable could be spun off into 
 a separate issue.
 The underlying cause for failure to write to AvroSerde tables is that 
 AvroContainerOutputFormat doesn't meaningfully implement getRecordWriter, so 
 fixing the above will just push the failure into the placeholder RecordWriter.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7742) CBO: Predicate Push Down to Honor Hive Join Condition restrictions

2014-08-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14099258#comment-14099258
 ] 

Hive QA commented on HIVE-7742:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12662159/HIVE-7742.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/347/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/347/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-347/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/java/jdk1.6.0_34/bin:/usr/local/apache-maven-3.0.5/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.6.0_34/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/java/jdk1.6.0_34/bin:/usr/local/apache-maven-3.0.5/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.6.0_34/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-347/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 'data/files/parquet_types.txt'
Reverted 'serde/src/java/org/apache/hadoop/hive/serde2/typeinfo/TypeInfo.java'
Reverted 'serde/src/java/org/apache/hadoop/hive/serde2/Deserializer.java'
Reverted 'ql/src/test/results/clientnegative/parquet_timestamp.q.out'
Reverted 'ql/src/test/results/clientnegative/parquet_char.q.out'
Reverted 'ql/src/test/results/clientnegative/parquet_varchar.q.out'
Reverted 'ql/src/test/results/clientpositive/parquet_types.q.out'
Reverted 
'ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestHiveSchemaConverter.java'
Reverted 'ql/src/test/queries/clientnegative/parquet_char.q'
Reverted 'ql/src/test/queries/clientnegative/parquet_varchar.q'
Reverted 'ql/src/test/queries/clientpositive/parquet_types.q'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/Vectorizer.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/metadata/VirtualColumn.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveGroupConverter.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveSchemaConverter.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/DataWritableGroupConverter.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/DataWritableRecordConverter.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ArrayWritableGroupConverter.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveSerDe.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ArrayWritableObjectInspector.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java'
++ awk '{print $2}'
++ egrep -v '^X|^Performing status on external'
++ svn status --no-ignore
+ rm -rf target datanucleus.log ant/target shims/0.20/target shims/0.20S/target 
shims/0.23/target shims/aggregator/target shims/common/target 
shims/common-secure/target metastore/target common/target common/src/gen 
serde/target ql/target
+ svn update

Fetching external item into 'hcatalog/src/test/e2e/harness'
External at revision 1618286.

At revision 1618286.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

[jira] [Updated] (HIVE-7705) there's a useless threadlocal in LBUtils that shows up in perf profiles

2014-08-15 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-7705:
---

Attachment: HIVE-7705.03.patch

Reattaching the patch because HiveQA was broken.

 there's a useless threadlocal in LBUtils that shows up in perf profiles
 ---

 Key: HIVE-7705
 URL: https://issues.apache.org/jira/browse/HIVE-7705
 Project: Hive
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-7705.02.patch, HIVE-7705.03.patch, 
 HIVE-7705.1.patch, HIVE-7705.patch


 It might be cheaper to just create a VInt every time, but it can also be 
 passed in externally.
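
The trade-off described above can be sketched in plain Java. VInt here is a simplified model of the scratch object in Hive's lazy-binary code, and the method names are hypothetical; the point is the three patterns — a ThreadLocal-cached scratch object (whose `get()` shows up in profiles), a per-call allocation, and a caller-supplied scratch object:

```java
// Simplified model of the VInt scratch object used during varint decoding.
class VInt {
    int value;
    int length;
}

class LBUtilsSketch {
    // Pattern the patch removes: a ThreadLocal cache for a tiny scratch object.
    private static final ThreadLocal<VInt> CACHED =
            ThreadLocal.withInitial(VInt::new);

    static int readWithThreadLocal(byte[] bytes, int offset) {
        VInt v = CACHED.get();          // ThreadLocal.get() cost on every call
        decode(bytes, offset, v);
        return v.value;
    }

    // Alternative 1: allocate per call; a short-lived object the JIT and GC
    // handle cheaply, often cheaper than the ThreadLocal lookup.
    static int readWithAllocation(byte[] bytes, int offset) {
        VInt v = new VInt();
        decode(bytes, offset, v);
        return v.value;
    }

    // Alternative 2: the caller passes the scratch object in and reuses it
    // across calls, avoiding both the lookup and the allocation.
    static void readInto(byte[] bytes, int offset, VInt v) {
        decode(bytes, offset, v);
    }

    // Toy single-byte decoder standing in for the real varint decoding.
    private static void decode(byte[] bytes, int offset, VInt v) {
        v.value = bytes[offset];
        v.length = 1;
    }
}
```

All three return the same result; only the lifetime of the scratch object differs, which is why the change is purely a performance refactor.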





[jira] [Commented] (HIVE-6093) table creation should fail when user does not have permissions on db

2014-08-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14099264#comment-14099264
 ] 

Hive QA commented on HIVE-6093:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12662171/HIVE-6093.1.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/349/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/349/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-349/

Messages:
{noformat}
 This message was trimmed, see log for full details 
warning(200): IdentifiersParser.g:179:5: 
Decision can match input such as KW_STRUCT using multiple alternatives: 4, 6

As a result, alternative(s) 6 were disabled for that input
warning(200): IdentifiersParser.g:179:5: 
Decision can match input such as KW_ARRAY using multiple alternatives: 2, 6

As a result, alternative(s) 6 were disabled for that input
warning(200): IdentifiersParser.g:179:5: 
Decision can match input such as KW_UNIONTYPE using multiple alternatives: 5, 
6

As a result, alternative(s) 6 were disabled for that input
warning(200): IdentifiersParser.g:261:5: 
Decision can match input such as KW_NULL using multiple alternatives: 1, 8

As a result, alternative(s) 8 were disabled for that input
warning(200): IdentifiersParser.g:261:5: 
Decision can match input such as KW_TRUE using multiple alternatives: 3, 8

As a result, alternative(s) 8 were disabled for that input
warning(200): IdentifiersParser.g:261:5: 
Decision can match input such as KW_DATE StringLiteral using multiple 
alternatives: 2, 3

As a result, alternative(s) 3 were disabled for that input
warning(200): IdentifiersParser.g:261:5: 
Decision can match input such as KW_FALSE using multiple alternatives: 3, 8

As a result, alternative(s) 8 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_CLUSTER 
KW_BY using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_MAP LPAREN 
using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_INSERT 
KW_OVERWRITE using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_GROUP 
KW_BY using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_INSERT 
KW_INTO using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_LATERAL 
KW_VIEW using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_SORT KW_BY 
using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_UNION 
KW_ALL using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_ORDER 
KW_BY using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as KW_BETWEEN KW_MAP LPAREN using multiple 
alternatives: 8, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_DISTRIBUTE 
KW_BY using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:518:5: 
Decision can match input such as {AMPERSAND..BITWISEXOR, DIV..DIVIDE, 
EQUAL..EQUAL_NS, GREATERTHAN..GREATERTHANOREQUALTO, KW_AND, KW_ARRAY, 
KW_BETWEEN..KW_BOOLEAN, KW_CASE, KW_DOUBLE, KW_FLOAT, KW_IF, KW_IN, KW_INT, 
KW_LIKE, KW_MAP, KW_NOT, KW_OR, KW_REGEXP, KW_RLIKE, KW_SMALLINT, 
KW_STRING..KW_STRUCT, KW_TINYINT, KW_UNIONTYPE, KW_WHEN, 
LESSTHAN..LESSTHANOREQUALTO, MINUS..NOTEQUAL, PLUS, STAR, TILDE} using 
multiple alternatives: 1, 3

As a result, alternative(s) 3 

[jira] [Commented] (HIVE-7700) authorization api - HivePrivilegeObject for permanent function should have database name set

2014-08-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14099268#comment-14099268
 ] 

Hive QA commented on HIVE-7700:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12662175/HIVE-7700.4.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/350/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/350/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-350/

Messages:
{noformat}
 This message was trimmed, see log for full details 
warning(200): IdentifiersParser.g:179:5: 
Decision can match input such as KW_STRUCT using multiple alternatives: 4, 6

As a result, alternative(s) 6 were disabled for that input
warning(200): IdentifiersParser.g:179:5: 
Decision can match input such as KW_ARRAY using multiple alternatives: 2, 6

As a result, alternative(s) 6 were disabled for that input
warning(200): IdentifiersParser.g:179:5: 
Decision can match input such as KW_UNIONTYPE using multiple alternatives: 5, 
6

As a result, alternative(s) 6 were disabled for that input
warning(200): IdentifiersParser.g:261:5: 
Decision can match input such as KW_NULL using multiple alternatives: 1, 8

As a result, alternative(s) 8 were disabled for that input
warning(200): IdentifiersParser.g:261:5: 
Decision can match input such as KW_TRUE using multiple alternatives: 3, 8

As a result, alternative(s) 8 were disabled for that input
warning(200): IdentifiersParser.g:261:5: 
Decision can match input such as KW_DATE StringLiteral using multiple 
alternatives: 2, 3

As a result, alternative(s) 3 were disabled for that input
warning(200): IdentifiersParser.g:261:5: 
Decision can match input such as KW_FALSE using multiple alternatives: 3, 8

As a result, alternative(s) 8 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_CLUSTER 
KW_BY using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_MAP LPAREN 
using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_INSERT 
KW_OVERWRITE using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_GROUP 
KW_BY using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_INSERT 
KW_INTO using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_LATERAL 
KW_VIEW using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_SORT KW_BY 
using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_UNION 
KW_ALL using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_ORDER 
KW_BY using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as KW_BETWEEN KW_MAP LPAREN using multiple 
alternatives: 8, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_DISTRIBUTE 
KW_BY using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:518:5: 
Decision can match input such as {AMPERSAND..BITWISEXOR, DIV..DIVIDE, 
EQUAL..EQUAL_NS, GREATERTHAN..GREATERTHANOREQUALTO, KW_AND, KW_ARRAY, 
KW_BETWEEN..KW_BOOLEAN, KW_CASE, KW_DOUBLE, KW_FLOAT, KW_IF, KW_IN, KW_INT, 
KW_LIKE, KW_MAP, KW_NOT, KW_OR, KW_REGEXP, KW_RLIKE, KW_SMALLINT, 
KW_STRING..KW_STRUCT, KW_TINYINT, KW_UNIONTYPE, KW_WHEN, 
LESSTHAN..LESSTHANOREQUALTO, MINUS..NOTEQUAL, PLUS, STAR, TILDE} using 
multiple alternatives: 1, 3

As a result, alternative(s) 3 

[jira] [Commented] (HIVE-7533) sql std auth - set authorization privileges for tables when created from hive cli

2014-08-15 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14099267#comment-14099267
 ] 

Sergey Shelukhin commented on HIVE-7533:


This appears to have broken the build. Can you guys please fix or revert?

 sql std auth - set authorization privileges for tables when created from hive 
 cli
 -

 Key: HIVE-7533
 URL: https://issues.apache.org/jira/browse/HIVE-7533
 Project: Hive
  Issue Type: Bug
  Components: Authorization, SQLStandardAuthorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.14.0

 Attachments: HIVE-7533.1.patch, HIVE-7533.2.patch, HIVE-7533.3.patch


 As SQL standard authorization mode is not available from hive-cli, the 
 default permissions on a table for the table owner are not set when the 
 table is created from hive-cli.
 It should be possible to set the SQL-standards-based authorization as the 
 authorizer for hive-cli, which would update the configuration appropriately. 
 hive-cli data access is actually controlled by HDFS, not the authorization 
 policy; as a result, using sql std auth from hive-cli for authorization would 
 lead to a false sense of security. To avoid this, hive-cli users will have to 
 keep authorization disabled on hive-cli (in the case of sql std auth), but 
 this would affect only authorization checks, not configuration updates by the 
 authorizer.





[jira] [Commented] (HIVE-7533) sql std auth - set authorization privileges for tables when created from hive cli

2014-08-15 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14099269#comment-14099269
 ] 

Thejas M Nair commented on HIVE-7533:
-

Looks like this has broken the build. One of the test files committed in a 
recent patch fails to compile.
I had run only the auth*.q tests to validate, and had not run the ql/ dir tests.
Will fix in a few minutes.


 sql std auth - set authorization privileges for tables when created from hive 
 cli
 -

 Key: HIVE-7533
 URL: https://issues.apache.org/jira/browse/HIVE-7533
 Project: Hive
  Issue Type: Bug
  Components: Authorization, SQLStandardAuthorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.14.0

 Attachments: HIVE-7533.1.patch, HIVE-7533.2.patch, HIVE-7533.3.patch







[jira] [Reopened] (HIVE-7533) sql std auth - set authorization privileges for tables when created from hive cli

2014-08-15 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair reopened HIVE-7533:
-


 sql std auth - set authorization privileges for tables when created from hive 
 cli
 -

 Key: HIVE-7533
 URL: https://issues.apache.org/jira/browse/HIVE-7533
 Project: Hive
  Issue Type: Bug
  Components: Authorization, SQLStandardAuthorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.14.0

 Attachments: HIVE-7533.1.patch, HIVE-7533.2.patch, HIVE-7533.3.patch







[jira] [Commented] (HIVE-7705) there's a useless threadlocal in LBUtils that shows up in perf profiles

2014-08-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14099273#comment-14099273
 ] 

Hive QA commented on HIVE-7705:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12662180/HIVE-7705.03.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/351/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/351/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-351/

Messages:
{noformat}
 This message was trimmed, see log for full details 
warning(200): IdentifiersParser.g:179:5: 
Decision can match input such as KW_STRUCT using multiple alternatives: 4, 6

As a result, alternative(s) 6 were disabled for that input
warning(200): IdentifiersParser.g:179:5: 
Decision can match input such as KW_ARRAY using multiple alternatives: 2, 6

As a result, alternative(s) 6 were disabled for that input
warning(200): IdentifiersParser.g:179:5: 
Decision can match input such as KW_UNIONTYPE using multiple alternatives: 5, 
6

As a result, alternative(s) 6 were disabled for that input
warning(200): IdentifiersParser.g:261:5: 
Decision can match input such as KW_NULL using multiple alternatives: 1, 8

As a result, alternative(s) 8 were disabled for that input
warning(200): IdentifiersParser.g:261:5: 
Decision can match input such as KW_TRUE using multiple alternatives: 3, 8

As a result, alternative(s) 8 were disabled for that input
warning(200): IdentifiersParser.g:261:5: 
Decision can match input such as KW_DATE StringLiteral using multiple 
alternatives: 2, 3

As a result, alternative(s) 3 were disabled for that input
warning(200): IdentifiersParser.g:261:5: 
Decision can match input such as KW_FALSE using multiple alternatives: 3, 8

As a result, alternative(s) 8 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_CLUSTER 
KW_BY using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_MAP LPAREN 
using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_INSERT 
KW_OVERWRITE using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_GROUP 
KW_BY using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_INSERT 
KW_INTO using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_LATERAL 
KW_VIEW using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_SORT KW_BY 
using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_UNION 
KW_ALL using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_ORDER 
KW_BY using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as KW_BETWEEN KW_MAP LPAREN using multiple 
alternatives: 8, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:393:5: 
Decision can match input such as {KW_LIKE, KW_REGEXP, KW_RLIKE} KW_DISTRIBUTE 
KW_BY using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:518:5: 
Decision can match input such as {AMPERSAND..BITWISEXOR, DIV..DIVIDE, 
EQUAL..EQUAL_NS, GREATERTHAN..GREATERTHANOREQUALTO, KW_AND, KW_ARRAY, 
KW_BETWEEN..KW_BOOLEAN, KW_CASE, KW_DOUBLE, KW_FLOAT, KW_IF, KW_IN, KW_INT, 
KW_LIKE, KW_MAP, KW_NOT, KW_OR, KW_REGEXP, KW_RLIKE, KW_SMALLINT, 
KW_STRING..KW_STRUCT, KW_TINYINT, KW_UNIONTYPE, KW_WHEN, 
LESSTHAN..LESSTHANOREQUALTO, MINUS..NOTEQUAL, PLUS, STAR, TILDE} using 
multiple alternatives: 1, 3

As a result, alternative(s) 3 
