[jira] [Updated] (HIVE-8245) Collect table read entities at same time as view read entities

2014-09-26 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-8245:

Priority: Blocker  (was: Major)

 Collect table read entities at same time as view read entities 
 ---

 Key: HIVE-8245
 URL: https://issues.apache.org/jira/browse/HIVE-8245
 Project: Hive
  Issue Type: Improvement
  Components: CBO, Security
Affects Versions: 0.13.0, 0.14.0, 0.13.1
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
Priority: Blocker
 Fix For: 0.15.0

 Attachments: HIVE-8245.1.patch, HIVE-8245.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8221) authorize additional metadata read operations in metastore storage based authorization

2014-09-26 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-8221:

Attachment: HIVE-8221.2.patch

 authorize additional metadata read operations in metastore storage based 
 authorization 
 ---

 Key: HIVE-8221
 URL: https://issues.apache.org/jira/browse/HIVE-8221
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
Priority: Blocker
 Fix For: 0.14.0

 Attachments: HIVE-8221.1.patch, HIVE-8221.2.patch


 Table and database metadata read operations should also be authorized by 
 storage based authorization, when it is enabled in the Hive metastore.
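 For illustration only (not text from the patch), these are the kinds of 
 metadata read statements that would now pass through storage based 
 authorization; the database and table names below are made up, and the check 
 would presumably require read access on the corresponding warehouse directories:
 {code}
 -- metadata reads that go through the metastore's get_database / get_table calls
 DESCRIBE DATABASE sales_db;
 DESCRIBE sales_db.orders;
 {code}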



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 25960: HIVE-8221 : authorize additional metadata read operations in metastore storage based authorization

2014-09-26 Thread Thejas Nair

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25960/
---

(Updated Sept. 26, 2014, 6:16 a.m.)


Review request for hive and Sushanth Sowmyan.


Changes
---

HIVE-8221.2.patch - fixes test failures. Also made it possible to 
enable/disable read authorization on databases/tables using the 
hive.security.metastore.authorization.auth.reads flag.


Bugs: HIVE-8221
https://issues.apache.org/jira/browse/HIVE-8221


Repository: hive-git


Description
---

https://issues.apache.org/jira/browse/HIVE-8221


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 3a045b7 
  
hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/listener/NotificationListener.java
 664248d 
  
itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestMetaStoreEventListener.java
 3e4c34a 
  
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/security/StorageBasedMetastoreTestBase.java
 PRE-CREATION 
  
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/security/TestAuthorizationPreEventListener.java
 fff1ed2 
  
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/security/TestMetastoreAuthorizationProvider.java
 c869469 
  
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/security/TestMultiAuthorizationPreEventListener.java
 d98f599 
  
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/security/TestStorageBasedMetastoreAuthorizationDrops.java
 6cf8565 
  
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/security/TestStorageBasedMetastoreAuthorizationProvider.java
 dc08271 
  
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/security/TestStorageBasedMetastoreAuthorizationReads.java
 PRE-CREATION 
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java 
5b5102c 
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java 
881 
  
metastore/src/java/org/apache/hadoop/hive/metastore/events/PreEventContext.java 
4499485 
  
metastore/src/java/org/apache/hadoop/hive/metastore/events/PreReadDatabaseEvent.java
 PRE-CREATION 
  
metastore/src/java/org/apache/hadoop/hive/metastore/events/PreReadTableEvent.java
 PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/AuthorizationPreEventListener.java
 930285e 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/HiveAuthorizationProviderBase.java
 18a1b25 

Diff: https://reviews.apache.org/r/25960/diff/


Testing
---

new test cases


Thanks,

Thejas Nair



[jira] [Commented] (HIVE-8221) authorize additional metadata read operations in metastore storage based authorization

2014-09-26 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14148828#comment-14148828
 ] 

Thejas M Nair commented on HIVE-8221:
-

HIVE-8221.2.patch - fixes test failures. Also made it possible to 
enable/disable read authorization on databases/tables using the 
hive.security.metastore.authorization.auth.reads flag.

 authorize additional metadata read operations in metastore storage based 
 authorization 
 ---

 Key: HIVE-8221
 URL: https://issues.apache.org/jira/browse/HIVE-8221
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
Priority: Blocker
 Fix For: 0.14.0

 Attachments: HIVE-8221.1.patch, HIVE-8221.2.patch


 Table and database metadata read operations should also be authorized by 
 storage based authorization, when it is enabled in the Hive metastore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8266) create function using resource statement compilation should include resource URI entity

2014-09-26 Thread Prasad Mujumdar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasad Mujumdar updated HIVE-8266:
--
Attachment: HIVE-8266.2.patch

 create function using resource statement compilation should include 
 resource URI entity
 -

 Key: HIVE-8266
 URL: https://issues.apache.org/jira/browse/HIVE-8266
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 0.13.1
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar
 Attachments: HIVE-8266.2.patch


 The compiler adds the function name and db name as write entities for a create 
 function using resource statement. We should also include the resource URI 
 path as a write entity.
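 For reference, a sketch of the kind of statement being discussed; the function 
 name, class, and jar URI below are made up. The point of the patch is that, in 
 addition to the function and database names, compilation should also record 
 the jar URI as a write entity:
 {code}
 CREATE FUNCTION sales_db.my_upper AS 'com.example.udf.MyUpper'
   USING JAR 'hdfs:///udfs/my-udfs.jar';
 {code}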



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8266) create function using resource statement compilation should include resource URI entity

2014-09-26 Thread Prasad Mujumdar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasad Mujumdar updated HIVE-8266:
--
Status: Patch Available  (was: Open)

 create function using resource statement compilation should include 
 resource URI entity
 -

 Key: HIVE-8266
 URL: https://issues.apache.org/jira/browse/HIVE-8266
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 0.13.1
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar
 Attachments: HIVE-8266.2.patch


 The compiler adds the function name and db name as write entities for a create 
 function using resource statement. We should also include the resource URI 
 path as a write entity.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-8267) Exposing hbase cell latest timestamp through hbase columns mappings to hive columns.

2014-09-26 Thread Muhammad Ehsan ul Haque (JIRA)
Muhammad Ehsan ul Haque created HIVE-8267:
-

 Summary: Exposing hbase cell latest timestamp through hbase 
columns mappings to hive columns.
 Key: HIVE-8267
 URL: https://issues.apache.org/jira/browse/HIVE-8267
 Project: Hive
  Issue Type: Bug
  Components: HBase Handler
Affects Versions: 0.14.0
Reporter: Muhammad Ehsan ul Haque
Priority: Minor
 Fix For: 0.14.0


Previous attempts: HIVE-2781 (not accepted), HIVE-2828 (broken, and proposed with 
a restricted feature).
The feature is to make the HBase cell's latest timestamp accessible in a Hive query, 
by mapping the cell timestamp to a Hive column, using a mapping format like 
{code}:timestamp:cf:[optional qualifier or qualifier prefix]{code}
The Hive create table statement would be like:
h4. For mapping a cell's latest timestamp.
{code}
CREATE TABLE hive_hbase_table (key STRING, col1 STRING, col1_ts BIGINT)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:qualifier,:timestamp:cf:qualifier")
TBLPROPERTIES ("hbase.table.name" = "hbase_table");
{code}
h4. For mapping a column family's latest timestamps.
{code}
CREATE TABLE hive_hbase_table (key STRING, valuemap MAP<STRING, STRING>, 
timestampmap MAP<STRING, BIGINT>)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:,:timestamp:cf:")
TBLPROPERTIES ("hbase.table.name" = "hbase_table");
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-7615) Beeline should have an option for user to see the query progress

2014-09-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14148870#comment-14148870
 ] 

Hive QA commented on HIVE-7615:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12671180/HIVE-7615.4.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 6356 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.parse.TestParse.testParse_union
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/990/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/990/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-990/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12671180

 Beeline should have an option for user to see the query progress
 

 Key: HIVE-7615
 URL: https://issues.apache.org/jira/browse/HIVE-7615
 Project: Hive
  Issue Type: Improvement
  Components: CLI
Reporter: Dong Chen
Assignee: Dong Chen
Priority: Blocker
 Fix For: 0.14.0

 Attachments: HIVE-7615.1.patch, HIVE-7615.2.patch, HIVE-7615.3.patch, 
 HIVE-7615.4.patch, HIVE-7615.patch, complete_logs, simple_logs


 When executing a query in Beeline, the user should have an option to see the 
 progress through the output.
 Beeline could use the API introduced in HIVE-4629 to get and display the logs 
 to the client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-8268) Build hive by JDK 1.7 by default

2014-09-26 Thread Guo Ruijing (JIRA)
Guo Ruijing created HIVE-8268:
-

 Summary: Build hive by JDK 1.7 by default
 Key: HIVE-8268
 URL: https://issues.apache.org/jira/browse/HIVE-8268
 Project: Hive
  Issue Type: Improvement
  Components: Build Infrastructure
Reporter: Guo Ruijing


existing hive is built with JDK 1.6 by default:

<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>${maven.compiler.plugin.version}</version>
<configuration>
  <source>1.6</source>
  <target>1.6</target>
</configuration>

We may change to build hive with JDK 1.7 by default:
1. add

<properties>
  <java.source.version>1.6</java.source.version>
  <java.target.version>1.7</java.target.version>

2. change

<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>${maven.compiler.plugin.version}</version>
<configuration>
  <source>${java.source.version}</source>
  <target>${java.target.version}</target>
</configuration>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8182) beeline fails when executing multiple-line queries with trailing spaces

2014-09-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14148928#comment-14148928
 ] 

Hive QA commented on HIVE-8182:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12671243/HIVE-8182.1.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 6353 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.parse.TestParse.testParse_union
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/991/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/991/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-991/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12671243

 beeline fails when executing multiple-line queries with trailing spaces
 ---

 Key: HIVE-8182
 URL: https://issues.apache.org/jira/browse/HIVE-8182
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0, 0.13.1
Reporter: Yongzhi Chen
Assignee: Sergio Peña
 Fix For: 0.14.0

 Attachments: HIVE-8181.1.patch, HIVE-8182.1.patch


 As the title indicates, when executing a multi-line query with trailing spaces, 
 Beeline reports a syntax error: 
 Error: Error while compiling statement: FAILED: ParseException line 1:76 
 extraneous input ';' expecting EOF near '<EOF>' (state=42000,code=4)
 If the query is put on a single line, Beeline executes it successfully.
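 For illustration, a query of the shape that triggers the error when the first 
 line ends with trailing spaces (the table name is made up):
 {code}
 -- the first line below ends with trailing spaces
 SELECT col1, col2 FROM my_table   
 WHERE col1 = 'x';
 {code}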



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8186) Self join may fail if one side has VCs and other doesn't

2014-09-26 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-8186:

Attachment: HIVE-8186.1.patch.txt

There are some structural problems in MapOperator. First try.

 Self join may fail if one side has VCs and other doesn't
 

 Key: HIVE-8186
 URL: https://issues.apache.org/jira/browse/HIVE-8186
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-8186.1.patch.txt


 See comments. This also fails on trunk, although not on original join_vc query



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8186) Self join may fail if one side has VCs and other doesn't

2014-09-26 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-8186:

Status: Patch Available  (was: Open)

 Self join may fail if one side has VCs and other doesn't
 

 Key: HIVE-8186
 URL: https://issues.apache.org/jira/browse/HIVE-8186
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-8186.1.patch.txt


 See comments. This also fails on trunk, although not on original join_vc query



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-7156) Group-By operator stat-annotation only uses distinct approx to generate rollups

2014-09-26 Thread Prasanth J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth J updated HIVE-7156:
-
Attachment: HIVE-7156.8.patch

 Group-By operator stat-annotation only uses distinct approx to generate 
 rollups
 ---

 Key: HIVE-7156
 URL: https://issues.apache.org/jira/browse/HIVE-7156
 Project: Hive
  Issue Type: Sub-task
Affects Versions: 0.14.0
Reporter: Gopal V
Assignee: Prasanth J
 Attachments: HIVE-7156.1.patch, HIVE-7156.2.patch, HIVE-7156.3.patch, 
 HIVE-7156.4.patch, HIVE-7156.5.patch, HIVE-7156.6.patch, HIVE-7156.7.patch, 
 HIVE-7156.8.patch, HIVE-7156.8.patch, hive-debug.log.bz2


 The stats annotation for a group-by only annotates the reduce-side row-count 
 with the distinct values.
 The map-side gets the row-count as the rows output instead of distinct * 
 parallelism, while the reducer side gets the correct parallelism.
 {code}
 hive> explain select distinct L_SHIPDATE from lineitem;
   Vertices:
 Map 1 
 Map Operator Tree:
 TableScan
   alias: lineitem
   Statistics: Num rows: 589709 Data size: 4745677733354 
 Basic stats: COMPLETE Column stats: COMPLETE
   Select Operator
 expressions: l_shipdate (type: string)
 outputColumnNames: l_shipdate
 Statistics: Num rows: 589709 Data size: 4745677733354 
 Basic stats: COMPLETE Column stats: COMPLETE
 Group By Operator
   keys: l_shipdate (type: string)
   mode: hash
   outputColumnNames: _col0
   Statistics: Num rows: 589709 Data size: 
 563999032646 Basic stats: COMPLETE Column stats: COMPLETE
   Reduce Output Operator
 key expressions: _col0 (type: string)
 sort order: +
 Map-reduce partition columns: _col0 (type: string)
 Statistics: Num rows: 589709 Data size: 
 563999032646 Basic stats: COMPLETE Column stats: COMPLETE
 Execution mode: vectorized
 Reducer 2 
 Reduce Operator Tree:
   Group By Operator
 keys: KEY._col0 (type: string)
 mode: mergepartial
 outputColumnNames: _col0
 Statistics: Num rows: 1955 Data size: 183770 Basic stats: 
 COMPLETE Column stats: COMPLETE
 Select Operator
   expressions: _col0 (type: string)
   outputColumnNames: _col0
   Statistics: Num rows: 1955 Data size: 183770 Basic stats: 
 COMPLETE Column stats: COMPLETE
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-7156) Group-By operator stat-annotation only uses distinct approx to generate rollups

2014-09-26 Thread Prasanth J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth J updated HIVE-7156:
-
Attachment: (was: HIVE-7156.8.patch)

 Group-By operator stat-annotation only uses distinct approx to generate 
 rollups
 ---

 Key: HIVE-7156
 URL: https://issues.apache.org/jira/browse/HIVE-7156
 Project: Hive
  Issue Type: Sub-task
Affects Versions: 0.14.0
Reporter: Gopal V
Assignee: Prasanth J
 Attachments: HIVE-7156.1.patch, HIVE-7156.2.patch, HIVE-7156.3.patch, 
 HIVE-7156.4.patch, HIVE-7156.5.patch, HIVE-7156.6.patch, HIVE-7156.7.patch, 
 HIVE-7156.8.patch, HIVE-7156.8.patch, hive-debug.log.bz2


 The stats annotation for a group-by only annotates the reduce-side row-count 
 with the distinct values.
 The map-side gets the row-count as the rows output instead of distinct * 
 parallelism, while the reducer side gets the correct parallelism.
 {code}
 hive> explain select distinct L_SHIPDATE from lineitem;
   Vertices:
 Map 1 
 Map Operator Tree:
 TableScan
   alias: lineitem
   Statistics: Num rows: 589709 Data size: 4745677733354 
 Basic stats: COMPLETE Column stats: COMPLETE
   Select Operator
 expressions: l_shipdate (type: string)
 outputColumnNames: l_shipdate
 Statistics: Num rows: 589709 Data size: 4745677733354 
 Basic stats: COMPLETE Column stats: COMPLETE
 Group By Operator
   keys: l_shipdate (type: string)
   mode: hash
   outputColumnNames: _col0
   Statistics: Num rows: 589709 Data size: 
 563999032646 Basic stats: COMPLETE Column stats: COMPLETE
   Reduce Output Operator
 key expressions: _col0 (type: string)
 sort order: +
 Map-reduce partition columns: _col0 (type: string)
 Statistics: Num rows: 589709 Data size: 
 563999032646 Basic stats: COMPLETE Column stats: COMPLETE
 Execution mode: vectorized
 Reducer 2 
 Reduce Operator Tree:
   Group By Operator
 keys: KEY._col0 (type: string)
 mode: mergepartial
 outputColumnNames: _col0
 Statistics: Num rows: 1955 Data size: 183770 Basic stats: 
 COMPLETE Column stats: COMPLETE
 Select Operator
   expressions: _col0 (type: string)
   outputColumnNames: _col0
   Statistics: Num rows: 1955 Data size: 183770 Basic stats: 
 COMPLETE Column stats: COMPLETE
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-7156) Group-By operator stat-annotation only uses distinct approx to generate rollups

2014-09-26 Thread Prasanth J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth J updated HIVE-7156:
-
Attachment: HIVE-7156.8.patch

 Group-By operator stat-annotation only uses distinct approx to generate 
 rollups
 ---

 Key: HIVE-7156
 URL: https://issues.apache.org/jira/browse/HIVE-7156
 Project: Hive
  Issue Type: Sub-task
Affects Versions: 0.14.0
Reporter: Gopal V
Assignee: Prasanth J
 Attachments: HIVE-7156.1.patch, HIVE-7156.2.patch, HIVE-7156.3.patch, 
 HIVE-7156.4.patch, HIVE-7156.5.patch, HIVE-7156.6.patch, HIVE-7156.7.patch, 
 HIVE-7156.8.patch, HIVE-7156.8.patch, hive-debug.log.bz2


 The stats annotation for a group-by only annotates the reduce-side row-count 
 with the distinct values.
 The map-side gets the row-count as the rows output instead of distinct * 
 parallelism, while the reducer side gets the correct parallelism.
 {code}
 hive> explain select distinct L_SHIPDATE from lineitem;
   Vertices:
 Map 1 
 Map Operator Tree:
 TableScan
   alias: lineitem
   Statistics: Num rows: 589709 Data size: 4745677733354 
 Basic stats: COMPLETE Column stats: COMPLETE
   Select Operator
 expressions: l_shipdate (type: string)
 outputColumnNames: l_shipdate
 Statistics: Num rows: 589709 Data size: 4745677733354 
 Basic stats: COMPLETE Column stats: COMPLETE
 Group By Operator
   keys: l_shipdate (type: string)
   mode: hash
   outputColumnNames: _col0
   Statistics: Num rows: 589709 Data size: 
 563999032646 Basic stats: COMPLETE Column stats: COMPLETE
   Reduce Output Operator
 key expressions: _col0 (type: string)
 sort order: +
 Map-reduce partition columns: _col0 (type: string)
 Statistics: Num rows: 589709 Data size: 
 563999032646 Basic stats: COMPLETE Column stats: COMPLETE
 Execution mode: vectorized
 Reducer 2 
 Reduce Operator Tree:
   Group By Operator
 keys: KEY._col0 (type: string)
 mode: mergepartial
 outputColumnNames: _col0
 Statistics: Num rows: 1955 Data size: 183770 Basic stats: 
 COMPLETE Column stats: COMPLETE
 Select Operator
   expressions: _col0 (type: string)
   outputColumnNames: _col0
   Statistics: Num rows: 1955 Data size: 183770 Basic stats: 
 COMPLETE Column stats: COMPLETE
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8200) Make beeline use the hive-jdbc standalone jar

2014-09-26 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14148965#comment-14148965
 ] 

Vaibhav Gumashta commented on HIVE-8200:


[~deepesh] [~ashutoshc] I don't think the hive-jdbc uber jar is quite ready yet. I 
did some investigation and found that:
1. It is packaging a whole bunch of unneeded classes (from ql, serde, 
metastore, etc).
2. It might not be shading properly to pack some classes required for a secure 
setup (HadoopThriftAuthBridge23, Configuration). This will require further 
investigation and more testing. 

I'm creating the following jiras to handle the issue:
1. Revert this patch.
2. Create a more effective & accurate uber jar.

Let me know what you guys think.

 Make beeline use the hive-jdbc standalone jar
 -

 Key: HIVE-8200
 URL: https://issues.apache.org/jira/browse/HIVE-8200
 Project: Hive
  Issue Type: Bug
  Components: CLI, HiveServer2
Affects Versions: 0.14.0
Reporter: Deepesh Khandelwal
Assignee: Deepesh Khandelwal
 Fix For: 0.14.0

 Attachments: HIVE-8200.1.patch


 Hiveserver2 JDBC client beeline currently generously includes all the jars 
 under $HIVE_HOME/lib in its invocation. With the fix from HIVE-8129 it should 
 only need a few. This will be a good validation of the hive-jdbc standalone 
 jar.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-8269) Revert HIVE-8200 (Make beeline use the hive-jdbc standalone jar)

2014-09-26 Thread Vaibhav Gumashta (JIRA)
Vaibhav Gumashta created HIVE-8269:
--

 Summary: Revert HIVE-8200 (Make beeline use the hive-jdbc 
standalone jar)
 Key: HIVE-8269
 URL: https://issues.apache.org/jira/browse/HIVE-8269
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.14.0


More description on HIVE-8200.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8269) Revert HIVE-8200 (Make beeline use the hive-jdbc standalone jar)

2014-09-26 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-8269:
---
Status: Patch Available  (was: Open)

 Revert HIVE-8200 (Make beeline use the hive-jdbc standalone jar)
 

 Key: HIVE-8269
 URL: https://issues.apache.org/jira/browse/HIVE-8269
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.14.0

 Attachments: HIVE-8269.1.patch


 More description on HIVE-8200.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8269) Revert HIVE-8200 (Make beeline use the hive-jdbc standalone jar)

2014-09-26 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-8269:
---
Attachment: HIVE-8269.1.patch

cc [~ashutoshc] [~deepesh]

 Revert HIVE-8200 (Make beeline use the hive-jdbc standalone jar)
 

 Key: HIVE-8269
 URL: https://issues.apache.org/jira/browse/HIVE-8269
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.14.0

 Attachments: HIVE-8269.1.patch


 More description on HIVE-8200.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-8270) JDBC uber jar size is way too big - size should be reduced. It is also missing some classes required in secure setup

2014-09-26 Thread Vaibhav Gumashta (JIRA)
Vaibhav Gumashta created HIVE-8270:
--

 Summary: JDBC uber jar size is way too big - size should be 
reduced. It is also missing some classes required in secure setup
 Key: HIVE-8270
 URL: https://issues.apache.org/jira/browse/HIVE-8270
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.14.0
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.14.0


JDBC uber jar is ~ 28MB! Also missing some required classes for a secure setup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8162) hive.optimize.sort.dynamic.partition causes RuntimeException for inserting into dynamic partitioned table when map function is used in the subquery

2014-09-26 Thread Prasanth J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth J updated HIVE-8162:
-
Fix Version/s: 0.14.0

 hive.optimize.sort.dynamic.partition causes RuntimeException for inserting 
 into dynamic partitioned table when map function is used in the subquery 
 

 Key: HIVE-8162
 URL: https://issues.apache.org/jira/browse/HIVE-8162
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.0, 0.14.0
Reporter: Na Yang
Assignee: Prasanth J
Priority: Blocker
 Fix For: 0.14.0

 Attachments: 47rows.txt, HIVE-8162.1.patch, HIVE-8162.2.patch


 Exception:
 Diagnostic Messages for this Task:
 java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: 
 Hive Runtime Error: Unable to deserialize reduce input key from 
 x1x129x51x83x14x1x128x0x0x2x1x1x1x120x95x112x114x111x100x117x99x116x95x105x100x0x1x0x0x255
  with properties {columns=reducesinkkey0,reducesinkkey1,reducesinkkey2, 
 serialization.lib=org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe,
  serialization.sort.order=+++, columns.types=int,map<string,string>,int}
   at 
 org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:283)
   at 
 org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:518)
   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:462)
   at org.apache.hadoop.mapred.Child$4.run(Child.java:282)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1122)
   at org.apache.hadoop.mapred.Child.main(Child.java:271)
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
 Error: Unable to deserialize reduce input key from 
 x1x129x51x83x14x1x128x0x0x2x1x1x1x120x95x112x114x111x100x117x99x116x95x105x100x0x1x0x0x255
  with properties {columns=reducesinkkey0,reducesinkkey1,reducesinkkey2, 
 serialization.lib=org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe,
  serialization.sort.order=+++, columns.types=int,map<string,string>,int}
   at 
 org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:222)
   ... 7 more
 Caused by: org.apache.hadoop.hive.serde2.SerDeException: java.io.EOFException
   at 
 org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe.deserialize(BinarySortableSerDe.java:189)
   at 
 org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:220)
   ... 7 more
 Caused by: java.io.EOFException
   at 
 org.apache.hadoop.hive.serde2.binarysortable.InputByteBuffer.read(InputByteBuffer.java:54)
   at 
 org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe.deserializeInt(BinarySortableSerDe.java:533)
   at 
 org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe.deserialize(BinarySortableSerDe.java:236)
   at 
 org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe.deserialize(BinarySortableSerDe.java:185)
   ... 8 more
 Step to reproduce the exception:
 -
 CREATE TABLE associateddata(creative_id int,creative_group_id int,placement_id
 int,sm_campaign_id int,browser_id string, trans_type_p string,trans_time_p
 string,group_name string,event_name string,order_id string,revenue
 float,currency string, trans_type_ci string,trans_time_ci string,f16
 map<string,string>,campaign_id int,user_agent_cat string,geo_country
 string,geo_city string,geo_state string,geo_zip string,geo_dma string,geo_area
 string,geo_isp string,site_id int,section_id int,f16_ci map<string,string>)
 PARTITIONED BY(day_id int, hour_id int) ROW FORMAT DELIMITED FIELDS TERMINATED
 BY '\t';
 LOAD DATA LOCAL INPATH '/tmp/47rows.txt' INTO TABLE associateddata
 PARTITION(day_id=20140814,hour_id=2014081417);
 set hive.exec.dynamic.partition=true;
 set hive.exec.dynamic.partition.mode=nonstrict; 
 CREATE  EXTERNAL TABLE IF NOT EXISTS agg_pv_associateddata_c (
  vt_tran_qty int COMMENT 'The count of view
 thru transactions'
 , pair_value_txt  string  COMMENT 'F16 name values
 pairs'
 )
 PARTITIONED BY (day_id int)
 ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
 STORED AS TEXTFILE
 LOCATION '/user/prodman/agg_pv_associateddata_c';
 INSERT INTO TABLE agg_pv_associateddata_c PARTITION (day_id)
 select 2 as vt_tran_qty, pair_value_txt, day_id
  from (select map( 'x_product_id',coalesce(F16['x_product_id'],'') ) as 
 pair_value_txt , day_id , hour_id 
 from associateddata where hour_id = 2014081417 and sm_campaign_id in
 

[jira] [Updated] (HIVE-8162) hive.optimize.sort.dynamic.partition causes RuntimeException for inserting into dynamic partitioned table when map function is used in the subquery

2014-09-26 Thread Prasanth J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth J updated HIVE-8162:
-
Priority: Blocker  (was: Major)

 hive.optimize.sort.dynamic.partition causes RuntimeException for inserting 
 into dynamic partitioned table when map function is used in the subquery 
 

 Key: HIVE-8162
 URL: https://issues.apache.org/jira/browse/HIVE-8162
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.0, 0.14.0
Reporter: Na Yang
Assignee: Prasanth J
Priority: Blocker
 Fix For: 0.14.0

 Attachments: 47rows.txt, HIVE-8162.1.patch, HIVE-8162.2.patch


 Exception:
 Diagnostic Messages for this Task:
 java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: 
 Hive Runtime Error: Unable to deserialize reduce input key from 
 x1x129x51x83x14x1x128x0x0x2x1x1x1x120x95x112x114x111x100x117x99x116x95x105x100x0x1x0x0x255
  with properties {columns=reducesinkkey0,reducesinkkey1,reducesinkkey2, 
 serialization.lib=org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe,
  serialization.sort.order=+++, columns.types=int,map<string,string>,int}
   at 
 org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:283)
   at 
 org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:518)
   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:462)
   at org.apache.hadoop.mapred.Child$4.run(Child.java:282)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1122)
   at org.apache.hadoop.mapred.Child.main(Child.java:271)
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
 Error: Unable to deserialize reduce input key from 
 x1x129x51x83x14x1x128x0x0x2x1x1x1x120x95x112x114x111x100x117x99x116x95x105x100x0x1x0x0x255
  with properties {columns=reducesinkkey0,reducesinkkey1,reducesinkkey2, 
 serialization.lib=org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe,
  serialization.sort.order=+++, columns.types=int,map<string,string>,int}
   at 
 org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:222)
   ... 7 more
 Caused by: org.apache.hadoop.hive.serde2.SerDeException: java.io.EOFException
   at 
 org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe.deserialize(BinarySortableSerDe.java:189)
   at 
 org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:220)
   ... 7 more
 Caused by: java.io.EOFException
   at 
 org.apache.hadoop.hive.serde2.binarysortable.InputByteBuffer.read(InputByteBuffer.java:54)
   at 
 org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe.deserializeInt(BinarySortableSerDe.java:533)
   at 
 org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe.deserialize(BinarySortableSerDe.java:236)
   at 
 org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe.deserialize(BinarySortableSerDe.java:185)
   ... 8 more
 Step to reproduce the exception:
 -
 CREATE TABLE associateddata(creative_id int,creative_group_id int,placement_id
 int,sm_campaign_id int,browser_id string, trans_type_p string,trans_time_p
 string,group_name string,event_name string,order_id string,revenue
 float,currency string, trans_type_ci string,trans_time_ci string,f16
 map<string,string>,campaign_id int,user_agent_cat string,geo_country
 string,geo_city string,geo_state string,geo_zip string,geo_dma string,geo_area
 string,geo_isp string,site_id int,section_id int,f16_ci map<string,string>)
 PARTITIONED BY(day_id int, hour_id int) ROW FORMAT DELIMITED FIELDS TERMINATED
 BY '\t';
 LOAD DATA LOCAL INPATH '/tmp/47rows.txt' INTO TABLE associateddata
 PARTITION(day_id=20140814,hour_id=2014081417);
 set hive.exec.dynamic.partition=true;
 set hive.exec.dynamic.partition.mode=nonstrict; 
 CREATE  EXTERNAL TABLE IF NOT EXISTS agg_pv_associateddata_c (
  vt_tran_qty int COMMENT 'The count of view
 thru transactions'
 , pair_value_txt  string  COMMENT 'F16 name values
 pairs'
 )
 PARTITIONED BY (day_id int)
 ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
 STORED AS TEXTFILE
 LOCATION '/user/prodman/agg_pv_associateddata_c';
 INSERT INTO TABLE agg_pv_associateddata_c PARTITION (day_id)
 select 2 as vt_tran_qty, pair_value_txt, day_id
  from (select map( 'x_product_id',coalesce(F16['x_product_id'],'') ) as 
 pair_value_txt , day_id , hour_id 
 from associateddata where hour_id = 2014081417 and sm_campaign_id in
 

[jira] [Commented] (HIVE-8200) Make beeline use the hive-jdbc standalone jar

2014-09-26 Thread Damien Carol (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14148973#comment-14148973
 ] 

Damien Carol commented on HIVE-8200:


[~vgumashta] Wouldn't it be simpler to fix the errors in the hive-jdbc uber jar 
instead of reverting everything?

 Make beeline use the hive-jdbc standalone jar
 -

 Key: HIVE-8200
 URL: https://issues.apache.org/jira/browse/HIVE-8200
 Project: Hive
  Issue Type: Bug
  Components: CLI, HiveServer2
Affects Versions: 0.14.0
Reporter: Deepesh Khandelwal
Assignee: Deepesh Khandelwal
 Fix For: 0.14.0

 Attachments: HIVE-8200.1.patch


 Hiveserver2 JDBC client beeline currently generously includes all the jars 
 under $HIVE_HOME/lib in its invocation. With the fix from HIVE-8129 it should 
 only need a few. This will be a good validation of the hive-jdbc standalone 
 jar.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8267) Exposing hbase cell latest timestamp through hbase columns mappings to hive columns.

2014-09-26 Thread Muhammad Ehsan ul Haque (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Ehsan ul Haque updated HIVE-8267:
--
Attachment: HIVE-8267.0.patch

 Exposing hbase cell latest timestamp through hbase columns mappings to hive 
 columns.
 

 Key: HIVE-8267
 URL: https://issues.apache.org/jira/browse/HIVE-8267
 Project: Hive
  Issue Type: Bug
  Components: HBase Handler
Affects Versions: 0.14.0
Reporter: Muhammad Ehsan ul Haque
Priority: Minor
 Fix For: 0.14.0

 Attachments: HIVE-8267.0.patch


 Previous attempts: HIVE-2781 (not accepted), HIVE-2828 (broken, and proposed 
 with a restricted feature).
 The feature is to make the HBase cell's latest timestamp accessible in a Hive query, 
 by mapping the cell timestamp to a Hive column, using a mapping format like 
 {code}:timestamp:cf:[optional qualifier or qualifier prefix]{code}
 The Hive create table statement would be like:
 h4. For mapping a cell's latest timestamp.
 {code}
 CREATE TABLE hive_hbase_table (key STRING, col1 STRING, col1_ts BIGINT)
 STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
 WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:qualifier,:timestamp:cf:qualifier")
 TBLPROPERTIES ("hbase.table.name" = "hbase_table");
 {code}
 h4. For mapping a column family's latest timestamps.
 {code}
 CREATE TABLE hive_hbase_table (key STRING, valuemap MAP<STRING, STRING>, 
 timestampmap MAP<STRING, BIGINT>)
 STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
 WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:,:timestamp:cf:")
 TBLPROPERTIES ("hbase.table.name" = "hbase_table");
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8168) With dynamic partition enabled fact table selectivity is not taken into account when generating the physical plan (Use CBO cardinality using physical plan generation)

2014-09-26 Thread Prasanth J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth J updated HIVE-8168:
-
Priority: Major  (was: Critical)

 With dynamic partition enabled fact table selectivity is not taken into 
 account when generating the physical plan (Use CBO cardinality using physical 
 plan generation)
 --

 Key: HIVE-8168
 URL: https://issues.apache.org/jira/browse/HIVE-8168
 Project: Hive
  Issue Type: Bug
  Components: Tez
Affects Versions: 0.14.0
Reporter: Mostafa Mokhtar
Assignee: Prasanth J
  Labels: performance
 Fix For: vectorization-branch, 0.14.0


 When calculating estimated row counts & data size during physical plan 
 generation, StatsRulesProcFactory doesn't know that there will be dynamic 
 partition pruning, and it is hard to know how many partitions will qualify at 
 runtime. As a result, with dynamic partition pruning enabled, query 32 can 
 run with 570 tasks, compared to 70 tasks with dynamic partition pruning disabled and 
 actual partition filters on the fact table.
 The long-term solution for this issue is to use the cardinality estimates 
 from CBO, as it takes join selectivity and the like into account. Estimates from 
 CBO won't address the number of tasks used for the partitioned table, but 
 they will address the incorrect number of tasks used for the subsequent 
 reducers, where the majority of the slowdown comes from.
 Plan with dynamic partition pruning on: 
 {code}
Map 5 
 Map Operator Tree:
 TableScan
   alias: ss
   filterExpr: ss_store_sk is not null (type: boolean)
   Statistics: Num rows: 550076554 Data size: 47370018896 
 Basic stats: COMPLETE Column stats: NONE
   Filter Operator
 predicate: ss_store_sk is not null (type: boolean)
 Statistics: Num rows: 275038277 Data size: 23685009448 
 Basic stats: COMPLETE Column stats: NONE
 Map Join Operator
   condition map:
Inner Join 0 to 1
   condition expressions:
 0 {ss_store_sk} {ss_net_profit}
 1 
   keys:
 0 ss_sold_date_sk (type: int)
 1 d_date_sk (type: int)
   outputColumnNames: _col6, _col21
   input vertices:
 1 Map 1
   Statistics: Num rows: 302542112 Data size: 26053511168 
 Basic stats: COMPLETE Column stats: NONE
   Map Join Operator
 condition map:
  Inner Join 0 to 1
 condition expressions:
   0 {_col21}
   1 {s_county} {s_state}
 keys:
   0 _col6 (type: int)
   1 s_store_sk (type: int)
 outputColumnNames: _col21, _col80, _col81
 input vertices:
   1 Map 2
 Statistics: Num rows: 332796320 Data size: 
 28658862080 Basic stats: COMPLETE Column stats: NONE
 Map Join Operator
   condition map:
Left Semi Join 0 to 1
   condition expressions:
 0 {_col21} {_col80} {_col81}
 1 
   keys:
 0 _col81 (type: string)
 1 _col0 (type: string)
   outputColumnNames: _col21, _col80, _col81
   input vertices:
 1 Reducer 11
   Statistics: Num rows: 366075968 Data size: 
 31524749312 Basic stats: COMPLETE Column stats: NONE
   Select Operator
 expressions: _col81 (type: string), _col80 (type: 
 string), _col21 (type: float)
 outputColumnNames: _col81, _col80, _col21
 Statistics: Num rows: 366075968 Data size: 
 31524749312 Basic stats: COMPLETE Column stats: NONE
 Group By Operator
   aggregations: sum(_col21)
   keys: _col81 (type: string), _col80 (type: 
 string), '0' (type: string)
   mode: hash
   outputColumnNames: _col0, _col1, _col2, _col3
   

[jira] [Updated] (HIVE-7156) Group-By operator stat-annotation only uses distinct approx to generate rollups

2014-09-26 Thread Prasanth J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth J updated HIVE-7156:
-
Priority: Blocker  (was: Major)

 Group-By operator stat-annotation only uses distinct approx to generate 
 rollups
 ---

 Key: HIVE-7156
 URL: https://issues.apache.org/jira/browse/HIVE-7156
 Project: Hive
  Issue Type: Sub-task
Affects Versions: 0.14.0
Reporter: Gopal V
Assignee: Prasanth J
Priority: Blocker
 Fix For: 0.14.0

 Attachments: HIVE-7156.1.patch, HIVE-7156.2.patch, HIVE-7156.3.patch, 
 HIVE-7156.4.patch, HIVE-7156.5.patch, HIVE-7156.6.patch, HIVE-7156.7.patch, 
 HIVE-7156.8.patch, HIVE-7156.8.patch, hive-debug.log.bz2


 The stats annotation for a group-by only annotates the reduce-side row-count 
 with the distinct values.
 The map-side gets the row-count as the rows output instead of distinct * 
 parallelism, while the reducer side gets the correct parallelism.
 {code}
 hive> explain select distinct L_SHIPDATE from lineitem;
   Vertices:
 Map 1 
 Map Operator Tree:
 TableScan
   alias: lineitem
   Statistics: Num rows: 589709 Data size: 4745677733354 
 Basic stats: COMPLETE Column stats: COMPLETE
   Select Operator
 expressions: l_shipdate (type: string)
 outputColumnNames: l_shipdate
 Statistics: Num rows: 589709 Data size: 4745677733354 
 Basic stats: COMPLETE Column stats: COMPLETE
 Group By Operator
   keys: l_shipdate (type: string)
   mode: hash
   outputColumnNames: _col0
   Statistics: Num rows: 589709 Data size: 
 563999032646 Basic stats: COMPLETE Column stats: COMPLETE
   Reduce Output Operator
 key expressions: _col0 (type: string)
 sort order: +
 Map-reduce partition columns: _col0 (type: string)
 Statistics: Num rows: 589709 Data size: 
 563999032646 Basic stats: COMPLETE Column stats: COMPLETE
 Execution mode: vectorized
 Reducer 2 
 Reduce Operator Tree:
   Group By Operator
 keys: KEY._col0 (type: string)
 mode: mergepartial
 outputColumnNames: _col0
 Statistics: Num rows: 1955 Data size: 183770 Basic stats: 
 COMPLETE Column stats: COMPLETE
 Select Operator
   expressions: _col0 (type: string)
   outputColumnNames: _col0
   Statistics: Num rows: 1955 Data size: 183770 Basic stats: 
 COMPLETE Column stats: COMPLETE
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-7156) Group-By operator stat-annotation only uses distinct approx to generate rollups

2014-09-26 Thread Prasanth J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth J updated HIVE-7156:
-
Fix Version/s: 0.14.0

 Group-By operator stat-annotation only uses distinct approx to generate 
 rollups
 ---

 Key: HIVE-7156
 URL: https://issues.apache.org/jira/browse/HIVE-7156
 Project: Hive
  Issue Type: Sub-task
Affects Versions: 0.14.0
Reporter: Gopal V
Assignee: Prasanth J
Priority: Blocker
 Fix For: 0.14.0

 Attachments: HIVE-7156.1.patch, HIVE-7156.2.patch, HIVE-7156.3.patch, 
 HIVE-7156.4.patch, HIVE-7156.5.patch, HIVE-7156.6.patch, HIVE-7156.7.patch, 
 HIVE-7156.8.patch, HIVE-7156.8.patch, hive-debug.log.bz2


 The stats annotation for a group-by only annotates the reduce-side row-count 
 with the distinct values.
 The map-side gets the row-count as the rows output instead of distinct * 
 parallelism, while the reducer side gets the correct parallelism.
 {code}
 hive> explain select distinct L_SHIPDATE from lineitem;
   Vertices:
 Map 1 
 Map Operator Tree:
 TableScan
   alias: lineitem
   Statistics: Num rows: 589709 Data size: 4745677733354 
 Basic stats: COMPLETE Column stats: COMPLETE
   Select Operator
 expressions: l_shipdate (type: string)
 outputColumnNames: l_shipdate
 Statistics: Num rows: 589709 Data size: 4745677733354 
 Basic stats: COMPLETE Column stats: COMPLETE
 Group By Operator
   keys: l_shipdate (type: string)
   mode: hash
   outputColumnNames: _col0
   Statistics: Num rows: 589709 Data size: 
 563999032646 Basic stats: COMPLETE Column stats: COMPLETE
   Reduce Output Operator
 key expressions: _col0 (type: string)
 sort order: +
 Map-reduce partition columns: _col0 (type: string)
 Statistics: Num rows: 589709 Data size: 
 563999032646 Basic stats: COMPLETE Column stats: COMPLETE
 Execution mode: vectorized
 Reducer 2 
 Reduce Operator Tree:
   Group By Operator
 keys: KEY._col0 (type: string)
 mode: mergepartial
 outputColumnNames: _col0
 Statistics: Num rows: 1955 Data size: 183770 Basic stats: 
 COMPLETE Column stats: COMPLETE
 Select Operator
   expressions: _col0 (type: string)
   outputColumnNames: _col0
   Statistics: Num rows: 1955 Data size: 183770 Basic stats: 
 COMPLETE Column stats: COMPLETE
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8196) Joining on partition columns with fetch column stats enabled results in very small CE which negatively affects query performance

2014-09-26 Thread Prasanth J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth J updated HIVE-8196:
-
Priority: Blocker  (was: Critical)

 Joining on partition columns with fetch column stats enabled results in very 
 small CE which negatively affects query performance 
 -

 Key: HIVE-8196
 URL: https://issues.apache.org/jira/browse/HIVE-8196
 Project: Hive
  Issue Type: Bug
  Components: Physical Optimizer
Affects Versions: 0.14.0
Reporter: Mostafa Mokhtar
Assignee: Prasanth J
Priority: Blocker
  Labels: performance
 Fix For: 0.14.0

 Attachments: HIVE-8196.1.patch


 To make the best of dynamic partition pruning, joins should be on the 
 partitioning columns, which results in dynamically pruning the partitions of 
 the fact table based on the qualifying keys from the dimension table. However, 
 this type of join negatively affects cardinality estimates when fetch 
 column stats is enabled.
 Currently we don't have statistics for partition columns, and as a result NDV 
 is set to the row count, which negatively affects the estimated join 
 selectivity.
 A workaround is to capture statistics for partition columns, or to use the 
 number of partitions in case dynamic partitioning is used.
 StatsUtils.getColStatisticsFromExpression is where the count of distinct values 
 gets set to the row count: 
 {code}
   if (encd.getIsPartitionColOrVirtualCol()) {
 // virtual columns
 colType = encd.getTypeInfo().getTypeName();
 countDistincts = numRows;
 oi = encd.getWritableObjectInspector();
 {code}
 Query used to repro the issue :
 {code}
 set hive.stats.fetch.column.stats=true;
 set hive.tez.dynamic.partition.pruning=true;
 explain select d_date 
 from store_sales, date_dim 
 where 
 store_sales.ss_sold_date_sk = date_dim.d_date_sk and 
 date_dim.d_year = 1998;
 {code}
 Plan 
 {code}
 STAGE DEPENDENCIES:
   Stage-1 is a root stage
   Stage-0 depends on stages: Stage-1
 STAGE PLANS:
   Stage: Stage-1
 Tez
   Edges:
 Map 1 - Map 2 (BROADCAST_EDGE)
   DagName: mmokhtar_20140919180404_945d29f5-d041-4420-9666-1c5d64fa6540:8
   Vertices:
 Map 1
 Map Operator Tree:
 TableScan
   alias: store_sales
   filterExpr: ss_sold_date_sk is not null (type: boolean)
   Statistics: Num rows: 550076554 Data size: 47370018816 
 Basic stats: COMPLETE Column stats: COMPLETE
   Map Join Operator
 condition map:
  Inner Join 0 to 1
 condition expressions:
   0 {ss_sold_date_sk}
   1 {d_date_sk} {d_date}
 keys:
   0 ss_sold_date_sk (type: int)
   1 d_date_sk (type: int)
 outputColumnNames: _col22, _col26, _col28
 input vertices:
   1 Map 2
 Statistics: Num rows: 652 Data size: 66504 Basic stats: 
 COMPLETE Column stats: COMPLETE
 Filter Operator
   predicate: (_col22 = _col26) (type: boolean)
   Statistics: Num rows: 326 Data size: 33252 Basic stats: 
 COMPLETE Column stats: COMPLETE
   Select Operator
 expressions: _col28 (type: string)
 outputColumnNames: _col0
 Statistics: Num rows: 326 Data size: 30644 Basic 
 stats: COMPLETE Column stats: COMPLETE
 File Output Operator
   compressed: false
   Statistics: Num rows: 326 Data size: 30644 Basic 
 stats: COMPLETE Column stats: COMPLETE
   table:
   input format: 
 org.apache.hadoop.mapred.TextInputFormat
   output format: 
 org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
   serde: 
 org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
 Execution mode: vectorized
 Map 2
 Map Operator Tree:
 TableScan
   alias: date_dim
   filterExpr: (d_date_sk is not null and (d_year = 1998)) 
 (type: boolean)
   Statistics: Num rows: 73049 Data size: 81741831 Basic 
 stats: COMPLETE Column stats: COMPLETE
   Filter Operator
 predicate: (d_date_sk is not null and (d_year = 1998)) 
 (type: boolean)
 Statistics: Num rows: 652 Data size: 66504 Basic stats: 
 COMPLETE Column stats: COMPLETE

[jira] [Updated] (HIVE-8151) Dynamic partition sort optimization inserts record wrongly to partition when used with GroupBy

2014-09-26 Thread Prasanth J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth J updated HIVE-8151:
-
Priority: Blocker  (was: Critical)

 Dynamic partition sort optimization inserts record wrongly to partition when 
 used with GroupBy
 --

 Key: HIVE-8151
 URL: https://issues.apache.org/jira/browse/HIVE-8151
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0, 0.13.1
Reporter: Prasanth J
Assignee: Prasanth J
Priority: Blocker
 Attachments: HIVE-8151.1.patch, HIVE-8151.2.patch


 HIVE-6455 added the dynamic partition sort optimization. It added a startGroup() 
 method to the FileSink operator to look for changes in the reduce key when 
 creating partition directories. This method however is not reliable, as the key 
 passed to startGroup() is different from the key seen in processOp(): 
 startGroup() is called with the newly changed key, whereas processOp() is called 
 with the previously aggregated key. This results in processOp() writing the 
 last row of the previous group as the first row of the next group. This happens only 
 when used with a group-by operator.
 The fix is to not rely on startGroup() and do the partition directory 
 creation in processOp() itself.
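 For context, a sketch of the kind of statement that exercises this code path 
 (dynamic partition sort optimization combined with a group by); the table and 
 column names are made up:
 {code}
 set hive.optimize.sort.dynamic.partition=true;
 set hive.exec.dynamic.partition.mode=nonstrict;
 INSERT OVERWRITE TABLE sales_agg PARTITION (ds)
 SELECT item_id, count(*) AS cnt, ds
 FROM sales
 GROUP BY item_id, ds;
 {code}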



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8078) ORC Delta encoding corrupts data when delta overflows long

2014-09-26 Thread Prasanth J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth J updated HIVE-8078:
-
Fix Version/s: 0.14.0

 ORC Delta encoding corrupts data when delta overflows long
 --

 Key: HIVE-8078
 URL: https://issues.apache.org/jira/browse/HIVE-8078
 Project: Hive
  Issue Type: Bug
  Components: File Formats
Affects Versions: 0.14.0, 0.13.1
Reporter: Tim Patterson
Assignee: Prasanth J
Priority: Critical
 Fix For: 0.14.0

 Attachments: HIVE-8078-testcase.patch, HIVE-8078.1.patch, 
 HIVE-8078.2.patch, HIVE-8078.3.patch, HIVE-8078.4.patch, HIVE-8078.5.patch


 There is an issue with the integer encoding that can cause corruption in 
 certain cases.
 The following 3 longs cause this failure:
 4513343538618202711
 2911390882471569739
 -9181829309989854913
 I believe that even though the numbers are in decreasing order, the delta 
 between the last two numbers overflows, causing a positive delta; in this case 
 the last value ends up being corrupted (the delta is applied with the wrong 
 sign, resulting in -3442132998776557225 instead of -9181829309989854913). 
 Concretely, the true delta -9181829309989854913 - 2911390882471569739 = 
 -12093220192461424652 is below Long.MIN_VALUE and wraps to +6353523881248126964; 
 applying that wrapped delta with the wrong sign gives 
 2911390882471569739 - 6353523881248126964 = -3442132998776557225.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8162) Dynamic sort optimization propagates additional columns even in the absence of order by

2014-09-26 Thread Prasanth J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth J updated HIVE-8162:
-
Summary: Dynamic sort optimization propagates additional columns even in 
the absence of order by  (was: hive.optimize.sort.dynamic.partition causes 
RuntimeException for inserting into dynamic partitioned table when map function 
is used in the subquery )

 Dynamic sort optimization propagates additional columns even in the absence 
 of order by
 ---

 Key: HIVE-8162
 URL: https://issues.apache.org/jira/browse/HIVE-8162
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.0, 0.14.0
Reporter: Na Yang
Assignee: Prasanth J
Priority: Blocker
 Fix For: 0.14.0

 Attachments: 47rows.txt, HIVE-8162.1.patch, HIVE-8162.2.patch


 Exception:
 Diagnostic Messages for this Task:
 java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: 
 Hive Runtime Error: Unable to deserialize reduce input key from 
 x1x129x51x83x14x1x128x0x0x2x1x1x1x120x95x112x114x111x100x117x99x116x95x105x100x0x1x0x0x255
  with properties {columns=reducesinkkey0,reducesinkkey1,reducesinkkey2, 
 serialization.lib=org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe,
  serialization.sort.order=+++, columns.types=int,map<string,string>,int}
   at 
 org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:283)
   at 
 org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:518)
   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:462)
   at org.apache.hadoop.mapred.Child$4.run(Child.java:282)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1122)
   at org.apache.hadoop.mapred.Child.main(Child.java:271)
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
 Error: Unable to deserialize reduce input key from 
 x1x129x51x83x14x1x128x0x0x2x1x1x1x120x95x112x114x111x100x117x99x116x95x105x100x0x1x0x0x255
  with properties {columns=reducesinkkey0,reducesinkkey1,reducesinkkey2, 
 serialization.lib=org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe,
  serialization.sort.order=+++, columns.types=int,map<string,string>,int}
   at 
 org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:222)
   ... 7 more
 Caused by: org.apache.hadoop.hive.serde2.SerDeException: java.io.EOFException
   at 
 org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe.deserialize(BinarySortableSerDe.java:189)
   at 
 org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:220)
   ... 7 more
 Caused by: java.io.EOFException
   at 
 org.apache.hadoop.hive.serde2.binarysortable.InputByteBuffer.read(InputByteBuffer.java:54)
   at 
 org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe.deserializeInt(BinarySortableSerDe.java:533)
   at 
 org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe.deserialize(BinarySortableSerDe.java:236)
   at 
 org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe.deserialize(BinarySortableSerDe.java:185)
   ... 8 more
 Step to reproduce the exception:
 -
 CREATE TABLE associateddata(creative_id int,creative_group_id int,placement_id
 int,sm_campaign_id int,browser_id string, trans_type_p string,trans_time_p
 string,group_name string,event_name string,order_id string,revenue
 float,currency string, trans_type_ci string,trans_time_ci string,f16
 map<string,string>,campaign_id int,user_agent_cat string,geo_country
 string,geo_city string,geo_state string,geo_zip string,geo_dma string,geo_area
 string,geo_isp string,site_id int,section_id int,f16_ci map<string,string>)
 PARTITIONED BY(day_id int, hour_id int) ROW FORMAT DELIMITED FIELDS TERMINATED
 BY '\t';
 LOAD DATA LOCAL INPATH '/tmp/47rows.txt' INTO TABLE associateddata
 PARTITION(day_id=20140814,hour_id=2014081417);
 set hive.exec.dynamic.partition=true;
 set hive.exec.dynamic.partition.mode=nonstrict; 
 CREATE  EXTERNAL TABLE IF NOT EXISTS agg_pv_associateddata_c (
  vt_tran_qty int COMMENT 'The count of view
 thru transactions'
 , pair_value_txt  string  COMMENT 'F16 name values
 pairs'
 )
 PARTITIONED BY (day_id int)
 ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
 STORED AS TEXTFILE
 LOCATION '/user/prodman/agg_pv_associateddata_c';
 INSERT INTO TABLE agg_pv_associateddata_c PARTITION (day_id)
 select 2 as vt_tran_qty, pair_value_txt, day_id
  from (select map( 'x_product_id',coalesce(F16['x_product_id'],'') ) as 
 pair_value_txt , 

[jira] [Updated] (HIVE-8267) Exposing hbase cell latest timestamp through hbase columns mappings to hive columns.

2014-09-26 Thread Muhammad Ehsan ul Haque (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Ehsan ul Haque updated HIVE-8267:
--
Status: Patch Available  (was: Open)

Patch available.

Unable to put a review request on Review Board, as the patch fails to upload. I am 
new to Review Board.

Feature documentation. Perhaps I should update the page
https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration#HBaseIntegration-HiveHBaseIntegration
* A cell timestamp mapping using {{:timestamp:cf:qualifier}} must be mapped to a 
{{BIGINT}} column of hive.
* A column family cells timestamp mapping using {{:timestamp:cf:}} or 
{{:timestamp:cf:prefix.*}} must be mapped to a hive {{MAP<HIVE PRIMITIVE 
TYPE, BIGINT>}}.
* It is not allowed to insert only a timestamp without a cell value. Use 
{{hbase.put.default.cell.value = default value}} in the {{SERDEPROPERTIES}} 
to supply a default cell value when the cell value is not mapped or may be null.
* Inserting with a lower timestamp than the current latest timestamp of the 
cell will be stored as an older version (see the sketch below).
* If cell value and timestamp are both mapped and the timestamp field is {{null}}, 
it is filled with the {{SERDEPROPERTIES}} {{hbase.put.timestamp}} if provided, 
otherwise with the HBase current timestamp.
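For the old-version behaviour above, a small sketch assuming the HBase 1.x client 
API (Put.addColumn with an explicit timestamp); the row key and cell value here are 
made up, this is not part of the patch.
{code}
// Illustrative only: a Put carrying a timestamp lower than the cell's current
// latest timestamp is stored as an older version, not as the new latest value.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class TimestampedPutSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("hbase_table"))) {
      Put put = new Put(Bytes.toBytes("row1"));
      long olderTs = System.currentTimeMillis() - 60_000L; // one minute in the past
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("qualifier"), olderTs,
          Bytes.toBytes("old value"));
      table.put(put); // lands as an older version if a newer-timestamped cell exists
    }
  }
}
{code}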


 Exposing hbase cell latest timestamp through hbase columns mappings to hive 
 columns.
 

 Key: HIVE-8267
 URL: https://issues.apache.org/jira/browse/HIVE-8267
 Project: Hive
  Issue Type: Bug
  Components: HBase Handler
Affects Versions: 0.14.0
Reporter: Muhammad Ehsan ul Haque
Priority: Minor
 Fix For: 0.14.0

 Attachments: HIVE-8267.0.patch


 Previous attempts HIVE-2781 (not accepted), HIVE-2828 (broken and proposed 
 with restricted feature).
 The feature is to have hbase cell latest timestamp accessible in hive query, 
 by mapping the cell timestamp with a hive column, using mapping format like 
 {code}:timestamp:cf:[optional qualifier or qualifier prefix]{code}
 The hive create table statement would be like
 h4. For mapping a cell latest timestamp.
 {code}
 CREATE TABLE hive_hbase_table (key STRING, col1 STRING, col1_ts BIGINT)
 STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
 WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:qualifier,:timestamp:cf:qualifier")
 TBLPROPERTIES ("hbase.table.name" = "hbase_table");
 {code}
 h4. For mapping a column family latest timestamp.
 {code}
 CREATE TABLE hive_hbase_table (key STRING, valuemap MAP<STRING, STRING>, 
 timestampmap MAP<STRING, BIGINT>)
 STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
 WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:,:timestamp:cf:")
 TBLPROPERTIES ("hbase.table.name" = "hbase_table");
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8162) Dynamic sort optimization propagates additional columns even in the absence of order by

2014-09-26 Thread Prasanth J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth J updated HIVE-8162:
-
Attachment: HIVE-8162.3.patch

Fixed minimr test failure.

 Dynamic sort optimization propagates additional columns even in the absence 
 of order by
 ---

 Key: HIVE-8162
 URL: https://issues.apache.org/jira/browse/HIVE-8162
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.0, 0.14.0
Reporter: Na Yang
Assignee: Prasanth J
Priority: Blocker
 Fix For: 0.14.0

 Attachments: 47rows.txt, HIVE-8162.1.patch, HIVE-8162.2.patch, 
 HIVE-8162.3.patch


 Exception:
 Diagnostic Messages for this Task:
 java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: 
 Hive Runtime Error: Unable to deserialize reduce input key from 
 x1x129x51x83x14x1x128x0x0x2x1x1x1x120x95x112x114x111x100x117x99x116x95x105x100x0x1x0x0x255
  with properties {columns=reducesinkkey0,reducesinkkey1,reducesinkkey2, 
 serialization.lib=org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe,
  serialization.sort.order=+++, columns.types=int,map<string,string>,int}
   at 
 org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:283)
   at 
 org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:518)
   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:462)
   at org.apache.hadoop.mapred.Child$4.run(Child.java:282)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1122)
   at org.apache.hadoop.mapred.Child.main(Child.java:271)
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
 Error: Unable to deserialize reduce input key from 
 x1x129x51x83x14x1x128x0x0x2x1x1x1x120x95x112x114x111x100x117x99x116x95x105x100x0x1x0x0x255
  with properties {columns=reducesinkkey0,reducesinkkey1,reducesinkkey2, 
 serialization.lib=org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe,
  serialization.sort.order=+++, columns.types=int,map<string,string>,int}
   at 
 org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:222)
   ... 7 more
 Caused by: org.apache.hadoop.hive.serde2.SerDeException: java.io.EOFException
   at 
 org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe.deserialize(BinarySortableSerDe.java:189)
   at 
 org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:220)
   ... 7 more
 Caused by: java.io.EOFException
   at 
 org.apache.hadoop.hive.serde2.binarysortable.InputByteBuffer.read(InputByteBuffer.java:54)
   at 
 org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe.deserializeInt(BinarySortableSerDe.java:533)
   at 
 org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe.deserialize(BinarySortableSerDe.java:236)
   at 
 org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe.deserialize(BinarySortableSerDe.java:185)
   ... 8 more
 Step to reproduce the exception:
 -
 CREATE TABLE associateddata(creative_id int,creative_group_id int,placement_id
 int,sm_campaign_id int,browser_id string, trans_type_p string,trans_time_p
 string,group_name string,event_name string,order_id string,revenue
 float,currency string, trans_type_ci string,trans_time_ci string,f16
 map<string,string>,campaign_id int,user_agent_cat string,geo_country
 string,geo_city string,geo_state string,geo_zip string,geo_dma string,geo_area
 string,geo_isp string,site_id int,section_id int,f16_ci map<string,string>)
 PARTITIONED BY(day_id int, hour_id int) ROW FORMAT DELIMITED FIELDS TERMINATED
 BY '\t';
 LOAD DATA LOCAL INPATH '/tmp/47rows.txt' INTO TABLE associateddata
 PARTITION(day_id=20140814,hour_id=2014081417);
 set hive.exec.dynamic.partition=true;
 set hive.exec.dynamic.partition.mode=nonstrict; 
 CREATE  EXTERNAL TABLE IF NOT EXISTS agg_pv_associateddata_c (
  vt_tran_qty int COMMENT 'The count of view
 thru transactions'
 , pair_value_txt  string  COMMENT 'F16 name values
 pairs'
 )
 PARTITIONED BY (day_id int)
 ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
 STORED AS TEXTFILE
 LOCATION '/user/prodman/agg_pv_associateddata_c';
 INSERT INTO TABLE agg_pv_associateddata_c PARTITION (day_id)
 select 2 as vt_tran_qty, pair_value_txt, day_id
  from (select map( 'x_product_id',coalesce(F16['x_product_id'],'') ) as 
 pair_value_txt , day_id , hour_id 
 from associateddata where hour_id = 2014081417 and sm_campaign_id in
 

[jira] [Updated] (HIVE-8267) Exposing hbase cell latest timestamp through hbase columns mappings to hive columns.

2014-09-26 Thread Muhammad Ehsan ul Haque (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Ehsan ul Haque updated HIVE-8267:
--
Description: 
Previous attempts HIVE-2781 (not accepted), HIVE-2828 (broken and proposed with 
restricted feature).
The feature is to have hbase cell latest timestamp accessible in hive query, by 
mapping the cell timestamp with a hive column, using mapping format like 
{code}:timestamp:cf:[optional qualifier or qualifier prefix]{code}
The hive create table statement would be like
h4. For mapping a cell latest timestamp.
{code}
CREATE TABLE hive_hbase_table (key STRING, col1 STRING, col1_ts BIGINT)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:qualifier,:timestamp:cf:qualifier")
TBLPROPERTIES ("hbase.table.name" = "hbase_table");
{code}
h4. For mapping a column family latest timestamp.
{code}
CREATE TABLE hive_hbase_table (key STRING, valuemap MAP<STRING, STRING>, 
timestampmap MAP<STRING, BIGINT>)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:,:timestamp:cf:")
TBLPROPERTIES ("hbase.table.name" = "hbase_table");
{code}
h4. Providing default cell value
{code}
CREATE TABLE hive_hbase_table(key int, value string, value_timestamp bigint)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = "cf:qualifier,:timestamp:cf:qualifier",
  "hbase.put.default.cell.value" = "default value")
TBLPROPERTIES ("hbase.table.name" = "hbase_table");
{code}

  was:
Previous attempts HIVE-2781 (not accepted), HIVE-2828 (broken and proposed with 
restricted feature).
The feature is to have hbase cell latest timestamp accessible in hive query, by 
mapping the cell timestamp with a hive column, using mapping format like 
{code}:timestamp:cf:[optional qualifier or qualifier prefix]{code}
The hive create table statement would be like
h4. For mapping a cell latest timestamp.
{code}
CREATE TABLE hive_hbase_table (key STRING, col1 STRING, col1_ts BIGINT)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:qualifier,:timestamp:cf:qualifier")
TBLPROPERTIES ("hbase.table.name" = "hbase_table");
{code}
h4. For mapping a column family latest timestamp.
{code}
CREATE TABLE hive_hbase_table (key STRING, valuemap MAP<STRING, STRING>, 
timestampmap MAP<STRING, BIGINT>)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:,:timestamp:cf:")
TBLPROPERTIES ("hbase.table.name" = "hbase_table");
{code}


 Exposing hbase cell latest timestamp through hbase columns mappings to hive 
 columns.
 

 Key: HIVE-8267
 URL: https://issues.apache.org/jira/browse/HIVE-8267
 Project: Hive
  Issue Type: Bug
  Components: HBase Handler
Affects Versions: 0.14.0
Reporter: Muhammad Ehsan ul Haque
Priority: Minor
 Fix For: 0.14.0

 Attachments: HIVE-8267.0.patch


 Previous attempts HIVE-2781 (not accepted), HIVE-2828 (broken and proposed 
 with restricted feature).
 The feature is to have hbase cell latest timestamp accessible in hive query, 
 by mapping the cell timestamp with a hive column, using mapping format like 
 {code}:timestamp:cf:[optional qualifier or qualifier prefix]{code}
 The hive create table statement would be like
 h4. For mapping a cell latest timestamp.
 {code}
 CREATE TABLE hive_hbase_table (key STRING, col1 STRING, col1_ts BIGINT)
 STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
 WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:qualifier,:timestamp:cf:qualifier")
 TBLPROPERTIES ("hbase.table.name" = "hbase_table");
 {code}
 h4. For mapping a column family latest timestamp.
 {code}
 CREATE TABLE hive_hbase_table (key STRING, valuemap MAP<STRING, STRING>, 
 timestampmap MAP<STRING, BIGINT>)
 STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
 WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:,:timestamp:cf:")
 TBLPROPERTIES ("hbase.table.name" = "hbase_table");
 {code}
 h4. Providing default cell value
 {code}
 CREATE TABLE hive_hbase_table(key int, value string, value_timestamp bigint)
 STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
 WITH SERDEPROPERTIES ("hbase.columns.mapping" = "cf:qualifier,:timestamp:cf:qualifier",
   "hbase.put.default.cell.value" = "default value")
 TBLPROPERTIES ("hbase.table.name" = "hbase_table");
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8264) Math UDFs in Reducer-with-vectorization fail with ArrayIndexOutOfBoundsException

2014-09-26 Thread Thiruvel Thirumoolan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14148981#comment-14148981
 ] 

Thiruvel Thirumoolan commented on HIVE-8264:


Thanks [~mmccline], it appears to fix the problem. After applying the patch in 
HIVE-8171, I no longer see any exceptions and the query runs fine.

 Math UDFs in Reducer-with-vectorization fail with 
 ArrayIndexOutOfBoundsException
 

 Key: HIVE-8264
 URL: https://issues.apache.org/jira/browse/HIVE-8264
 Project: Hive
  Issue Type: Bug
  Components: Tez, UDF, Vectorization
Affects Versions: 0.14.0
 Environment: Hive trunk - as of today
 Tez - 0.5.0
 Hadoop - 2.5
Reporter: Thiruvel Thirumoolan
  Labels: mathfunction, tez, vectorization

 Following queries are representative of the exceptions we are seeing with 
 trunk. These queries pass if vectorization is disabled (or if limit is 
 removed, which means no reducer).
 select name, log2(0) from (select name from mytable limit 1) t;
 select name, rand() from (select name from mytable limit 1) t;
 Similar patterns occur with other Math UDFs.
 Exception:
 ], TaskAttempt 3 failed, info=[Error: Failure while running 
 task:java.lang.RuntimeException: java.lang.RuntimeException: 
 org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
 processing vector batch (tag=0)
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:177)
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:142)
   at 
 org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:324)
   at 
 org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:180)
   at 
 org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:172)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1637)
   at 
 org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:172)
   at 
 org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:167)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:722)
 Caused by: java.lang.RuntimeException: 
 org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
 processing vector batch (tag=0)
   at 
 org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:254)
   at 
 org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:167)
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:154)
   ... 14 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
 Error while processing vector batch (tag=0)
   at 
 org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.processVectors(ReduceRecordSource.java:360)
   at 
 org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:242)
   ... 16 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error evaluating 
 null
   at 
 org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.processOp(VectorSelectOperator.java:127)
   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:801)
   at 
 org.apache.hadoop.hive.ql.exec.vector.VectorLimitOperator.processOp(VectorLimitOperator.java:47)
   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:801)
   at 
 org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.processOp(VectorSelectOperator.java:139)
   at 
 org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.processVectors(ReduceRecordSource.java:347)
   ... 17 more
 Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
   at 
 org.apache.hadoop.hive.ql.exec.vector.expressions.ConstantVectorExpression.evaluateLong(ConstantVectorExpression.java:102)
   at 
 org.apache.hadoop.hive.ql.exec.vector.expressions.ConstantVectorExpression.evaluate(ConstantVectorExpression.java:150)
   at 
 org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.processOp(VectorSelectOperator.java:125)
   ... 22 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8200) Make beeline use the hive-jdbc standalone jar

2014-09-26 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14148982#comment-14148982
 ] 

Vaibhav Gumashta commented on HIVE-8200:


[~damien.carol] I'm happy if someone takes this over from me: 
https://issues.apache.org/jira/browse/HIVE-8270. I won't be able to get to it 
in the next few days. 
Meanwhile, beeline will complain about class loading issues like the following 
(in a secure cluster): 
{code}
Connecting to 
jdbc:hive2://ip-172-31-36-90.ec2.internal:1/;principal=hive/_h...@example.com
14/09/25 05:55:04 INFO jdbc.Utils: Supplied authorities: 
ip-172-31-36-90.ec2.internal:1
Could not load shims in class 
org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge23
{code}

I think it is better to revert this (which is an optimization anyway) if we 
don't plan to fix the uber jar soon.

 Make beeline use the hive-jdbc standalone jar
 -

 Key: HIVE-8200
 URL: https://issues.apache.org/jira/browse/HIVE-8200
 Project: Hive
  Issue Type: Bug
  Components: CLI, HiveServer2
Affects Versions: 0.14.0
Reporter: Deepesh Khandelwal
Assignee: Deepesh Khandelwal
 Fix For: 0.14.0

 Attachments: HIVE-8200.1.patch


 Hiveserver2 JDBC client beeline currently generously includes all the jars 
 under $HIVE_HOME/lib in its invocation. With the fix from HIVE-8129 it should 
 only need a few. This will be a good validation of the hive-jdbc standalone 
 jar.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-8264) Math UDFs in Reducer-with-vectorization fail with ArrayIndexOutOfBoundsException

2014-09-26 Thread Thiruvel Thirumoolan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiruvel Thirumoolan resolved HIVE-8264.

Resolution: Duplicate

 Math UDFs in Reducer-with-vectorization fail with 
 ArrayIndexOutOfBoundsException
 

 Key: HIVE-8264
 URL: https://issues.apache.org/jira/browse/HIVE-8264
 Project: Hive
  Issue Type: Bug
  Components: Tez, UDF, Vectorization
Affects Versions: 0.14.0
 Environment: Hive trunk - as of today
 Tez - 0.5.0
 Hadoop - 2.5
Reporter: Thiruvel Thirumoolan
  Labels: mathfunction, tez, vectorization

 Following queries are representative of the exceptions we are seeing with 
 trunk. These queries pass if vectorization is disabled (or if limit is 
 removed, which means no reducer).
 select name, log2(0) from (select name from mytable limit 1) t;
 select name, rand() from (select name from mytable limit 1) t;
 Similar patterns occur with other Math UDFs.
 Exception:
 ], TaskAttempt 3 failed, info=[Error: Failure while running 
 task:java.lang.RuntimeException: java.lang.RuntimeException: 
 org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
 processing vector batch (tag=0)
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:177)
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:142)
   at 
 org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:324)
   at 
 org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:180)
   at 
 org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:172)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1637)
   at 
 org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:172)
   at 
 org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:167)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:722)
 Caused by: java.lang.RuntimeException: 
 org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
 processing vector batch (tag=0)
   at 
 org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:254)
   at 
 org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:167)
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:154)
   ... 14 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
 Error while processing vector batch (tag=0)
   at 
 org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.processVectors(ReduceRecordSource.java:360)
   at 
 org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:242)
   ... 16 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error evaluating 
 null
   at 
 org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.processOp(VectorSelectOperator.java:127)
   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:801)
   at 
 org.apache.hadoop.hive.ql.exec.vector.VectorLimitOperator.processOp(VectorLimitOperator.java:47)
   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:801)
   at 
 org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.processOp(VectorSelectOperator.java:139)
   at 
 org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.processVectors(ReduceRecordSource.java:347)
   ... 17 more
 Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
   at 
 org.apache.hadoop.hive.ql.exec.vector.expressions.ConstantVectorExpression.evaluateLong(ConstantVectorExpression.java:102)
   at 
 org.apache.hadoop.hive.ql.exec.vector.expressions.ConstantVectorExpression.evaluate(ConstantVectorExpression.java:150)
   at 
 org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.processOp(VectorSelectOperator.java:125)
   ... 22 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8267) Exposing hbase cell latest timestamp through hbase columns mappings to hive columns.

2014-09-26 Thread Muhammad Ehsan ul Haque (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Ehsan ul Haque updated HIVE-8267:
--
Issue Type: New Feature  (was: Bug)

 Exposing hbase cell latest timestamp through hbase columns mappings to hive 
 columns.
 

 Key: HIVE-8267
 URL: https://issues.apache.org/jira/browse/HIVE-8267
 Project: Hive
  Issue Type: New Feature
  Components: HBase Handler
Affects Versions: 0.14.0
Reporter: Muhammad Ehsan ul Haque
Priority: Minor
 Fix For: 0.14.0

 Attachments: HIVE-8267.0.patch


 Previous attempts HIVE-2781 (not accepted), HIVE-2828 (broken and proposed 
 with restricted feature).
 The feature is to have hbase cell latest timestamp accessible in hive query, 
 by mapping the cell timestamp with a hive column, using mapping format like 
 {code}:timestamp:cf:[optional qualifier or qualifier prefix]{code}
 The hive create table statement would be like
 h4. For mapping a cell latest timestamp.
 {code}
 CREATE TABLE hive_hbase_table (key STRING, col1 STRING, col1_ts BIGINT)
 STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
 WITH SERDEPROPERTIES (hbase.columns.mapping = :key,cf:qualifier, 
 :timestamp:cf:qualifier)
 TBLPROPERTIES (hbase.table.name = hbase_table);
 {code}
 h4. For mapping a column family latest timestamp.
 {code}
 CREATE TABLE hive_hbase_table (key STRING, valuemap MAPSTRING, STRING, 
 timestampmap MAPSTRING, BIGINT)
 STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
 WITH SERDEPROPERTIES (hbase.columns.mapping = :key,cf:,:timestamp:cf:)
 TBLPROPERTIES (hbase.table.name = hbase_table);
 {code}
 h4. Providing default cell value
 {code}
 CREATE TABLE hive_hbase_table(key int, value string, value_timestamp bigint)
 STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
 WITH SERDEPROPERTIES (hbase.columns.mapping = cf:qualifier, 
 :timestamp:cf:qualifier,
   hbase.put.default.cell.value = default value)
 TBLPROPERTIES (hbase.table.name = hbase_table);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HIVE-8246) HiveServer2 in http-kerberos mode is restrictive on client usernames

2014-09-26 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta reopened HIVE-8246:


Reopening to commit to branch-14. Just noticed the mail in the dev list.

 HiveServer2 in http-kerberos mode is restrictive on client usernames
 

 Key: HIVE-8246
 URL: https://issues.apache.org/jira/browse/HIVE-8246
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2, JDBC
Affects Versions: 0.13.0
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.14.0

 Attachments: HIVE-8246.1.patch


 Unable to use client usernames of the format:
 {code}
 username/host@REALM
 username@FOREIGN_REALM
 {code}
 The following works fine:
 {code}
 username@REALM 
 {code}
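 For context, a minimal sketch (hypothetical helper, not HiveServer2's code) of 
 splitting the three client principal shapes above into primary, optional instance, 
 and realm, which is the form the server needs to accept:
 {code}
 // Hypothetical helper, not the HS2 implementation: a Kerberos principal is
 // primary[/instance][@REALM]; only the primary part identifies the user.
 public class PrincipalSplitSketch {
   public static void main(String[] args) {
     String[] samples = {"username/host@REALM", "username@FOREIGN_REALM", "username@REALM"};
     for (String p : samples) {
       int at = p.indexOf('@');
       String beforeRealm = at >= 0 ? p.substring(0, at) : p;
       String realm = at >= 0 ? p.substring(at + 1) : "";
       int slash = beforeRealm.indexOf('/');
       String primary = slash >= 0 ? beforeRealm.substring(0, slash) : beforeRealm;
       String instance = slash >= 0 ? beforeRealm.substring(slash + 1) : "";
       System.out.println(p + " -> primary=" + primary + " instance=" + instance + " realm=" + realm);
     }
   }
 }
 {code}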



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-6683) Beeline does not accept comments at end of line

2014-09-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149007#comment-14149007
 ] 

Hive QA commented on HIVE-6683:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12671244/HIVE-6683.1.patch

{color:green}SUCCESS:{color} +1 6353 tests passed

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/992/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/992/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-992/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12671244

 Beeline does not accept comments at end of line
 ---

 Key: HIVE-6683
 URL: https://issues.apache.org/jira/browse/HIVE-6683
 Project: Hive
  Issue Type: Bug
  Components: CLI
Affects Versions: 0.10.0
Reporter: Jeremy Beard
Assignee: Sergio Peña
 Fix For: 0.14.0

 Attachments: HIVE-6683.1.patch, HIVE-6683.1.patch


 Beeline fails to read queries where lines have comments at the end. This 
 works in the embedded Hive CLI.
 Example:
 SELECT
 1 -- this is a comment about this value
 FROM
 table;
 Error: Error while processing statement: FAILED: ParseException line 1:36 
 mismatched input 'EOF' expecting FROM near '1' in from clause 
 (state=42000,code=4)
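 A small illustrative sketch (not the Beeline fix) of why naive client-side stripping 
 of trailing {{--}} comments is not enough on its own: it also truncates string 
 literals that happen to contain two dashes.
 {code}
 // Illustrative only: naive truncation at "--" handles the simple case but
 // corrupts lines where "--" appears inside a quoted literal.
 public class CommentStripSketch {
   static String naiveStrip(String line) {
     int i = line.indexOf("--");
     return i >= 0 ? line.substring(0, i) : line;
   }
   public static void main(String[] args) {
     System.out.println(naiveStrip("1 -- this is a comment about this value")); // "1 "
     System.out.println(naiveStrip("SELECT '--not a comment' FROM t"));         // wrongly truncated
   }
 }
 {code}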



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-6090) Audit logs for HiveServer2

2014-09-26 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149014#comment-14149014
 ] 

Vaibhav Gumashta commented on HIVE-6090:


[~thiruvel] This is very useful functionality. Thanks for taking it up! Can you 
also upload the patch to Review Board?

 Audit logs for HiveServer2
 --

 Key: HIVE-6090
 URL: https://issues.apache.org/jira/browse/HIVE-6090
 Project: Hive
  Issue Type: Improvement
  Components: Diagnosability, HiveServer2
Reporter: Thiruvel Thirumoolan
Assignee: Thiruvel Thirumoolan
 Attachments: HIVE-6090.1.WIP.patch, HIVE-6090.patch


 HiveMetastore has audit logs and would like to audit all queries or requests 
 to HiveServer2 also. This will help in understanding how the APIs were used, 
 queries submitted, users etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8180) Update SparkReduceRecordHandler for processing the vectors [spark branch]

2014-09-26 Thread Chinna Rao Lalam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinna Rao Lalam updated HIVE-8180:
---
Attachment: HIVE-8180-spark.patch

 Update SparkReduceRecordHandler for processing the vectors [spark branch]
 -

 Key: HIVE-8180
 URL: https://issues.apache.org/jira/browse/HIVE-8180
 Project: Hive
  Issue Type: Bug
Reporter: Chinna Rao Lalam
Assignee: Chinna Rao Lalam
 Attachments: HIVE-8180-spark.patch


 Update SparkReduceRecordHandler for processing the vectors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8180) Update SparkReduceRecordHandler for processing the vectors [spark branch]

2014-09-26 Thread Chinna Rao Lalam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinna Rao Lalam updated HIVE-8180:
---
Status: Patch Available  (was: Open)

SparkReduceRecordHandler is updated with a processVectors(Iterator values, byte 
tag) method for processing vectors.
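Purely illustrative, with hypothetical types (this is not the attached patch): the 
general shape of a reduce-side handler that drains an iterator of pre-built row 
batches for a given tag.
{code}
import java.util.Iterator;

// Stand-ins for Hive's vectorized types; the names here are made up for the sketch.
class RowBatch { int size; }
interface BatchOperator { void process(RowBatch batch, byte tag); }

public class ReduceVectorLoopSketch {
  static void processVectors(Iterator<RowBatch> values, byte tag, BatchOperator reducer) {
    while (values.hasNext()) {
      RowBatch batch = values.next();
      if (batch.size == 0) {
        continue; // nothing to forward for an empty batch
      }
      reducer.process(batch, tag); // hand the whole batch to the reduce operator tree
    }
  }
}
{code}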

 Update SparkReduceRecordHandler for processing the vectors [spark branch]
 -

 Key: HIVE-8180
 URL: https://issues.apache.org/jira/browse/HIVE-8180
 Project: Hive
  Issue Type: Bug
Reporter: Chinna Rao Lalam
Assignee: Chinna Rao Lalam
 Attachments: HIVE-8180-spark.patch


 Update SparkReduceRecordHandler for processing the vectors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8171) Tez and Vectorized Reduce doesn't create scratch columns

2014-09-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149067#comment-14149067
 ] 

Hive QA commented on HIVE-8171:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12671252/HIVE-8171.04.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 6355 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_schemeAuthority2
org.apache.hadoop.hive.ql.parse.TestParse.testParse_union
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/993/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/993/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-993/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12671252

 Tez and Vectorized Reduce doesn't create scratch columns
 

 Key: HIVE-8171
 URL: https://issues.apache.org/jira/browse/HIVE-8171
 Project: Hive
  Issue Type: Bug
Reporter: Matt McCline
Assignee: Matt McCline
Priority: Critical
 Fix For: 0.14.0

 Attachments: HIVE-8171.01.patch, HIVE-8171.02.patch, 
 HIVE-8171.03.patch, HIVE-8171.04.patch


 This query fails with ArrayIndexOutofBound exception in the reducer.
 {code}
 create table varchar_3 (
   field varchar(25)
 ) stored as orc;
 insert into table varchar_3 select cint from alltypesorc limit 10;
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-7389) Reduce number of metastore calls in MoveTask (when loading dynamic partitions)

2014-09-26 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HIVE-7389:
---
Attachment: HIVE-7389.2.patch

Rebasing the patch to trunk.

 Reduce number of metastore calls in MoveTask (when loading dynamic partitions)
 --

 Key: HIVE-7389
 URL: https://issues.apache.org/jira/browse/HIVE-7389
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Rajesh Balamohan
Assignee: Rajesh Balamohan
  Labels: performance
 Attachments: HIVE-7389.1.patch, HIVE-7389.2.patch, 
 local_vm_testcase.txt


 When the number of dynamic partitions to be loaded are high, the time taken 
 for 'MoveTask' is greater than the actual job in some scenarios.  It would be 
 possible to reduce overall runtime by reducing the number of calls made to 
 metastore from MoveTask operation.
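 The general idea, as a hedged sketch with a hypothetical client interface (not 
 Hive's metastore API): collapse the per-partition metastore round trips issued 
 while loading dynamic partitions into a single batched call.
 {code}
 import java.util.ArrayList;
 import java.util.List;
 
 // Hypothetical interface for illustration only; the method names are made up.
 interface MetastoreClient {
   void addPartition(String table, String partSpec);          // one RPC per partition
   void addPartitions(String table, List<String> partSpecs);  // single batched RPC
 }
 
 public class MoveTaskBatchingSketch {
   static void loadDynamicPartitions(MetastoreClient client, String table, List<String> specs) {
     // Before: N round trips, one per dynamic partition produced by the job.
     // for (String spec : specs) { client.addPartition(table, spec); }
 
     // After: one round trip covering every partition.
     client.addPartitions(table, new ArrayList<>(specs));
   }
 }
 {code}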



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8180) Update SparkReduceRecordHandler for processing the vectors [spark branch]

2014-09-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149130#comment-14149130
 ] 

Hive QA commented on HIVE-8180:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12671435/HIVE-8180-spark.patch

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 6512 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample_islocalmode_hook
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_cast_constant
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_cast_constant
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_fs_default_name2
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/163/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/163/console
Test logs: 
http://ec2-54-176-176-199.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-163/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12671435

 Update SparkReduceRecordHandler for processing the vectors [spark branch]
 -

 Key: HIVE-8180
 URL: https://issues.apache.org/jira/browse/HIVE-8180
 Project: Hive
  Issue Type: Bug
Reporter: Chinna Rao Lalam
Assignee: Chinna Rao Lalam
 Attachments: HIVE-8180-spark.patch


 Update SparkReduceRecordHandler for processing the vectors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8231) Error when insert into empty table with ACID

2014-09-26 Thread Damien Carol (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149134#comment-14149134
 ] 

Damien Carol commented on HIVE-8231:


*seems to only affect the ORC format*

More investigation on the latest trunk (26/09/2014):
{noformat}
0: jdbc:hive2://nc-h04:1/casino drop table if exists foo6;
No rows affected (0.121 seconds)
0: jdbc:hive2://nc-h04:1/casino create table foo6 (id int);
No rows affected (0.08 seconds)
0: jdbc:hive2://nc-h04:1/casino insert into table foo6 VALUES(1);
No rows affected (2.823 seconds)
0: jdbc:hive2://nc-h04:1/casino select * from foo6;
+--+--+
| foo6.id  |
+--+--+
| 1|
+--+--+
1 row selected (0.079 seconds)
0: jdbc:hive2://nc-h04:1/casino
0: jdbc:hive2://nc-h04:1/casino
0: jdbc:hive2://nc-h04:1/casino drop table if exists foo7;
No rows affected (0.127 seconds)
0: jdbc:hive2://nc-h04:1/casino create table foo7 (id int) STORED AS ORC;
No rows affected (0.059 seconds)
0: jdbc:hive2://nc-h04:1/casino insert into table foo7 VALUES(1);
No rows affected (1.707 seconds)
0: jdbc:hive2://nc-h04:1/casino select * from foo7;
+--+--+
| foo7.id  |
+--+--+
+--+--+
No rows selected (0.084 seconds)
{noformat}

 Error when insert into empty table with ACID
 

 Key: HIVE-8231
 URL: https://issues.apache.org/jira/browse/HIVE-8231
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Damien Carol

 Steps to show the bug :
 1. create table 
 {code}
 create table encaissement_1b_64m like encaissement_1b;
 {code}
 2. check table 
 {code}
 desc encaissement_1b_64m;
 dfs -ls hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m;
 {code}
 everything is ok:
 {noformat}
 0: jdbc:hive2://nc-h04:1/casino desc encaissement_1b_64m;
   
 +++--+--+
 |  col_name  | data_type  | comment  |
 +++--+--+
 | id | int|  |
 | idmagasin  | int|  |
 | zibzin | string |  |
 | cheque | int|  |
 | montant| double |  |
 | date   | timestamp  |  |
 | col_6  | string |  |
 | col_7  | string |  |
 | col_8  | string |  |
 +++--+--+
 9 rows selected (0.158 seconds)
 0: jdbc:hive2://nc-h04:1/casino dfs -ls 
 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/;
 +-+--+
 | DFS Output  |
 +-+--+
 +-+--+
 No rows selected (0.01 seconds)
 {noformat}
 3. Insert values into the new table
 {noformat}
 insert into table encaissement_1b_64m VALUES (1, 1, 
 '8909', 1, 12.5, '12/05/2014', '','','');
 {noformat}
 4. Check
 {noformat}
 0: jdbc:hive2://nc-h04:1/casino select id from encaissement_1b_64m;
 +-+--+
 | id  |
 +-+--+
 +-+--+
 No rows selected (0.091 seconds)
 {noformat}
 There is already a problem: I don't see the inserted row.
 5. When I'm checking HDFS directory, I see {{delta_421_421}} folder
 {noformat}
 0: jdbc:hive2://nc-h04:1/casino dfs -ls 
 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/;
 +-+--+
 | DFS 
 Output  |
 +-+--+
 | Found 1 items   
 |
 | drwxr-xr-x   - hduser supergroup  0 2014-09-23 12:17 
 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/delta_421_421
   |
 +-+--+
 2 rows selected (0.014 seconds)
 {noformat}
 6. Doing a major compaction solves the bug
 {noformat}
 0: jdbc:hive2://nc-h04:1/casino alter table encaissement_1b_64m compact 
 'major';
 No rows affected (0.046 seconds)
 0: jdbc:hive2://nc-h04:1/casino dfs -ls 
 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/;
 ++--+
 |   

[jira] [Commented] (HIVE-8203) ACID operations result in NPE when run through HS2

2014-09-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149133#comment-14149133
 ] 

Hive QA commented on HIVE-8203:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12671253/HIVE-8203.2.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 6353 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.parse.TestParse.testParse_union
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/994/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/994/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-994/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12671253

 ACID operations result in NPE when run through HS2
 --

 Key: HIVE-8203
 URL: https://issues.apache.org/jira/browse/HIVE-8203
 Project: Hive
  Issue Type: Bug
  Components: Transactions
Affects Versions: 0.14.0
Reporter: Alan Gates
Assignee: Alan Gates
Priority: Critical
 Fix For: 0.14.0

 Attachments: HIVE-8203.2.patch, HIVE-8203.patch


 When accessing Hive via HS2, any operation requiring the DbTxnManager results 
 in an NPE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8231) Error when insert into empty table with ACID

2014-09-26 Thread Damien Carol (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damien Carol updated HIVE-8231:
---
Fix Version/s: 0.14.0

 Error when insert into empty table with ACID
 

 Key: HIVE-8231
 URL: https://issues.apache.org/jira/browse/HIVE-8231
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Damien Carol
 Fix For: 0.14.0


 Steps to show the bug :
 1. create table 
 {code}
 create table encaissement_1b_64m like encaissement_1b;
 {code}
 2. check table 
 {code}
 desc encaissement_1b_64m;
 dfs -ls hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m;
 {code}
 everything is ok:
 {noformat}
 0: jdbc:hive2://nc-h04:1/casino desc encaissement_1b_64m;
   
 +++--+--+
 |  col_name  | data_type  | comment  |
 +++--+--+
 | id | int|  |
 | idmagasin  | int|  |
 | zibzin | string |  |
 | cheque | int|  |
 | montant| double |  |
 | date   | timestamp  |  |
 | col_6  | string |  |
 | col_7  | string |  |
 | col_8  | string |  |
 +++--+--+
 9 rows selected (0.158 seconds)
 0: jdbc:hive2://nc-h04:1/casino dfs -ls 
 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/;
 +-+--+
 | DFS Output  |
 +-+--+
 +-+--+
 No rows selected (0.01 seconds)
 {noformat}
 3. Insert values into the new table
 {noformat}
 insert into table encaissement_1b_64m VALUES (1, 1, 
 '8909', 1, 12.5, '12/05/2014', '','','');
 {noformat}
 4. Check
 {noformat}
 0: jdbc:hive2://nc-h04:1/casino select id from encaissement_1b_64m;
 +-+--+
 | id  |
 +-+--+
 +-+--+
 No rows selected (0.091 seconds)
 {noformat}
 There is already a problem: I don't see the inserted row.
 5. When I'm checking HDFS directory, I see {{delta_421_421}} folder
 {noformat}
 0: jdbc:hive2://nc-h04:1/casino dfs -ls 
 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/;
 +-+--+
 | DFS 
 Output  |
 +-+--+
 | Found 1 items   
 |
 | drwxr-xr-x   - hduser supergroup  0 2014-09-23 12:17 
 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/delta_421_421
   |
 +-+--+
 2 rows selected (0.014 seconds)
 {noformat}
 6. Doing a major compaction solves the bug
 {noformat}
 0: jdbc:hive2://nc-h04:1/casino alter table encaissement_1b_64m compact 
 'major';
 No rows affected (0.046 seconds)
 0: jdbc:hive2://nc-h04:1/casino dfs -ls 
 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/;
 ++--+
 | DFS Output  
|
 ++--+
 | Found 1 items   
|
 | drwxr-xr-x   - hduser supergroup  0 2014-09-23 12:21 
 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/base_421  
 |
 ++--+
 2 rows selected (0.02 seconds)
 {noformat}
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8231) Error when insert into empty table with ACID

2014-09-26 Thread Damien Carol (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149142#comment-14149142
 ] 

Damien Carol commented on HIVE-8231:


I see in the log:
{noformat}
2014-09-26 15:32:18,483 ERROR [Thread-8]: compactor.Initiator 
(Initiator.java:run(111)) - Caught exception while trying to determine if we 
should compact testsimon.values__tmp__table__11.  Marking clean to avoid 
repeated failures, java.lang.NullPointerException
at 
org.apache.hadoop.hive.ql.txn.compactor.Initiator.run(Initiator.java:88)

2014-09-26 15:32:18,484 ERROR [Thread-8]: txn.CompactionTxnHandler 
(CompactionTxnHandler.java:markCleaned(355)) - Unable to delete compaction 
record
{noformat}

I wonder if there is a problem with the tables that store values.

Every values table stays in the database:
{noformat}
0: jdbc:hive2://nc-h04:1/casino show tables;
+---+--+
| tab_name  |
+---+--+
| classification_compte |
| dim_hotesse   |
...
| societe   |
| testsimon__dim_lieu_sorted_dls__  |
| values__tmp__table__10|
| values__tmp__table__11|
| values__tmp__table__12|
| values__tmp__table__2 |
| values__tmp__table__3 |
| values__tmp__table__4 |
| values__tmp__table__5 |
| values__tmp__table__6 |
| values__tmp__table__7 |
| values__tmp__table__8 |
| values__tmp__table__9 |
+---+--+
47 rows selected (0.061 seconds)
0: jdbc:hive2://nc-h04:1/casino
{noformat}

 Error when insert into empty table with ACID
 

 Key: HIVE-8231
 URL: https://issues.apache.org/jira/browse/HIVE-8231
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Damien Carol
 Fix For: 0.14.0


 Steps to show the bug :
 1. create table 
 {code}
 create table encaissement_1b_64m like encaissement_1b;
 {code}
 2. check table 
 {code}
 desc encaissement_1b_64m;
 dfs -ls hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m;
 {code}
 everything is ok:
 {noformat}
 0: jdbc:hive2://nc-h04:1/casino desc encaissement_1b_64m;
   
 +++--+--+
 |  col_name  | data_type  | comment  |
 +++--+--+
 | id | int|  |
 | idmagasin  | int|  |
 | zibzin | string |  |
 | cheque | int|  |
 | montant| double |  |
 | date   | timestamp  |  |
 | col_6  | string |  |
 | col_7  | string |  |
 | col_8  | string |  |
 +++--+--+
 9 rows selected (0.158 seconds)
 0: jdbc:hive2://nc-h04:1/casino dfs -ls 
 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/;
 +-+--+
 | DFS Output  |
 +-+--+
 +-+--+
 No rows selected (0.01 seconds)
 {noformat}
 3. Insert values into the new table
 {noformat}
 insert into table encaissement_1b_64m VALUES (1, 1, 
 '8909', 1, 12.5, '12/05/2014', '','','');
 {noformat}
 4. Check
 {noformat}
 0: jdbc:hive2://nc-h04:1/casino select id from encaissement_1b_64m;
 +-+--+
 | id  |
 +-+--+
 +-+--+
 No rows selected (0.091 seconds)
 {noformat}
 There is already a problem: I don't see the inserted row.
 5. When I'm checking HDFS directory, I see {{delta_421_421}} folder
 {noformat}
 0: jdbc:hive2://nc-h04:1/casino dfs -ls 
 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/;
 +-+--+
 | DFS 
 Output  |
 +-+--+
 | Found 1 items   
 |
 | drwxr-xr-x   - hduser supergroup  0 2014-09-23 12:17 
 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/delta_421_421
   |
 +-+--+
 2 rows selected (0.014 seconds)
 {noformat}
 6. Doing a major compaction 

[jira] [Assigned] (HIVE-8231) Error when insert into empty table with ACID

2014-09-26 Thread Damien Carol (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damien Carol reassigned HIVE-8231:
--

Assignee: Damien Carol

 Error when insert into empty table with ACID
 

 Key: HIVE-8231
 URL: https://issues.apache.org/jira/browse/HIVE-8231
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Damien Carol
Assignee: Damien Carol
 Fix For: 0.14.0


 Steps to show the bug :
 1. create table 
 {code}
 create table encaissement_1b_64m like encaissement_1b;
 {code}
 2. check table 
 {code}
 desc encaissement_1b_64m;
 dfs -ls hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m;
 {code}
 everything is ok:
 {noformat}
 0: jdbc:hive2://nc-h04:1/casino desc encaissement_1b_64m;
   
 +++--+--+
 |  col_name  | data_type  | comment  |
 +++--+--+
 | id | int|  |
 | idmagasin  | int|  |
 | zibzin | string |  |
 | cheque | int|  |
 | montant| double |  |
 | date   | timestamp  |  |
 | col_6  | string |  |
 | col_7  | string |  |
 | col_8  | string |  |
 +++--+--+
 9 rows selected (0.158 seconds)
 0: jdbc:hive2://nc-h04:1/casino dfs -ls 
 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/;
 +-+--+
 | DFS Output  |
 +-+--+
 +-+--+
 No rows selected (0.01 seconds)
 {noformat}
 3. Insert values into the new table
 {noformat}
 insert into table encaissement_1b_64m VALUES (1, 1, 
 '8909', 1, 12.5, '12/05/2014', '','','');
 {noformat}
 4. Check
 {noformat}
 0: jdbc:hive2://nc-h04:1/casino select id from encaissement_1b_64m;
 +-+--+
 | id  |
 +-+--+
 +-+--+
 No rows selected (0.091 seconds)
 {noformat}
 There is already a problem: I don't see the inserted row.
 5. When I'm checking HDFS directory, I see {{delta_421_421}} folder
 {noformat}
 0: jdbc:hive2://nc-h04:1/casino dfs -ls 
 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/;
 +-+--+
 | DFS 
 Output  |
 +-+--+
 | Found 1 items   
 |
 | drwxr-xr-x   - hduser supergroup  0 2014-09-23 12:17 
 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/delta_421_421
   |
 +-+--+
 2 rows selected (0.014 seconds)
 {noformat}
 6. Doing a major compaction solves the bug
 {noformat}
 0: jdbc:hive2://nc-h04:1/casino alter table encaissement_1b_64m compact 
 'major';
 No rows affected (0.046 seconds)
 0: jdbc:hive2://nc-h04:1/casino dfs -ls 
 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/;
 ++--+
 | DFS Output  
|
 ++--+
 | Found 1 items   
|
 | drwxr-xr-x   - hduser supergroup  0 2014-09-23 12:21 
 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/base_421  
 |
 ++--+
 2 rows selected (0.02 seconds)
 {noformat}
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8186) Self join may fail if one side has VCs and other doesn't

2014-09-26 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149152#comment-14149152
 ] 

Xuefu Zhang commented on HIVE-8186:
---

What's VC, by the way? Venture Capital :)

 Self join may fail if one side has VCs and other doesn't
 

 Key: HIVE-8186
 URL: https://issues.apache.org/jira/browse/HIVE-8186
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-8186.1.patch.txt


 See comments. This also fails on trunk, although not on original join_vc query



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8186) Self join may fail if one side has VCs and other doesn't

2014-09-26 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149169#comment-14149169
 ] 

Xuefu Zhang commented on HIVE-8186:
---

I think I got it: Virtual Column. But I really hope we publish a DICT for these 
ABBRs; before we use any, we should put it in the DICT first.

 Self join may fail if one side has VCs and other doesn't
 

 Key: HIVE-8186
 URL: https://issues.apache.org/jira/browse/HIVE-8186
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-8186.1.patch.txt


 See comments. This also fails on trunk, although not on original join_vc query



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8180) Update SparkReduceRecordHandler for processing the vectors [spark branch]

2014-09-26 Thread Chinna Rao Lalam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinna Rao Lalam updated HIVE-8180:
---
Status: Open  (was: Patch Available)

Need to update the .out files for MR and Tez for the updated vector_cast_constant.q 
file.

 Update SparkReduceRecordHandler for processing the vectors [spark branch]
 -

 Key: HIVE-8180
 URL: https://issues.apache.org/jira/browse/HIVE-8180
 Project: Hive
  Issue Type: Bug
Reporter: Chinna Rao Lalam
Assignee: Chinna Rao Lalam
 Attachments: HIVE-8180-spark.patch


 Update SparkReduceRecordHandler for processing the vectors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8231) Error when insert into empty table with ACID

2014-09-26 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149206#comment-14149206
 ] 

Alan Gates commented on HIVE-8231:
--

The values tables should vanish as soon as you close the session that did the 
insert, since they are temp tables.

 Error when insert into empty table with ACID
 

 Key: HIVE-8231
 URL: https://issues.apache.org/jira/browse/HIVE-8231
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Damien Carol
Assignee: Damien Carol
 Fix For: 0.14.0


 Steps to show the bug :
 1. create table 
 {code}
 create table encaissement_1b_64m like encaissement_1b;
 {code}
 2. check table 
 {code}
 desc encaissement_1b_64m;
 dfs -ls hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m;
 {code}
 everything is ok:
 {noformat}
 0: jdbc:hive2://nc-h04:1/casino desc encaissement_1b_64m;
   
 +++--+--+
 |  col_name  | data_type  | comment  |
 +++--+--+
 | id | int|  |
 | idmagasin  | int|  |
 | zibzin | string |  |
 | cheque | int|  |
 | montant| double |  |
 | date   | timestamp  |  |
 | col_6  | string |  |
 | col_7  | string |  |
 | col_8  | string |  |
 +++--+--+
 9 rows selected (0.158 seconds)
 0: jdbc:hive2://nc-h04:1/casino dfs -ls 
 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/;
 +-+--+
 | DFS Output  |
 +-+--+
 +-+--+
 No rows selected (0.01 seconds)
 {noformat}
 3. Insert values into the new table
 {noformat}
 insert into table encaissement_1b_64m VALUES (1, 1, 
 '8909', 1, 12.5, '12/05/2014', '','','');
 {noformat}
 4. Check
 {noformat}
 0: jdbc:hive2://nc-h04:1/casino select id from encaissement_1b_64m;
 +-+--+
 | id  |
 +-+--+
 +-+--+
 No rows selected (0.091 seconds)
 {noformat}
 There is already a problem here: I don't see the inserted row.
 5. When I'm checking HDFS directory, I see {{delta_421_421}} folder
 {noformat}
 0: jdbc:hive2://nc-h04:1/casino dfs -ls 
 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/;
 +-+--+
 | DFS 
 Output  |
 +-+--+
 | Found 1 items   
 |
 | drwxr-xr-x   - hduser supergroup  0 2014-09-23 12:17 
 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/delta_421_421
   |
 +-+--+
 2 rows selected (0.014 seconds)
 {noformat}
 6. Doing a major compaction solves the bug
 {noformat}
 0: jdbc:hive2://nc-h04:1/casino alter table encaissement_1b_64m compact 
 'major';
 No rows affected (0.046 seconds)
 0: jdbc:hive2://nc-h04:1/casino dfs -ls 
 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/;
 ++--+
 | DFS Output  
|
 ++--+
 | Found 1 items   
|
 | drwxr-xr-x   - hduser supergroup  0 2014-09-23 12:21 
 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/base_421  
 |
 ++--+
 2 rows selected (0.02 seconds)
 {noformat}
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8231) Error when insert into empty table with ACID

2014-09-26 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149215#comment-14149215
 ] 

Alan Gates commented on HIVE-8231:
--

[~damien.carol], based on your output above it looks like you are reading the 
data via HS2 (since it says jdbc in your command line).  There was a definite 
bug in using ACID via HS2.  I'm curious whether you still see this after HIVE-8203.  
I'll have that one checked in as soon as the 24 hours after Eugene's +1 have passed.

 Error when insert into empty table with ACID
 

 Key: HIVE-8231
 URL: https://issues.apache.org/jira/browse/HIVE-8231
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Damien Carol
Assignee: Damien Carol
 Fix For: 0.14.0


 Steps to show the bug :
 1. create table 
 {code}
 create table encaissement_1b_64m like encaissement_1b;
 {code}
 2. check table 
 {code}
 desc encaissement_1b_64m;
 dfs -ls hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m;
 {code}
 everything is ok:
 {noformat}
 0: jdbc:hive2://nc-h04:1/casino desc encaissement_1b_64m;
   
 +++--+--+
 |  col_name  | data_type  | comment  |
 +++--+--+
 | id | int|  |
 | idmagasin  | int|  |
 | zibzin | string |  |
 | cheque | int|  |
 | montant| double |  |
 | date   | timestamp  |  |
 | col_6  | string |  |
 | col_7  | string |  |
 | col_8  | string |  |
 +++--+--+
 9 rows selected (0.158 seconds)
 0: jdbc:hive2://nc-h04:1/casino dfs -ls 
 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/;
 +-+--+
 | DFS Output  |
 +-+--+
 +-+--+
 No rows selected (0.01 seconds)
 {noformat}
 3. Insert values into the new table
 {noformat}
 insert into table encaissement_1b_64m VALUES (1, 1, 
 '8909', 1, 12.5, '12/05/2014', '','','');
 {noformat}
 4. Check
 {noformat}
 0: jdbc:hive2://nc-h04:1/casino select id from encaissement_1b_64m;
 +-+--+
 | id  |
 +-+--+
 +-+--+
 No rows selected (0.091 seconds)
 {noformat}
 There is already a problem here: I don't see the inserted row.
 5. When I'm checking HDFS directory, I see {{delta_421_421}} folder
 {noformat}
 0: jdbc:hive2://nc-h04:1/casino dfs -ls 
 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/;
 +-+--+
 | DFS 
 Output  |
 +-+--+
 | Found 1 items   
 |
 | drwxr-xr-x   - hduser supergroup  0 2014-09-23 12:17 
 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/delta_421_421
   |
 +-+--+
 2 rows selected (0.014 seconds)
 {noformat}
 6. Doing a major compaction solves the bug
 {noformat}
 0: jdbc:hive2://nc-h04:1/casino alter table encaissement_1b_64m compact 
 'major';
 No rows affected (0.046 seconds)
 0: jdbc:hive2://nc-h04:1/casino dfs -ls 
 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/;
 ++--+
 | DFS Output  
|
 ++--+
 | Found 1 items   
|
 | drwxr-xr-x   - hduser supergroup  0 2014-09-23 12:21 
 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/base_421  
 |
 ++--+
 2 rows selected (0.02 seconds)

[jira] [Commented] (HIVE-8182) beeline fails when executing multiple-line queries with trailing spaces

2014-09-26 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149231#comment-14149231
 ] 

Brock Noland commented on HIVE-8182:


+1

 beeline fails when executing multiple-line queries with trailing spaces
 ---

 Key: HIVE-8182
 URL: https://issues.apache.org/jira/browse/HIVE-8182
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0, 0.13.1
Reporter: Yongzhi Chen
Assignee: Sergio Peña
 Fix For: 0.14.0

 Attachments: HIVE-8181.1.patch, HIVE-8182.1.patch


 As the title indicates, when executing a multi-line query with trailing spaces, 
 beeline reports a syntax error: 
 Error: Error while compiling statement: FAILED: ParseException line 1:76 
 extraneous input ';' expecting EOF near 'EOF' (state=42000,code=4)
 If the query is put on a single line, beeline executes it successfully.
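 A minimal sketch of the failure mode, assuming a hypothetical table src; note the 
 trailing spaces at the end of the first query line:
 {code}
 -- Typed in beeline across two lines, with trailing spaces after the first line:
 SELECT count(*) FROM src   
 WHERE key IS NOT NULL;
 -- The same statement entered on a single line runs without the ParseException.
 {code}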



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8266) create function using resource statement compilation should include resource URI entity

2014-09-26 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149234#comment-14149234
 ] 

Brock Noland commented on HIVE-8266:


+1 pending tests

 create function using resource statement compilation should include 
 resource URI entity
 -

 Key: HIVE-8266
 URL: https://issues.apache.org/jira/browse/HIVE-8266
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 0.13.1
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar
 Attachments: HIVE-8266.2.patch


 The compiler adds the function name and db name as write entities for a create 
 function using resource statement. We should also include the resource URI 
 path in the write entity.
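 For context, this is the statement shape in question; the class name and jar URI 
 below are placeholders, not the actual resources:
 {code}
 -- The function name and database are already recorded as write entities;
 -- the proposal is to also record the jar URI below as an entity.
 CREATE FUNCTION mydb.my_udf AS 'com.example.MyUDF'
 USING JAR 'hdfs:///tmp/udfs/my_udf.jar';
 {code}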



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8258) Compactor cleaners can be starved on a busy table or partition.

2014-09-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149243#comment-14149243
 ] 

Hive QA commented on HIVE-8258:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12671308/HIVE-8258.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 6355 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_schemeAuthority
org.apache.hadoop.hive.ql.parse.TestParse.testParse_union
org.apache.hadoop.hive.ql.txn.compactor.TestCleaner.partitionNotBlockedBySubsequentLock
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/995/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/995/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-995/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12671308

 Compactor cleaners can be starved on a busy table or partition.
 ---

 Key: HIVE-8258
 URL: https://issues.apache.org/jira/browse/HIVE-8258
 Project: Hive
  Issue Type: Bug
  Components: Transactions
Affects Versions: 0.13.1
Reporter: Alan Gates
Assignee: Alan Gates
Priority: Critical
 Attachments: HIVE-8258.patch


 Currently the cleaning thread in the compactor does not run on a table or 
 partition while any locks are held on this partition.  This leaves it open to 
 starvation in the case of a busy table or partition.  It only needs to wait 
 until all locks on the table/partition at the time of the compaction have 
 expired.  Any jobs initiated after that (and thus any locks obtained) will be 
 for the new versions of the files.
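 As a rough illustration of how the scenario can be observed (table and partition 
 below are hypothetical): request a compaction on a busy partition, keep queries 
 flowing against it, and watch the pending compactions and locks:
 {code}
 -- Request a major compaction on a hypothetical busy partition.
 ALTER TABLE busy_table PARTITION (ds='2014-09-26') COMPACT 'major';
 -- While new queries keep taking locks on the partition, inspect progress:
 SHOW COMPACTIONS;
 SHOW LOCKS busy_table;
 {code}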



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8240) VectorColumnAssignFactory throws Incompatible Bytes vector column and primitive category VARCHAR

2014-09-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149602#comment-14149602
 ] 

Hive QA commented on HIVE-8240:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12671326/HIVE-8240.04.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 6359 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_schemeAuthority
org.apache.hadoop.hive.ql.parse.TestParse.testParse_union
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/997/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/997/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-997/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12671326

 VectorColumnAssignFactory throws Incompatible Bytes vector column and 
 primitive category VARCHAR
 --

 Key: HIVE-8240
 URL: https://issues.apache.org/jira/browse/HIVE-8240
 Project: Hive
  Issue Type: Bug
  Components: Vectorization
Reporter: Matt McCline
Assignee: Matt McCline
Priority: Critical
 Attachments: HIVE-8240.01.patch, HIVE-8240.02.patch, 
 HIVE-8240.04.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-6090) Audit logs for HiveServer2

2014-09-26 Thread Adam Faris (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149598#comment-14149598
 ] 

Adam Faris commented on HIVE-6090:
--

[~thiruvel] Thanks for the patch update. FYI, instructions for using 
Review Board are here: 
https://cwiki.apache.org/confluence/display/Hive/Review+Board

 Audit logs for HiveServer2
 --

 Key: HIVE-6090
 URL: https://issues.apache.org/jira/browse/HIVE-6090
 Project: Hive
  Issue Type: Improvement
  Components: Diagnosability, HiveServer2
Reporter: Thiruvel Thirumoolan
Assignee: Thiruvel Thirumoolan
 Attachments: HIVE-6090.1.WIP.patch, HIVE-6090.patch


 HiveMetastore has audit logs, and we would like to audit all queries and requests 
 to HiveServer2 as well. This will help in understanding how the APIs were used, 
 which queries were submitted, which users ran them, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8256) Add SORT_QUERY_RESULTS for test that doesn't guarantee order #2

2014-09-26 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149611#comment-14149611
 ] 

Xuefu Zhang commented on HIVE-8256:
---

[~csun], since your patch is for trunk, could you please rename it so that the test 
can be run against trunk instead?

 Add SORT_QUERY_RESULTS for test that doesn't guarantee order #2
 ---

 Key: HIVE-8256
 URL: https://issues.apache.org/jira/browse/HIVE-8256
 Project: Hive
  Issue Type: Test
Reporter: Chao
Assignee: Chao
Priority: Minor
 Attachments: HIVE-8256.1-spark.patch


 Following HIVE-8035, we need to further add {{SORT_QUERY_RESULTS}} to a few 
 more tests that don't guarantee output order.
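 For reference, the directive is just a comment at the top of the .q file; a minimal 
 sketch follows (the query and table are illustrative, not one of the tests touched 
 by this patch):
 {code}
 -- SORT_QUERY_RESULTS

 -- With the directive above, the test driver sorts the query output before
 -- comparing it against the golden file, so row order no longer matters.
 SELECT key, value FROM src GROUP BY key, value;
 {code}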



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8256) Add SORT_QUERY_RESULTS for test that doesn't guarantee order #2

2014-09-26 Thread Chao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao updated HIVE-8256:
---
Attachment: HIVE-8256.patch

Sorry, I forgot this patch is for the trunk.

 Add SORT_QUERY_RESULTS for test that doesn't guarantee order #2
 ---

 Key: HIVE-8256
 URL: https://issues.apache.org/jira/browse/HIVE-8256
 Project: Hive
  Issue Type: Test
Reporter: Chao
Assignee: Chao
Priority: Minor
 Attachments: HIVE-8256.1-spark.patch, HIVE-8256.patch


 Following HIVE-8035, we need to further add {{SORT_QUERY_RESULTS}} to a few 
 more tests that don't guarantee output order.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8191) Update and delete on tables with non Acid output formats gives runtime error

2014-09-26 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-8191:
-
Priority: Blocker  (was: Critical)

 Update and delete on tables with non Acid output formats gives runtime error
 

 Key: HIVE-8191
 URL: https://issues.apache.org/jira/browse/HIVE-8191
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.14.0
Reporter: Alan Gates
Assignee: Alan Gates
Priority: Blocker
 Attachments: HIVE-8191.2.patch, HIVE-8191.patch


 {code}
 create table not_an_acid_table(a int, b varchar(128));
 insert into table not_an_acid_table select cint, cast(cstring1 as 
 varchar(128)) from alltypesorc where cint is not null order by cint limit 10;
 delete from not_an_acid_table where b = '0ruyd6Y50JpdGRf6HqD';
 {code}
 This generates a runtime error.  It should get a compile error instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8203) ACID operations result in NPE when run through HS2

2014-09-26 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-8203:
-
Priority: Blocker  (was: Critical)

 ACID operations result in NPE when run through HS2
 --

 Key: HIVE-8203
 URL: https://issues.apache.org/jira/browse/HIVE-8203
 Project: Hive
  Issue Type: Bug
  Components: Transactions
Affects Versions: 0.14.0
Reporter: Alan Gates
Assignee: Alan Gates
Priority: Blocker
 Fix For: 0.14.0

 Attachments: HIVE-8203.2.patch, HIVE-8203.patch


 When accessing Hive via HS2, any operation requiring the DbTxnManager results 
 in an NPE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8191) Update and delete on tables with non Acid output formats gives runtime error

2014-09-26 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149625#comment-14149625
 ] 

Alan Gates commented on HIVE-8191:
--

[~vikram.dixit] I'd like to get this into 0.14 as it improves user experience 
and is a simple patch.  It will be ready once HIVE-8203 goes in later this 
afternoon.

 Update and delete on tables with non Acid output formats gives runtime error
 

 Key: HIVE-8191
 URL: https://issues.apache.org/jira/browse/HIVE-8191
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.14.0
Reporter: Alan Gates
Assignee: Alan Gates
Priority: Blocker
 Attachments: HIVE-8191.2.patch, HIVE-8191.patch


 {code}
 create table not_an_acid_table(a int, b varchar(128));
 insert into table not_an_acid_table select cint, cast(cstring1 as 
 varchar(128)) from alltypesorc where cint is not null order by cint limit 10;
 delete from not_an_acid_table where b = '0ruyd6Y50JpdGRf6HqD';
 {code}
 This generates a runtime error.  It should get a compile error instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8203) ACID operations result in NPE when run through HS2

2014-09-26 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149626#comment-14149626
 ] 

Alan Gates commented on HIVE-8203:
--

[~vikram.dixit] This is important to get into 0.14 as it causes NPEs for anyone 
coming in via HiveServer2.

 ACID operations result in NPE when run through HS2
 --

 Key: HIVE-8203
 URL: https://issues.apache.org/jira/browse/HIVE-8203
 Project: Hive
  Issue Type: Bug
  Components: Transactions
Affects Versions: 0.14.0
Reporter: Alan Gates
Assignee: Alan Gates
Priority: Blocker
 Fix For: 0.14.0

 Attachments: HIVE-8203.2.patch, HIVE-8203.patch


 When accessing Hive via HS2, any operation requiring the DbTxnManager results 
 in an NPE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8189) A select statement with a subquery is failing with HBaseSerde

2014-09-26 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-8189:
---
   Resolution: Fixed
Fix Version/s: 0.14.0
   Status: Resolved  (was: Patch Available)

Thank you Yongzhi! I have committed this to trunk.

 A select statement with a subquery is failing with HBaseSerde
 -

 Key: HIVE-8189
 URL: https://issues.apache.org/jira/browse/HIVE-8189
 Project: Hive
  Issue Type: Bug
  Components: HBase Handler
Affects Versions: 0.12.0, 0.13.1
Reporter: Yongzhi Chen
Assignee: Yongzhi Chen
 Fix For: 0.14.0

 Attachments: HIVE-8189.1.patch, hbase_ppd_join.q


 The Hive tables in the query are HBase tables, and the subquery is a join 
 statement.
 With
 set hive.optimize.ppd=true;
 and
 set hive.auto.convert.join=false;
 the query does not return data, 
 while hive.optimize.ppd=true and hive.auto.convert.join=true returns values. 
 See the attached query file. 
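 The attached hbase_ppd_join.q carries the actual repro; purely as an illustration of 
 the shape described above (table names below are hypothetical, not from the report), 
 the failing combination looks like this:
 {code}
 set hive.optimize.ppd=true;
 set hive.auto.convert.join=false;
 -- hbase_t1 and hbase_t2 stand in for the HBase-backed tables;
 -- the outer filter is the predicate that should be pushed down.
 SELECT sub.key, sub.value
 FROM (
   SELECT t1.key, t2.value
   FROM hbase_t1 t1 JOIN hbase_t2 t2 ON t1.key = t2.key
 ) sub
 WHERE sub.key > 100;
 {code}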



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8072) TesParse_union is failing on trunk

2014-09-26 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-8072:
-
Assignee: Navis

 TesParse_union is failing on trunk
 --

 Key: HIVE-8072
 URL: https://issues.apache.org/jira/browse/HIVE-8072
 Project: Hive
  Issue Type: Task
  Components: Tests
Affects Versions: 0.14.0
Reporter: Ashutosh Chauhan
Assignee: Navis
 Attachments: HIVE-8072.1.patch.txt, HIVE-8072.2.patch, HIVE-8072.patch


 Needs golden file update



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8257) Accumulo introduces old hadoop-client dependency

2014-09-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149742#comment-14149742
 ] 

Hive QA commented on HIVE-8257:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12671310/HIVE-8257.1.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 6352 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.parse.TestParse.testParse_union
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/998/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/998/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-998/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12671310

 Accumulo introduces old hadoop-client dependency
 

 Key: HIVE-8257
 URL: https://issues.apache.org/jira/browse/HIVE-8257
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Reporter: Josh Elser
Assignee: Josh Elser
Priority: Critical
 Fix For: 0.14.0

 Attachments: HIVE-8257.1.patch


 It was brought to my attention that Accumulo is transitively bringing in some 
 artifacts with the wrong version of Hadoop.
 Accumulo-1.6.0 sets the Hadoop version at 2.2.0 and uses hadoop-client to get 
 its necessary dependencies. Because there is no dependency with the correct 
 version in Hive, this introduces hadoop-2.2.0 dependencies.
 A solution is to make sure that hadoop-client is set with the correct 
 {{hadoop-20S.version}} or {{hadoop-23.version}}.
 Snippet from {{mvn dependency:tree -Phadoop-2}}
 {noformat}
 [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ 
 hive-accumulo-handler ---
 [INFO] org.apache.hive:hive-accumulo-handler:jar:0.14.0-SNAPSHOT
 [INFO] +- commons-lang:commons-lang:jar:2.6:compile
 [INFO] +- commons-logging:commons-logging:jar:1.1.3:compile
 [INFO] +- org.apache.accumulo:accumulo-core:jar:1.6.0:compile
 ...
 [INFO] |  +- org.apache.hadoop:hadoop-client:jar:2.2.0:compile
 [INFO] |  |  +- org.apache.hadoop:hadoop-hdfs:jar:2.4.0:compile
 ...
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8072) TesParse_union is failing on trunk

2014-09-26 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-8072:
-
Fix Version/s: 0.14.0

 TesParse_union is failing on trunk
 --

 Key: HIVE-8072
 URL: https://issues.apache.org/jira/browse/HIVE-8072
 Project: Hive
  Issue Type: Task
  Components: Tests
Affects Versions: 0.14.0
Reporter: Ashutosh Chauhan
Assignee: Navis
 Fix For: 0.14.0

 Attachments: HIVE-8072.1.patch.txt, HIVE-8072.2.patch, HIVE-8072.patch


 Needs golden file update



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8072) TesParse_union is failing on trunk

2014-09-26 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149752#comment-14149752
 ] 

Gunther Hagleitner commented on HIVE-8072:
--

[~vikram.dixit] I'd like to commit this to .14 too. It helps get the unit tests 
clean and also fixes the fact that the table object is treated as immutable (which 
it isn't).

 TesParse_union is failing on trunk
 --

 Key: HIVE-8072
 URL: https://issues.apache.org/jira/browse/HIVE-8072
 Project: Hive
  Issue Type: Task
  Components: Tests
Affects Versions: 0.14.0
Reporter: Ashutosh Chauhan
Assignee: Navis
 Fix For: 0.14.0

 Attachments: HIVE-8072.1.patch.txt, HIVE-8072.2.patch, HIVE-8072.patch


 Needs golden file update



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8072) TesParse_union is failing on trunk

2014-09-26 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149755#comment-14149755
 ] 

Gunther Hagleitner commented on HIVE-8072:
--

Committed to trunk. Will resolve once [~vikram.dixit] weighs in.

 TesParse_union is failing on trunk
 --

 Key: HIVE-8072
 URL: https://issues.apache.org/jira/browse/HIVE-8072
 Project: Hive
  Issue Type: Task
  Components: Tests
Affects Versions: 0.14.0
Reporter: Ashutosh Chauhan
Assignee: Navis
 Fix For: 0.14.0

 Attachments: HIVE-8072.1.patch.txt, HIVE-8072.2.patch, HIVE-8072.patch


 Needs golden file update



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8180) Update SparkReduceRecordHandler for processing the vectors [spark branch]

2014-09-26 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-8180:
--
Labels: Spark-M1  (was: )

 Update SparkReduceRecordHandler for processing the vectors [spark branch]
 -

 Key: HIVE-8180
 URL: https://issues.apache.org/jira/browse/HIVE-8180
 Project: Hive
  Issue Type: Bug
  Components: Spark
Reporter: Chinna Rao Lalam
Assignee: Chinna Rao Lalam
  Labels: Spark-M1
 Attachments: HIVE-8180-spark.patch


 Update SparkReduceRecordHandler for processing the vectors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8180) Update SparkReduceRecordHandler for processing the vectors [spark branch]

2014-09-26 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-8180:
--
Component/s: Spark

 Update SparkReduceRecordHandler for processing the vectors [spark branch]
 -

 Key: HIVE-8180
 URL: https://issues.apache.org/jira/browse/HIVE-8180
 Project: Hive
  Issue Type: Bug
  Components: Spark
Reporter: Chinna Rao Lalam
Assignee: Chinna Rao Lalam
  Labels: Spark-M1
 Attachments: HIVE-8180-spark.patch


 Update SparkReduceRecordHandler for processing the vectors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-7156) Group-By operator stat-annotation only uses distinct approx to generate rollups

2014-09-26 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149785#comment-14149785
 ] 

Gopal V commented on HIVE-7156:
---

LGTM - +1, tests pending.

36910400 = (4833087637230 / (256*1024*1024.0)) * 1955

{code}
STAGE PLANS:
  Stage: Stage-1
Map Reduce
  Map Operator Tree:
  TableScan
alias: lineitem
Statistics: Num rows: 589709 Data size: 4833087637230 Basic 
stats: COMPLETE Column stats: COMPLETE
Select Operator
  expressions: l_shipdate (type: string)
  outputColumnNames: l_shipdate
  Statistics: Num rows: 589709 Data size: 4833087637230 Basic 
stats: COMPLETE Column stats: COMPLETE
  Group By Operator
keys: l_shipdate (type: string)
mode: hash
outputColumnNames: _col0
Statistics: Num rows: 36910400 Data size: 3469577600 Basic 
stats: COMPLETE Column stats: COMPLETE
Reduce Output Operator
  key expressions: _col0 (type: string)
  sort order: +
  Map-reduce partition columns: _col0 (type: string)
  Statistics: Num rows: 36910400 Data size: 3469577600 Basic 
stats: COMPLETE Column stats: COMPLETE
  Execution mode: vectorized
  Reduce Operator Tree:
Group By Operator
  keys: KEY._col0 (type: string)
  mode: mergepartial
  outputColumnNames: _col0
  Statistics: Num rows: 1955 Data size: 183770 Basic stats: COMPLETE 
Column stats: COMPLETE
  Select Operator
expressions: _col0 (type: string)
outputColumnNames: _col0
Statistics: Num rows: 1955 Data size: 183770 Basic stats: COMPLETE 
Column stats: COMPLETE
File Output Operator
  compressed: false
  Statistics: Num rows: 1955 Data size: 183770 Basic stats: 
COMPLETE Column stats: COMPLETE
  table:
  input format: org.apache.hadoop.mapred.TextInputFormat
  output format: 
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
  serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
{code}

 Group-By operator stat-annotation only uses distinct approx to generate 
 rollups
 ---

 Key: HIVE-7156
 URL: https://issues.apache.org/jira/browse/HIVE-7156
 Project: Hive
  Issue Type: Sub-task
Affects Versions: 0.14.0
Reporter: Gopal V
Assignee: Prasanth J
Priority: Blocker
 Fix For: 0.14.0

 Attachments: HIVE-7156.1.patch, HIVE-7156.2.patch, HIVE-7156.3.patch, 
 HIVE-7156.4.patch, HIVE-7156.5.patch, HIVE-7156.6.patch, HIVE-7156.7.patch, 
 HIVE-7156.8.patch, HIVE-7156.8.patch, hive-debug.log.bz2


 The stats annotation for a group-by only annotates the reduce-side row-count 
 with the distinct values.
 The map-side gets the row-count as the rows output instead of distinct * 
 parallelism, while the reducer side gets the correct parallelism.
 {code}
 hive explain select distinct L_SHIPDATE from lineitem;
   Vertices:
 Map 1 
 Map Operator Tree:
 TableScan
   alias: lineitem
   Statistics: Num rows: 589709 Data size: 4745677733354 
 Basic stats: COMPLETE Column stats: COMPLETE
   Select Operator
 expressions: l_shipdate (type: string)
 outputColumnNames: l_shipdate
 Statistics: Num rows: 589709 Data size: 4745677733354 
 Basic stats: COMPLETE Column stats: COMPLETE
 Group By Operator
   keys: l_shipdate (type: string)
   mode: hash
   outputColumnNames: _col0
   Statistics: Num rows: 589709 Data size: 
 563999032646 Basic stats: COMPLETE Column stats: COMPLETE
   Reduce Output Operator
 key expressions: _col0 (type: string)
 sort order: +
 Map-reduce partition columns: _col0 (type: string)
 Statistics: Num rows: 589709 Data size: 
 563999032646 Basic stats: COMPLETE Column stats: COMPLETE
 Execution mode: vectorized
 Reducer 2 
 Reduce Operator Tree:
   Group By Operator
 keys: KEY._col0 (type: string)
 mode: mergepartial
 outputColumnNames: _col0
 Statistics: Num rows: 1955 Data size: 183770 Basic stats: 
 COMPLETE Column stats: COMPLETE
 Select Operator
   expressions: _col0 (type: string)
   outputColumnNames: 

[jira] [Updated] (HIVE-8204) Dynamic partition pruning fails with IndexOutOfBoundsException

2014-09-26 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-8204:
-
Status: Open  (was: Patch Available)

 Dynamic partition pruning fails with IndexOutOfBoundsException
 --

 Key: HIVE-8204
 URL: https://issues.apache.org/jira/browse/HIVE-8204
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Prasanth J
Assignee: Gunther Hagleitner
 Attachments: HIVE-8204.1.patch, HIVE-8204.2.patch


 Dynamic partition pruning fails with IndexOutOfBounds exception when 
 dimension table is partitioned and fact table is not.
 Steps to reproduce:
 1) Partition date_dim table from tpcds on d_date_sk
 2) Fact table is store_sales which is not partitioned
 3) Run the following
 {code}
 set hive.stats.fetch.column.stats=true;
 set hive.tez.dynamic.partition.pruning=true;
 explain select d_date 
 from store_sales, date_dim 
 where 
 store_sales.ss_sold_date_sk = date_dim.d_date_sk and 
 date_dim.d_year = 1998;
 {code}
 The stack trace is:
 {code}
 2014-09-19 19:06:16,254 ERROR ql.Driver (SessionState.java:printError(825)) - 
 FAILED: IndexOutOfBoundsException Index: 0, Size: 0
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
   at java.util.ArrayList.rangeCheck(ArrayList.java:635)
   at java.util.ArrayList.get(ArrayList.java:411)
   at 
 org.apache.hadoop.hive.ql.optimizer.RemoveDynamicPruningBySize.process(RemoveDynamicPruningBySize.java:61)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:94)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:78)
   at 
 org.apache.hadoop.hive.ql.lib.ForwardWalker.walk(ForwardWalker.java:61)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:109)
   at 
 org.apache.hadoop.hive.ql.parse.TezCompiler.runStatsDependentOptimizations(TezCompiler.java:277)
   at 
 org.apache.hadoop.hive.ql.parse.TezCompiler.optimizeOperatorPlan(TezCompiler.java:120)
   at 
 org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:97)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:9781)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:221)
   at 
 org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:74)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:221)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:407)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:303)
   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1060)
   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1130)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:997)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:987)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:246)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:198)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:408)
   at 
 org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:781)
   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8204) Dynamic partition pruning fails with IndexOutOfBoundsException

2014-09-26 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-8204:
-
Attachment: HIVE-8204.2.patch

 Dynamic partition pruning fails with IndexOutOfBoundsException
 --

 Key: HIVE-8204
 URL: https://issues.apache.org/jira/browse/HIVE-8204
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Prasanth J
Assignee: Gunther Hagleitner
 Attachments: HIVE-8204.1.patch, HIVE-8204.2.patch


 Dynamic partition pruning fails with IndexOutOfBounds exception when 
 dimension table is partitioned and fact table is not.
 Steps to reproduce:
 1) Partition date_dim table from tpcds on d_date_sk
 2) Fact table is store_sales which is not partitioned
 3) Run the following
 {code}
 set hive.stats.fetch.column.stats=true;
 set hive.tez.dynamic.partition.pruning=true;
 explain select d_date 
 from store_sales, date_dim 
 where 
 store_sales.ss_sold_date_sk = date_dim.d_date_sk and 
 date_dim.d_year = 1998;
 {code}
 The stack trace is:
 {code}
 2014-09-19 19:06:16,254 ERROR ql.Driver (SessionState.java:printError(825)) - 
 FAILED: IndexOutOfBoundsException Index: 0, Size: 0
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
   at java.util.ArrayList.rangeCheck(ArrayList.java:635)
   at java.util.ArrayList.get(ArrayList.java:411)
   at 
 org.apache.hadoop.hive.ql.optimizer.RemoveDynamicPruningBySize.process(RemoveDynamicPruningBySize.java:61)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:94)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:78)
   at 
 org.apache.hadoop.hive.ql.lib.ForwardWalker.walk(ForwardWalker.java:61)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:109)
   at 
 org.apache.hadoop.hive.ql.parse.TezCompiler.runStatsDependentOptimizations(TezCompiler.java:277)
   at 
 org.apache.hadoop.hive.ql.parse.TezCompiler.optimizeOperatorPlan(TezCompiler.java:120)
   at 
 org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:97)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:9781)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:221)
   at 
 org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:74)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:221)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:407)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:303)
   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1060)
   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1130)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:997)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:987)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:246)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:198)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:408)
   at 
 org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:781)
   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8204) Dynamic partition pruning fails with IndexOutOfBoundsException

2014-09-26 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-8204:
-
Status: Patch Available  (was: Open)

 Dynamic partition pruning fails with IndexOutOfBoundsException
 --

 Key: HIVE-8204
 URL: https://issues.apache.org/jira/browse/HIVE-8204
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Prasanth J
Assignee: Gunther Hagleitner
 Attachments: HIVE-8204.1.patch, HIVE-8204.2.patch


 Dynamic partition pruning fails with IndexOutOfBounds exception when 
 dimension table is partitioned and fact table is not.
 Steps to reproduce:
 1) Partition date_dim table from tpcds on d_date_sk
 2) Fact table is store_sales which is not partitioned
 3) Run the following
 {code}
 set hive.stats.fetch.column.stats=true;
 set hive.tez.dynamic.partition.pruning=true;
 explain select d_date 
 from store_sales, date_dim 
 where 
 store_sales.ss_sold_date_sk = date_dim.d_date_sk and 
 date_dim.d_year = 1998;
 {code}
 The stack trace is:
 {code}
 2014-09-19 19:06:16,254 ERROR ql.Driver (SessionState.java:printError(825)) - 
 FAILED: IndexOutOfBoundsException Index: 0, Size: 0
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
   at java.util.ArrayList.rangeCheck(ArrayList.java:635)
   at java.util.ArrayList.get(ArrayList.java:411)
   at 
 org.apache.hadoop.hive.ql.optimizer.RemoveDynamicPruningBySize.process(RemoveDynamicPruningBySize.java:61)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:94)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:78)
   at 
 org.apache.hadoop.hive.ql.lib.ForwardWalker.walk(ForwardWalker.java:61)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:109)
   at 
 org.apache.hadoop.hive.ql.parse.TezCompiler.runStatsDependentOptimizations(TezCompiler.java:277)
   at 
 org.apache.hadoop.hive.ql.parse.TezCompiler.optimizeOperatorPlan(TezCompiler.java:120)
   at 
 org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:97)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:9781)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:221)
   at 
 org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:74)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:221)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:407)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:303)
   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1060)
   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1130)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:997)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:987)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:246)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:198)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:408)
   at 
 org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:781)
   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8180) Update SparkReduceRecordHandler for processing the vectors [spark branch]

2014-09-26 Thread Chinna Rao Lalam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinna Rao Lalam updated HIVE-8180:
---
Attachment: HIVE-8180.1-spark.patch

 Update SparkReduceRecordHandler for processing the vectors [spark branch]
 -

 Key: HIVE-8180
 URL: https://issues.apache.org/jira/browse/HIVE-8180
 Project: Hive
  Issue Type: Bug
  Components: Spark
Reporter: Chinna Rao Lalam
Assignee: Chinna Rao Lalam
  Labels: Spark-M1
 Attachments: HIVE-8180-spark.patch, HIVE-8180.1-spark.patch


 Update SparkReduceRecordHandler for processing the vectors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8180) Update SparkReduceRecordHandler for processing the vectors [spark branch]

2014-09-26 Thread Chinna Rao Lalam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinna Rao Lalam updated HIVE-8180:
---
Status: Patch Available  (was: Open)

updated patch with .out files

 Update SparkReduceRecordHandler for processing the vectors [spark branch]
 -

 Key: HIVE-8180
 URL: https://issues.apache.org/jira/browse/HIVE-8180
 Project: Hive
  Issue Type: Bug
  Components: Spark
Reporter: Chinna Rao Lalam
Assignee: Chinna Rao Lalam
  Labels: Spark-M1
 Attachments: HIVE-8180-spark.patch, HIVE-8180.1-spark.patch


 Update SparkReduceRecordHandler for processing the vectors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-7615) Beeline should have an option for user to see the query progress

2014-09-26 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-7615:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Patch committed to trunk and 0.14 branch.
Thanks for the contribution and perseverance [~dongc]!


 Beeline should have an option for user to see the query progress
 

 Key: HIVE-7615
 URL: https://issues.apache.org/jira/browse/HIVE-7615
 Project: Hive
  Issue Type: Improvement
  Components: CLI
Reporter: Dong Chen
Assignee: Dong Chen
Priority: Blocker
 Fix For: 0.14.0

 Attachments: HIVE-7615.1.patch, HIVE-7615.2.patch, HIVE-7615.3.patch, 
 HIVE-7615.4.patch, HIVE-7615.patch, complete_logs, simple_logs


 When executing a query in Beeline, the user should have an option to see the 
 progress through the output.
 Beeline could use the API introduced in HIVE-4629 to get and display the logs 
 to the client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8236) VectorHashKeyWrapper allocates too many zero sized arrays

2014-09-26 Thread Prasanth J (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149847#comment-14149847
 ] 

Prasanth J commented on HIVE-8236:
--

LGTM, +1

 VectorHashKeyWrapper allocates too many zero sized arrays
 -

 Key: HIVE-8236
 URL: https://issues.apache.org/jira/browse/HIVE-8236
 Project: Hive
  Issue Type: Bug
  Components: Vectorization
Affects Versions: 0.14.0
Reporter: Gopal V
Assignee: Gopal V
Priority: Minor
 Attachments: HIVE-8236.1.patch


 VectorHashKeyWrappper creation allocates too many zero sized arrays and 
 thrashes the TLAB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8236) VectorHashKeyWrapper allocates too many zero sized arrays

2014-09-26 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-8236:
--
Priority: Blocker  (was: Minor)

 VectorHashKeyWrapper allocates too many zero sized arrays
 -

 Key: HIVE-8236
 URL: https://issues.apache.org/jira/browse/HIVE-8236
 Project: Hive
  Issue Type: Bug
  Components: Vectorization
Affects Versions: 0.14.0
Reporter: Gopal V
Assignee: Gopal V
Priority: Blocker
 Fix For: 0.14.0

 Attachments: HIVE-8236.1.patch


 VectorHashKeyWrappper creation allocates too many zero sized arrays and 
 thrashes the TLAB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8236) VectorHashKeyWrapper allocates too many zero sized arrays

2014-09-26 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-8236:
--
Fix Version/s: 0.14.0

 VectorHashKeyWrapper allocates too many zero sized arrays
 -

 Key: HIVE-8236
 URL: https://issues.apache.org/jira/browse/HIVE-8236
 Project: Hive
  Issue Type: Bug
  Components: Vectorization
Affects Versions: 0.14.0
Reporter: Gopal V
Assignee: Gopal V
Priority: Minor
 Fix For: 0.14.0

 Attachments: HIVE-8236.1.patch


 VectorHashKeyWrappper creation allocates too many zero sized arrays and 
 thrashes the TLAB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8236) VectorHashKeyWrapper allocates too many zero sized arrays

2014-09-26 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-8236:
--
Labels: OOM  (was: )

 VectorHashKeyWrapper allocates too many zero sized arrays
 -

 Key: HIVE-8236
 URL: https://issues.apache.org/jira/browse/HIVE-8236
 Project: Hive
  Issue Type: Bug
  Components: Vectorization
Affects Versions: 0.14.0
Reporter: Gopal V
Assignee: Gopal V
Priority: Blocker
  Labels: OOM
 Fix For: 0.14.0

 Attachments: HIVE-8236.1.patch


 VectorHashKeyWrappper creation allocates too many zero sized arrays and 
 thrashes the TLAB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-7615) Beeline should have an option for user to see the query progress

2014-09-26 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149862#comment-14149862
 ] 

Thejas M Nair commented on HIVE-7615:
-

Also documented this in the wiki page: 
https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=30758725selectedPageVersions=63selectedPageVersions=61


 Beeline should have an option for user to see the query progress
 

 Key: HIVE-7615
 URL: https://issues.apache.org/jira/browse/HIVE-7615
 Project: Hive
  Issue Type: Improvement
  Components: CLI
Reporter: Dong Chen
Assignee: Dong Chen
Priority: Blocker
 Fix For: 0.14.0

 Attachments: HIVE-7615.1.patch, HIVE-7615.2.patch, HIVE-7615.3.patch, 
 HIVE-7615.4.patch, HIVE-7615.patch, complete_logs, simple_logs


 When executing a query in Beeline, the user should have an option to see the 
 progress through the output.
 Beeline could use the API introduced in HIVE-4629 to get and display the logs 
 to the client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8261) CBO : Predicate pushdown is removed by Optiq

2014-09-26 Thread Harish Butani (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149868#comment-14149868
 ] 

Harish Butani commented on HIVE-8261:
-

This is because there is no PushFilterPastAgg rule; I just uploaded a patch for 
OPTIQ-425.
By the way, here is a simple query to reproduce this:
{code}
select syear, cnt
 from
 (select d1.d_year as syear ,count(*) as cnt
FROM   store_sales
  JOIN date_dim d1 ON store_sales.ss_sold_date_sk = d1.d_date_sk
 group by d1.d_year
  ) cs
 where cs.syear = 2000
{code}
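For illustration, pushing the filter past the aggregate amounts to treating the query above 
like the hand-rewritten form below (a sketch of the intended effect, not the optimizer's 
actual output):
{code}
-- The d_year predicate is applied before the aggregation instead of on its
-- result; since d_year is the grouping key, the two forms are equivalent.
select d1.d_year as syear, count(*) as cnt
from store_sales
join date_dim d1 on store_sales.ss_sold_date_sk = d1.d_date_sk
where d1.d_year = 2000
group by d1.d_year;
{code}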

 CBO : Predicate pushdown is removed by Optiq 
 -

 Key: HIVE-8261
 URL: https://issues.apache.org/jira/browse/HIVE-8261
 Project: Hive
  Issue Type: Bug
  Components: CBO
Affects Versions: 0.14.0, 0.13.1
Reporter: Mostafa Mokhtar
Assignee: Harish Butani
 Fix For: 0.14.0


 The plan for TPC-DS Q64 wasn't optimal; upon looking at the logical plan I 
 realized that predicate pushdown is not applied on date_dim d1.
 Interestingly, before Optiq we have the predicate pushed:
 {code}
 HiveFilterRel(condition=[=($5, $1)])
 HiveJoinRel(condition=[=($3, $6)], joinType=[inner])
   HiveProjectRel(_o__col0=[$0], _o__col1=[$2], _o__col2=[$3], 
 _o__col3=[$1])
 HiveFilterRel(condition=[=($0, 2000)])
   HiveAggregateRel(group=[{0, 1}], agg#0=[count()], agg#1=[sum($2)])
 HiveProjectRel($f0=[$4], $f1=[$5], $f2=[$2])
   HiveJoinRel(condition=[=($1, $8)], joinType=[inner])
 HiveJoinRel(condition=[=($1, $5)], joinType=[inner])
   HiveJoinRel(condition=[=($0, $3)], joinType=[inner])
 HiveProjectRel(ss_sold_date_sk=[$0], ss_item_sk=[$2], 
 ss_wholesale_cost=[$11])
   
 HiveTableScanRel(table=[[tpcds_bin_partitioned_orc_200.store_sales]])
 HiveProjectRel(d_date_sk=[$0], d_year=[$6])
   
 HiveTableScanRel(table=[[tpcds_bin_partitioned_orc_200.date_dim]])
   HiveFilterRel(condition=[AND(in($2, 'maroon', 'burnished', 
 'dim', 'steel', 'navajo', 'chocolate'), between(false, $1, 35, +(35, 10)), 
 between(false, $1, +(35, 1), +(35, 15)))])
 HiveProjectRel(i_item_sk=[$0], i_current_price=[$5], 
 i_color=[$17])
   
 HiveTableScanRel(table=[[tpcds_bin_partitioned_orc_200.item]])
 HiveProjectRel(_o__col0=[$0])
   HiveAggregateRel(group=[{0}])
 HiveProjectRel($f0=[$0])
   HiveJoinRel(condition=[AND(=($0, $2), =($1, $3))], 
 joinType=[inner])
 HiveProjectRel(cs_item_sk=[$15], 
 cs_order_number=[$17])
   
 HiveTableScanRel(table=[[tpcds_bin_partitioned_orc_200.catalog_sales]])
 HiveProjectRel(cr_item_sk=[$2], cr_order_number=[$16])
   
 HiveTableScanRel(table=[[tpcds_bin_partitioned_orc_200.catalog_returns]])
   HiveProjectRel(_o__col0=[$0], _o__col1=[$2], _o__col3=[$1])
 HiveFilterRel(condition=[=($0, +(2000, 1))])
   HiveAggregateRel(group=[{0, 1}], agg#0=[count()])
 HiveProjectRel($f0=[$4], $f1=[$5], $f2=[$2])
   HiveJoinRel(condition=[=($1, $8)], joinType=[inner])
 HiveJoinRel(condition=[=($1, $5)], joinType=[inner])
   HiveJoinRel(condition=[=($0, $3)], joinType=[inner])
 HiveProjectRel(ss_sold_date_sk=[$0], ss_item_sk=[$2], 
 ss_wholesale_cost=[$11])
   
 HiveTableScanRel(table=[[tpcds_bin_partitioned_orc_200.store_sales]])
 HiveProjectRel(d_date_sk=[$0], d_year=[$6])
   
 HiveTableScanRel(table=[[tpcds_bin_partitioned_orc_200.date_dim]])
   HiveFilterRel(condition=[AND(in($2, 'maroon', 'burnished', 
 'dim', 'steel', 'navajo', 'chocolate'), between(false, $1, 35, +(35, 10)), 
 between(false, $1, +(35, 1), +(35, 15)))])
 HiveProjectRel(i_item_sk=[$0], i_current_price=[$5], 
 i_color=[$17])
   
 HiveTableScanRel(table=[[tpcds_bin_partitioned_orc_200.item]])
 HiveProjectRel(_o__col0=[$0])
   HiveAggregateRel(group=[{0}])
 HiveProjectRel($f0=[$0])
   HiveJoinRel(condition=[AND(=($0, $2), =($1, $3))], 
 joinType=[inner])
 HiveProjectRel(cs_item_sk=[$15], 
 cs_order_number=[$17])
   
 HiveTableScanRel(table=[[tpcds_bin_partitioned_orc_200.catalog_sales]])
 HiveProjectRel(cr_item_sk=[$2], cr_order_number=[$16])
   
 HiveTableScanRel(table=[[tpcds_bin_partitioned_orc_200.catalog_returns]])
 {code}
 While after Optiq the filter on date_dim gets pulled up 

[jira] [Created] (HIVE-8271) Jackson incompatibility between hadoop-2.4 and hive-14

2014-09-26 Thread Gopal V (JIRA)
Gopal V created HIVE-8271:
-

 Summary: Jackson incompatibility between hadoop-2.4 and hive-14
 Key: HIVE-8271
 URL: https://issues.apache.org/jira/browse/HIVE-8271
 Project: Hive
  Issue Type: Bug
  Components: UDF
Affects Versions: 0.14.0
Reporter: Gopal V
Assignee: Gopal V
Priority: Blocker
 Fix For: 0.14.0


jackson-1.8 is not API compatible with jackson-1.9 (abstract classes).

{code}
threw an Error.  Shutting down now...
java.lang.AbstractMethodError: 
org.codehaus.jackson.map.AnnotationIntrospector.findSerializer(Lorg/codehaus/jackson/map/introspect/Annotated;)Ljava/lang/Object;
{code}

hadoop-common (2.4) depends on jackson-1.8 and hive-14 depends on jackson-1.9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-7971) Support alter table change/replace/add columns for existing partitions

2014-09-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149870#comment-14149870
 ] 

Hive QA commented on HIVE-7971:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12671358/HIVE-7971.3.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 6357 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.parse.TestParse.testParse_union
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/999/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/999/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-999/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12671358

 Support alter table change/replace/add columns for existing partitions
 --

 Key: HIVE-7971
 URL: https://issues.apache.org/jira/browse/HIVE-7971
 Project: Hive
  Issue Type: Bug
Reporter: Jason Dere
Assignee: Jason Dere
 Attachments: HIVE-7971.1.patch, HIVE-7971.2.patch, HIVE-7971.3.patch


 ALTER TABLE CHANGE COLUMN is allowed for tables, but not for partitions. Same 
 for add/replace columns.
 Allowing this for partitions can be useful in some cases. For example, one 
 user has tables with Hive 0.12 Decimal columns, which do not specify 
 precision/scale. To be able to properly read the decimal values from the 
 existing partitions, the column types in the partitions need to be changed to 
 decimal types with precision/scale.
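 A sketch of the kind of statement this would enable, assuming the partition-level syntax 
 mirrors the existing table-level form (table, partition, and column names are illustrative):
 {code}
 -- Works today at the table level:
 ALTER TABLE sales CHANGE COLUMN amount amount DECIMAL(10,2);
 -- The proposal is to allow the same change for an existing partition, e.g.:
 ALTER TABLE sales PARTITION (ds='2014-01-01')
   CHANGE COLUMN amount amount DECIMAL(10,2);
 {code}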



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-8272) Query with particular decimal expression causes NPE during execution initialization

2014-09-26 Thread Matt McCline (JIRA)
Matt McCline created HIVE-8272:
--

 Summary: Query with particular decimal expression causes NPE 
during execution initialization
 Key: HIVE-8272
 URL: https://issues.apache.org/jira/browse/HIVE-8272
 Project: Hive
  Issue Type: Bug
  Components: Logical Optimizer, Physical Optimizer
Reporter: Matt McCline
Priority: Critical
 Fix For: 0.14.0



Query:
{code}
select 
  cast(sum(dc)*100 as decimal(11,3)) as c1
  from somedecimaltable
  order by c1
  limit 100;
{code}

Fails during execution initialization due to *null* ExprNodeDesc.

Noticed this while trying to simplify a vectorization issue and realized it is a more general problem.

{code}
Caused by: java.lang.RuntimeException: Map operator initialization failed
at 
org.apache.hadoop.hive.ql.exec.mr.ExecMapper.configure(ExecMapper.java:154)
... 22 more
Caused by: java.lang.RuntimeException: java.lang.NullPointerException
at 
org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.initializeOp(ReduceSinkOperator.java:215)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:380)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:464)
at 
org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:420)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.initializeOp(GroupByOperator.java:427)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:380)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:464)
at 
org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:420)
at 
org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:65)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:380)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:464)
at 
org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:420)
at 
org.apache.hadoop.hive.ql.exec.TableScanOperator.initializeOp(TableScanOperator.java:193)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:380)
at 
org.apache.hadoop.hive.ql.exec.MapOperator.initializeOp(MapOperator.java:425)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:380)
at 
org.apache.hadoop.hive.ql.exec.mr.ExecMapper.configure(ExecMapper.java:133)
... 22 more
Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc.getExprString(ExprNodeGenericFuncDesc.java:154)
at 
org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc.getExprString(ExprNodeGenericFuncDesc.java:154)
at 
org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.initializeOp(ReduceSinkOperator.java:148)
... 38 more
{code}
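
As a rough illustration (simplified stand-ins, not the actual Hive classes), this is how a null child expression in a generic function descriptor surfaces as exactly this NPE once the key/value expressions are stringified during operator initialization:

{code}
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Simplified stand-ins for ExprNodeDesc / ExprNodeGenericFuncDesc (illustration only).
interface Expr {
  String getExprString();
}

class Col implements Expr {
  private final String name;
  Col(String name) { this.name = name; }
  public String getExprString() { return name; }
}

class Func implements Expr {
  private final String op;
  private final List<Expr> children;
  Func(String op, List<Expr> children) { this.op = op; this.children = children; }
  public String getExprString() {
    // Throws NullPointerException if any child is null, mirroring the recursive
    // getExprString() call in the stack trace above.
    return op + "(" + children.stream()
        .map(Expr::getExprString)
        .collect(Collectors.joining(", ")) + ")";
  }
}

public class NullChildExprDemo {
  public static void main(String[] args) {
    Expr sum = new Func("sum", Arrays.<Expr>asList(new Col("dc")));
    // One child of the outer cast expression is missing (null) after optimization.
    Expr broken = new Func("cast_as_decimal", Arrays.<Expr>asList(sum, null));
    System.out.println(broken.getExprString());
  }
}
{code}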



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8094) add LIKE keyword support for SHOW FUNCTIONS

2014-09-26 Thread peter liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

peter liu updated HIVE-8094:

Attachment: HIVE-8094.3.patch

 add LIKE keyword support for SHOW FUNCTIONS
 ---

 Key: HIVE-8094
 URL: https://issues.apache.org/jira/browse/HIVE-8094
 Project: Hive
  Issue Type: Improvement
Affects Versions: 0.14.0, 0.13.1
Reporter: peter liu
Assignee: peter liu
 Fix For: 0.14.0

 Attachments: HIVE-8094.1.patch, HIVE-8094.2.patch, HIVE-8094.3.patch


 It would be nice to add LIKE keyword support for SHOW FUNCTIONS as below, keeping the pattern syntax consistent with SHOW DATABASES and SHOW TABLES.
 bq. SHOW FUNCTIONS LIKE 'foo*';
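
 As a rough sketch of the intended matching semantics (a hypothetical helper, not the actual patch): '*' acts as a wildcard and '|' separates alternatives, the same way SHOW TABLES patterns behave.

 {code}
 import java.util.Arrays;
 import java.util.List;
 import java.util.regex.Pattern;
 import java.util.stream.Collectors;

 public class ShowFunctionsLikeSketch {

   // Convert a SHOW TABLES-style pattern ('*' wildcard, '|' alternation) into a regex.
   static Pattern toRegex(String likePattern) {
     String regex = Arrays.stream(likePattern.split("\\|"))
         .map(part -> Pattern.quote(part).replace("*", "\\E.*\\Q"))
         .collect(Collectors.joining("|"));
     return Pattern.compile(regex);
   }

   public static void main(String[] args) {
     List<String> functions = Arrays.asList("floor", "format_number", "from_unixtime", "concat");
     Pattern p = toRegex("f*");
     functions.stream()
         .filter(name -> p.matcher(name).matches())
         .forEach(System.out::println);   // floor, format_number, from_unixtime
   }
 }
 {code}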



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8180) Update SparkReduceRecordHandler for processing the vectors [spark branch]

2014-09-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149925#comment-14149925
 ] 

Hive QA commented on HIVE-8180:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12671513/HIVE-8180.1-spark.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 6512 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample_islocalmode_hook
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_fs_default_name2
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/164/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/164/console
Test logs: 
http://ec2-54-176-176-199.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-164/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12671513

 Update SparkReduceRecordHandler for processing the vectors [spark branch]
 -

 Key: HIVE-8180
 URL: https://issues.apache.org/jira/browse/HIVE-8180
 Project: Hive
  Issue Type: Bug
  Components: Spark
Reporter: Chinna Rao Lalam
Assignee: Chinna Rao Lalam
  Labels: Spark-M1
 Attachments: HIVE-8180-spark.patch, HIVE-8180.1-spark.patch


 Update SparkReduceRecordHandler for processing the vectors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8231) Error when insert into empty table with ACID

2014-09-26 Thread Damien Carol (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149938#comment-14149938
 ] 

Damien Carol commented on HIVE-8231:


bq. ... based on your output above it looks like you are reading the data via HS2.
[~alangates] Yes, I'm using HS2. I saw your comment on HIVE-8203 and made the connection. I'm waiting for the HIVE-8203 fix and will re-test on real data.
The behaviour of VALUES tables is still confusing.
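
For reference, reading through HS2 boils down to a plain JDBC query like the sketch below (a hypothetical snippet using the standard Hive JDBC driver; host, port and credentials are placeholders, not the actual cluster):

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class Hs2ReadCheck {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.hive.jdbc.HiveDriver");
    try (Connection con = DriverManager.getConnection(
             "jdbc:hive2://example-host:10000/casino", "user", "");
         Statement stmt = con.createStatement();
         ResultSet rs = stmt.executeQuery("select id from encaissement_1b_64m")) {
      int rows = 0;
      while (rs.next()) {
        rows++;
      }
      // Before the major compaction this prints 0, even though a delta_* directory exists.
      System.out.println("rows read via HS2: " + rows);
    }
  }
}
{code}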

 Error when insert into empty table with ACID
 

 Key: HIVE-8231
 URL: https://issues.apache.org/jira/browse/HIVE-8231
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Damien Carol
Assignee: Damien Carol
 Fix For: 0.14.0


 Steps to reproduce the bug:
 1. create table 
 {code}
 create table encaissement_1b_64m like encaissement_1b;
 {code}
 2. check table 
 {code}
 desc encaissement_1b_64m;
 dfs -ls hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m;
 {code}
 everything is ok:
 {noformat}
 0: jdbc:hive2://nc-h04:1/casino> desc encaissement_1b_64m;
 +------------+------------+----------+--+
 |  col_name  | data_type  | comment  |
 +------------+------------+----------+--+
 | id         | int        |          |
 | idmagasin  | int        |          |
 | zibzin     | string     |          |
 | cheque     | int        |          |
 | montant    | double     |          |
 | date       | timestamp  |          |
 | col_6      | string     |          |
 | col_7      | string     |          |
 | col_8      | string     |          |
 +------------+------------+----------+--+
 9 rows selected (0.158 seconds)
 0: jdbc:hive2://nc-h04:1/casino> dfs -ls hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/;
 +-------------+--+
 | DFS Output  |
 +-------------+--+
 +-------------+--+
 No rows selected (0.01 seconds)
 {noformat}
 3. Insert values into the new table
 {noformat}
 insert into table encaissement_1b_64m VALUES (1, 1, 
 '8909', 1, 12.5, '12/05/2014', '','','');
 {noformat}
 4. Check
 {noformat}
 0: jdbc:hive2://nc-h04:1/casino> select id from encaissement_1b_64m;
 +-----+--+
 | id  |
 +-----+--+
 +-----+--+
 No rows selected (0.091 seconds)
 {noformat}
 There is already a problem: I don't see the inserted row.
 5. When I check the HDFS directory, I see a {{delta_421_421}} folder:
 {noformat}
 0: jdbc:hive2://nc-h04:1/casino> dfs -ls hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/;
 +-------------------------------------------------------------------------------------------------------------------------------------+--+
 | DFS Output                                                                                                                          |
 +-------------------------------------------------------------------------------------------------------------------------------------+--+
 | Found 1 items                                                                                                                       |
 | drwxr-xr-x   - hduser supergroup  0 2014-09-23 12:17 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/delta_421_421  |
 +-------------------------------------------------------------------------------------------------------------------------------------+--+
 2 rows selected (0.014 seconds)
 {noformat}
 6. Doing a major compaction solves the bug
 {noformat}
 0: jdbc:hive2://nc-h04:1/casino> alter table encaissement_1b_64m compact 'major';
 No rows affected (0.046 seconds)
 0: jdbc:hive2://nc-h04:1/casino> dfs -ls hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/;
 +--------------------------------------------------------------------------------------------------------------------------------+--+
 | DFS Output                                                                                                                     |
 +--------------------------------------------------------------------------------------------------------------------------------+--+
 | Found 1 items                                                                                                                  |
 | drwxr-xr-x   - hduser supergroup  0 2014-09-23 12:21 hdfs://nc-h04/user/hive/warehouse/casino.db/encaissement_1b_64m/base_421  |
 +--------------------------------------------------------------------------------------------------------------------------------+--+
 2 rows selected (0.02 seconds)
 {noformat}
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Commented] (HIVE-8186) Self join may fail if one side has VCs and other doesn't

2014-09-26 Thread Damien Carol (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149945#comment-14149945
 ] 

Damien Carol commented on HIVE-8186:


I think the use of these abbreviations keeps the JIRA title short enough to stay in the git commit message.
But yes, it is painful.

 Self join may fail if one side has VCs and other doesn't
 

 Key: HIVE-8186
 URL: https://issues.apache.org/jira/browse/HIVE-8186
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-8186.1.patch.txt


 See comments. This also fails on trunk, although not on the original join_vc query.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8180) Update SparkReduceRecordHandler for processing the vectors [spark branch]

2014-09-26 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149942#comment-14149942
 ] 

Xuefu Zhang commented on HIVE-8180:
---

Hi [~chinnalalam], could you please provide a RB link for this? Thanks.

 Update SparkReduceRecordHandler for processing the vectors [spark branch]
 -

 Key: HIVE-8180
 URL: https://issues.apache.org/jira/browse/HIVE-8180
 Project: Hive
  Issue Type: Bug
  Components: Spark
Reporter: Chinna Rao Lalam
Assignee: Chinna Rao Lalam
  Labels: Spark-M1
 Attachments: HIVE-8180-spark.patch, HIVE-8180.1-spark.patch


 Update SparkReduceRecordHandler for processing the vectors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8271) Jackson incompatibility between hadoop-2.4 and hive-14

2014-09-26 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-8271:
--
Attachment: HIVE-8271.1.patch

 Jackson incompatibility between hadoop-2.4 and hive-14
 --

 Key: HIVE-8271
 URL: https://issues.apache.org/jira/browse/HIVE-8271
 Project: Hive
  Issue Type: Bug
  Components: UDF
Affects Versions: 0.14.0
Reporter: Gopal V
Assignee: Gopal V
Priority: Blocker
 Fix For: 0.14.0

 Attachments: HIVE-8271.1.patch


 jackson-1.8 is not API compatible with jackson-1.9 (abstract classes).
 {code}
 threw an Error.  Shutting down now...
 java.lang.AbstractMethodError: 
 org.codehaus.jackson.map.AnnotationIntrospector.findSerializer(Lorg/codehaus/jackson/map/introspect/Annotated;)Ljava/lang/Object;
 {code}
 hadoop-common (2.4) depends on jackson-1.8 and hive-14 depends on jackson-1.9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8271) Jackson incompatibility between hadoop-2.4 and hive-14

2014-09-26 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-8271:
--
Status: Patch Available  (was: Open)

 Jackson incompatibility between hadoop-2.4 and hive-14
 --

 Key: HIVE-8271
 URL: https://issues.apache.org/jira/browse/HIVE-8271
 Project: Hive
  Issue Type: Bug
  Components: UDF
Affects Versions: 0.14.0
Reporter: Gopal V
Assignee: Gopal V
Priority: Blocker
 Fix For: 0.14.0

 Attachments: HIVE-8271.1.patch


 jackson-1.8 is not API compatible with jackson-1.9 (abstract classes).
 {code}
 threw an Error.  Shutting down now...
 java.lang.AbstractMethodError: 
 org.codehaus.jackson.map.AnnotationIntrospector.findSerializer(Lorg/codehaus/jackson/map/introspect/Annotated;)Ljava/lang/Object;
 {code}
 hadoop-common (2.4) depends on jackson-1.8 and hive-14 depends on jackson-1.9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8072) TestParse_union is failing on trunk

2014-09-26 Thread Vikram Dixit K (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149985#comment-14149985
 ] 

Vikram Dixit K commented on HIVE-8072:
--

+1 for 0.14

 TestParse_union is failing on trunk
 --

 Key: HIVE-8072
 URL: https://issues.apache.org/jira/browse/HIVE-8072
 Project: Hive
  Issue Type: Task
  Components: Tests
Affects Versions: 0.14.0
Reporter: Ashutosh Chauhan
Assignee: Navis
 Fix For: 0.14.0

 Attachments: HIVE-8072.1.patch.txt, HIVE-8072.2.patch, HIVE-8072.patch


 Needs golden file update



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8265) Build failure on hadoop-1

2014-09-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149987#comment-14149987
 ] 

Hive QA commented on HIVE-8265:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12671364/HIVE-8265.1.patch.txt

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 6357 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_schemeAuthority2
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1000/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1000/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1000/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12671364

 Build failure on hadoop-1 
 --

 Key: HIVE-8265
 URL: https://issues.apache.org/jira/browse/HIVE-8265
 Project: Hive
  Issue Type: Task
  Components: Tests
Reporter: Navis
Assignee: Navis
Priority: Blocker
 Attachments: HIVE-8265.1.patch.txt


 no pre-commit-tests
 Fails from CustomPartitionVertex and TestHive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

