[jira] [Commented] (HIVE-9907) insert into table values() when UTF-8 character is not correct

2015-03-11 Thread Fanhong Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14357085#comment-14357085
 ] 

Fanhong Li commented on HIVE-9907:
--

I found that, with the following debug code added in OrcRecordUpdater.insert():

 DEBUG orc.OrcRecordUpdater (OrcRecordUpdater.java:insert(331)) -
org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct bs =
    (org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct) row;

List ls = bs.getFieldsAsList();
for (int i = 0; i < ls.size(); i++) {
    LOG.debug("lfh insert ls " + i + " = " + ls.get(i));
}

the debug output shows the value is already garbled:

 lfh insert ls 0 = -�_2
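For reference, the snippet below shows the kind of mojibake produced when UTF-8 bytes are decoded with a single-byte charset. It is only an illustration of that general failure mode; whether this exact mismatch happens somewhere in the insert path is an assumption, not a confirmed diagnosis.

{code}
import java.nio.charset.StandardCharsets;

public class MojibakeDemo {
    public static void main(String[] args) {
        String original = "中文_2";
        byte[] utf8 = original.getBytes(StandardCharsets.UTF_8);
        // Decoding UTF-8 bytes with a single-byte charset produces garbage
        // similar to the log line above.
        System.out.println(new String(utf8, StandardCharsets.ISO_8859_1));
        // Decoding with UTF-8 round-trips correctly.
        System.out.println(new String(utf8, StandardCharsets.UTF_8));
    }
}
{code}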

 insert into table values()   when UTF-8 character is not correct
 

 Key: HIVE-9907
 URL: https://issues.apache.org/jira/browse/HIVE-9907
 Project: Hive
  Issue Type: Bug
  Components: CLI, Clients, JDBC
Affects Versions: 0.14.0, 0.13.1, 1.0.0
 Environment: centos 6   LANG=zh_CN.UTF-8
 hadoop 2.6
 hive 1.1.0
Reporter: Fanhong Li
Priority: Critical

 insert into table test_acid partition(pt='pt_2')
 values( 2, '中文_2' , 'city_2' )
 ;
 hive> select *
  from test_acid 
  ;
 OK
 2 -�_2city_2  pt_2
 Time taken: 0.237 seconds, Fetched: 1 row(s)
 hive> 
 CREATE TABLE test_acid(id INT, 
 name STRING, 
 city STRING) 
 PARTITIONED BY (pt STRING)
 clustered by (id) into 1 buckets
 stored as ORCFILE
 TBLPROPERTIES('transactional'='true')
 ;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9914) Post success comments on Jira from Jenkins metastore upgrades scripts

2015-03-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14357138#comment-14357138
 ] 

Hive QA commented on HIVE-9914:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12703935/HIVE-9914.1.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 7 tests executed
*Failed tests:*
{noformat}
Test failed: mysql/upgrade-0.14.0-to-1.1.0.mysql.sql
Test failed: mysql/upgrade-1.1.0-to-1.2.0.mysql.sql
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/HIVE-HMS-TESTING/29/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/HIVE-HMS-TESTING/29/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/HIVE-HMS-TESTING-29/

Messages:
{noformat}
LXC mysql found.
Preparing mysql container...
Container prepared.
Calling /tmp/hive/testutils/metastore/dbs/mysql/prepare.sh ...
Server prepared.
Executing sql test: mysql/hive-schema-0.10.0.mysql.sql
Executing sql test: mysql/upgrade-0.10.0-to-0.11.0.mysql.sql
Executing sql test: mysql/upgrade-0.11.0-to-0.12.0.mysql.sql
Executing sql test: mysql/upgrade-0.12.0-to-0.13.0.mysql.sql
Executing sql test: mysql/upgrade-0.13.0-to-0.14.0.mysql.sql
Executing sql test: mysql/upgrade-0.14.0-to-1.1.0.mysql.sql
Test failed: mysql/upgrade-0.14.0-to-1.1.0.mysql.sql
Executing sql test: mysql/upgrade-1.1.0-to-1.2.0.mysql.sql
Test failed: mysql/upgrade-1.1.0-to-1.2.0.mysql.sql
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12703935 - HIVE-HMS-TESTING

 Post success comments on Jira from Jenkins metastore upgrades scripts
 -

 Key: HIVE-9914
 URL: https://issues.apache.org/jira/browse/HIVE-9914
 Project: Hive
  Issue Type: Improvement
Reporter: Sergio Peña
Assignee: Sergio Peña
 Attachments: HIVE-9914.1.patch


 Currently, the HMS upgrade testing posts failure comments on Jira only. We 
 need to post success comments as well so that users know that their upgrade 
 changes are working.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9924) Add SORT_QUERY_RESULTS to union12.q

2015-03-11 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14357009#comment-14357009
 ] 

Rui Li commented on HIVE-9924:
--

Thanks Xuefu for taking care of this. I realized HIVE-9569 wasn't merged to 
trunk. So maybe we can just fix this on spark branch. What do you think?

 Add SORT_QUERY_RESULTS to union12.q
 ---

 Key: HIVE-9924
 URL: https://issues.apache.org/jira/browse/HIVE-9924
 Project: Hive
  Issue Type: Test
Reporter: Rui Li
Assignee: Rui Li
Priority: Minor
 Attachments: HIVE-9924.1-spark.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9914) Post success comments on Jira from Jenkins metastore upgrades scripts

2015-03-11 Thread Hive QA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hive QA updated HIVE-9914:
--
Attachment: (was: HIVE-9914.1.patch)

 Post success comments on Jira from Jenkins metastore upgrades scripts
 -

 Key: HIVE-9914
 URL: https://issues.apache.org/jira/browse/HIVE-9914
 Project: Hive
  Issue Type: Improvement
Reporter: Sergio Peña
Assignee: Sergio Peña

 Currently, the HMS upgrade testing posts failure comments on Jira only. We 
 need to post success comments as well so that users know that their upgrade 
 changes are working.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9914) Post success comments on Jira from Jenkins metastore upgrades scripts

2015-03-11 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-9914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-9914:
--
Attachment: HIVE-9914.1.patch

 Post success comments on Jira from Jenkins metastore upgrades scripts
 -

 Key: HIVE-9914
 URL: https://issues.apache.org/jira/browse/HIVE-9914
 Project: Hive
  Issue Type: Improvement
Reporter: Sergio Peña
Assignee: Sergio Peña
 Attachments: HIVE-9914.1.patch


 Currently, the HMS upgrade testing posts failure comments on Jira only. We 
 need to post success comments as well so that users know that their upgrade 
 changes are working.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9914) Post success comments on Jira from Jenkins metastore upgrades scripts

2015-03-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14357184#comment-14357184
 ] 

Hive QA commented on HIVE-9914:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12703914/HIVE-9914.2.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 7762 tests executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchCommit_Json
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3001/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3001/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3001/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12703914 - PreCommit-HIVE-TRUNK-Build

 Post success comments on Jira from Jenkins metastore upgrades scripts
 -

 Key: HIVE-9914
 URL: https://issues.apache.org/jira/browse/HIVE-9914
 Project: Hive
  Issue Type: Improvement
Reporter: Sergio Peña
Assignee: Sergio Peña
 Attachments: HIVE-9914.1.patch


 Currently, the HMS upgrade testing posts failure comments on Jira only. We 
 need to post success comments as well so that users know that their upgrade 
 changes are working.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9914) Post success comments on Jira from Jenkins metastore upgrades scripts

2015-03-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14357131#comment-14357131
 ] 

Hive QA commented on HIVE-9914:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12703914/HIVE-9914.2.patch

{color:green}SUCCESS:{color} +1 8 tests passed

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/HIVE-HMS-TESTING/25/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/HIVE-HMS-TESTING/25/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/HIVE-HMS-TESTING-25/

Messages:
{noformat}
LXC mysql found.
Preparing mysql container...
Container prepared.
Calling /tmp/hive/testutils/metastore/dbs/mysql/prepare.sh ...
Server prepared.
Executing sql test: mysql/hive-schema-0.10.0.mysql.sql
Executing sql test: mysql/upgrade-0.10.0-to-0.11.0.mysql.sql
Executing sql test: mysql/upgrade-0.11.0-to-0.12.0.mysql.sql
Executing sql test: mysql/upgrade-0.12.0-to-0.13.0.mysql.sql
Executing sql test: mysql/upgrade-0.13.0-to-0.14.0.mysql.sql
Executing sql test: mysql/upgrade-0.14.0-to-1.1.0.mysql.sql
Executing sql test: mysql/upgrade-1.1.0-to-1.2.0.mysql.sql
Executing sql test: mysql/upgrade-1.2.0-to-1.3.0.mysql.sql
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12703914 - HIVE-HMS-TESTING

 Post success comments on Jira from Jenkins metastore upgrades scripts
 -

 Key: HIVE-9914
 URL: https://issues.apache.org/jira/browse/HIVE-9914
 Project: Hive
  Issue Type: Improvement
Reporter: Sergio Peña
Assignee: Sergio Peña
 Attachments: HIVE-9914.2.patch


 Currently, the HMS upgrade testing posts failure comments on Jira only. We 
 need to post success comments as well so that users know that their upgrade 
 changes are working.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9924) Add SORT_QUERY_RESULTS to union12.q

2015-03-11 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14357035#comment-14357035
 ] 

Xuefu Zhang commented on HIVE-9924:
---

Yes. Let's fix for Spark branch first.

 Add SORT_QUERY_RESULTS to union12.q
 ---

 Key: HIVE-9924
 URL: https://issues.apache.org/jira/browse/HIVE-9924
 Project: Hive
  Issue Type: Test
Reporter: Rui Li
Assignee: Rui Li
Priority: Minor
 Attachments: HIVE-9924.1-spark.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9658) Reduce parquet memory use by bypassing java primitive objects on ETypeConverter

2015-03-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14357593#comment-14357593
 ] 

Sergio Peña commented on HIVE-9658:
---

This patch can be applied to 'parquet', but not to 'trunk'.

[~brocknoland] Does 'parquet' need another merge from 'trunk'? [~csun] did it 
before, but this is still failing.

 Reduce parquet memory use by bypassing java primitive objects on 
 ETypeConverter
 ---

 Key: HIVE-9658
 URL: https://issues.apache.org/jira/browse/HIVE-9658
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergio Peña
Assignee: Sergio Peña
 Attachments: HIVE-9658.1.patch, HIVE-9658.2.patch, HIVE-9658.3.patch


 The ETypeConverter class passes Writable objects to the collection converters 
 so they can be read later by the map/reduce functions. These objects are all 
 wrapped in a single ArrayWritable object.
 We can save some memory by returning the Java primitive objects instead, in 
 order to avoid the extra memory allocation. The only writable object needed by 
 map/reduce is ArrayWritable. If we create another writable class to store 
 plain objects (Object), then we can stop using all primitive writables.
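 A minimal sketch of the idea described above: a container that still implements 
 Writable for the map/reduce side but carries plain Java objects, so primitive 
 values no longer need Writable wrappers. The class name and the choice to leave 
 wire serialization unsupported are assumptions for illustration, not the actual patch.

 {code}
 import java.io.DataInput;
 import java.io.DataOutput;
 import org.apache.hadoop.io.Writable;

 // Illustrative only: holds plain Objects so primitive values need no Writable wrappers.
 public class ObjectArrayWritable implements Writable {
   private final Object[] values;

   public ObjectArrayWritable(Object[] values) { this.values = values; }

   public Object get(int i) { return values[i]; }

   // In this sketch the container is only handed around in memory by the record
   // reader, so on-the-wire serialization is deliberately unsupported.
   @Override
   public void write(DataOutput out) {
     throw new UnsupportedOperationException("not serialized in this sketch");
   }

   @Override
   public void readFields(DataInput in) {
     throw new UnsupportedOperationException("not deserialized in this sketch");
   }
 }
 {code}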



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9857) Create Factorial UDF

2015-03-11 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9857:
--
Attachment: HIVE-9857.3.patch

patch #3 - removed the dependency on commons-math3 because no other Hive classes 
use it and the factorial() implementation is simple.
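
For context, a plain-Java sketch of the kind of simple implementation the comment refers to (not necessarily the exact code in the patch); anything above 20! overflows a signed 64-bit bigint, so the sketch returns null there.

{code}
// Illustrative only, not necessarily the HIVE-9857 patch code.
public class FactorialSketch {
    public static Long factorial(int a) {
        if (a < 0 || a > 20) {
            return null; // 21! no longer fits in a signed 64-bit long
        }
        long result = 1L;
        for (int i = 2; i <= a; i++) {
            result *= i;
        }
        return result; // factorial(5) == 120
    }
}
{code}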

 Create Factorial UDF
 

 Key: HIVE-9857
 URL: https://issues.apache.org/jira/browse/HIVE-9857
 Project: Hive
  Issue Type: Improvement
  Components: UDF
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
 Attachments: HIVE-9857.1.patch, HIVE-9857.2.patch, HIVE-9857.3.patch


 Function signature: factorial(int a): bigint
 For example, 5! = 5*4*3*2*1 = 120
 {code}
 select factorial(5);
 OK
 120
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9658) Reduce parquet memory use by bypassing java primitive objects on ETypeConverter

2015-03-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14357597#comment-14357597
 ] 

Sergio Peña commented on HIVE-9658:
---

Btw, this patch is still under review.

 Reduce parquet memory use by bypassing java primitive objects on 
 ETypeConverter
 ---

 Key: HIVE-9658
 URL: https://issues.apache.org/jira/browse/HIVE-9658
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergio Peña
Assignee: Sergio Peña
 Attachments: HIVE-9658.1.patch, HIVE-9658.2.patch, HIVE-9658.3.patch


 The ETypeConverter class passes Writable objects to the collection converters 
 so they can be read later by the map/reduce functions. These objects are all 
 wrapped in a single ArrayWritable object.
 We can save some memory by returning the Java primitive objects instead, in 
 order to avoid the extra memory allocation. The only writable object needed by 
 map/reduce is ArrayWritable. If we create another writable class to store 
 plain objects (Object), then we can stop using all primitive writables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9658) Reduce parquet memory use by bypassing java primitive objects on ETypeConverter

2015-03-11 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14357601#comment-14357601
 ] 

Brock Noland commented on HIVE-9658:


My patch to make pre-commits work on branches is nearly ready, but for now 
the patch must work on trunk...

 Reduce parquet memory use by bypassing java primitive objects on 
 ETypeConverter
 ---

 Key: HIVE-9658
 URL: https://issues.apache.org/jira/browse/HIVE-9658
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergio Peña
Assignee: Sergio Peña
 Attachments: HIVE-9658.1.patch, HIVE-9658.2.patch, HIVE-9658.3.patch


 The ETypeConverter class passes Writable objects to the collection converters 
 so they can be read later by the map/reduce functions. These objects are all 
 wrapped in a single ArrayWritable object.
 We can save some memory by returning the Java primitive objects instead, in 
 order to avoid the extra memory allocation. The only writable object needed by 
 map/reduce is ArrayWritable. If we create another writable class to store 
 plain objects (Object), then we can stop using all primitive writables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9934) Vulnerability in LdapAuthenticationProviderImpl enables HiveServer2 client to degrade the authentication mechanism to none, allowing authentication without password

2015-03-11 Thread Chao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao updated HIVE-9934:
---
Attachment: HIVE-9934.1.patch

Check if the password is null or blank. If so, throw an exception.
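
A minimal sketch of the kind of guard described, assuming the check happens before the LDAP simple bind; this is illustrative, not the actual HIVE-9934 patch.

{code}
import javax.security.sasl.AuthenticationException;

// Illustrative only: reject empty credentials before attempting the LDAP simple
// bind, since the server would otherwise downgrade the authentication to "none".
public class LdapCredentialCheck {
    static void checkCredentials(String user, String password) throws AuthenticationException {
        if (user == null || user.isEmpty() || password == null || password.isEmpty()) {
            throw new AuthenticationException(
                "LDAP authentication requires a non-empty user name and password");
        }
        // ... only then set Context.SECURITY_CREDENTIALS and create the InitialDirContext
    }
}
{code}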

 Vulnerability in LdapAuthenticationProviderImpl enables HiveServer2 client to 
 degrade the authentication mechanism to none, allowing authentication 
 without password
 --

 Key: HIVE-9934
 URL: https://issues.apache.org/jira/browse/HIVE-9934
 Project: Hive
  Issue Type: Bug
  Components: Security
Affects Versions: 1.1.0
Reporter: Chao
Assignee: Chao
 Attachments: HIVE-9934.1.patch


 Vulnerability in LdapAuthenticationProviderImpl enables HiveServer2 client to 
 degrade the authentication mechanism to none, allowing authentication 
 without password.
 See: http://docs.oracle.com/javase/jndi/tutorial/ldap/security/simple.html
 “If you supply an empty string, an empty byte/char array, or null to the 
 Context.SECURITY_CREDENTIALS environment property, then the authentication 
 mechanism will be none. This is because the LDAP requires the password to 
 be nonempty for simple authentication. The protocol automatically converts 
 the authentication to none if a password is not supplied.”
  
 Since the LdapAuthenticationProviderImpl.Authenticate method is relying on a 
 NamingException being thrown during creation of initial context, it does not 
 fail when the context result is an “unauthenticated” positive response from 
 the LDAP server. The end result is, one can authenticate with HiveServer2 
 using the LdapAuthenticationProviderImpl with only a user name and an empty 
 password.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9931) Approximate nDV statistics from ORC bloom filter population

2015-03-11 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-9931:
--
Labels: ORC  (was: )

 Approximate nDV statistics from ORC bloom filter population
 ---

 Key: HIVE-9931
 URL: https://issues.apache.org/jira/browse/HIVE-9931
 Project: Hive
  Issue Type: Improvement
  Components: Statistics
Affects Versions: 1.2.0
Reporter: Gopal V
  Labels: ORC

 The current CBO implementation requires column nDV statistics to produce good 
 estimates of JOIN selectivity and filter selectivity.
 The ORC bloom filters provide an opportunity to estimate the net population 
 of a row-group, with false-positive rates capped for each row-group.
 This is not useful for filter conditions or join conditions with a 
 cardinality which is a large fraction of the row-count, but can collect 
 viable statistics for low-cardinality filter columns (de-normalization 
 scenarios) or for JOIN dimension columns of low cardinality (demographics or 
 store location).
 The challenge in this feature is in distinguishing between these two 
 scenarios, not in the derivation of the approximate nDV itself.
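 For reference, the standard estimator for the population of a Bloom filter from 
 its set-bit count is sketched below; whether HIVE-9931 would use exactly this 
 form is an assumption.

 {code}
 // Illustrative only: estimate distinct insertions into a Bloom filter of m bits
 // with k hash functions when t bits are set:  n ~= -(m / k) * ln(1 - t / m)
 public final class BloomNdvEstimator {
     static double estimateNdv(long m, int k, long t) {
         if (t >= m) {
             return Double.POSITIVE_INFINITY; // saturated filter, estimate unbounded
         }
         return -((double) m / k) * Math.log(1.0 - (double) t / m);
     }
 }
 {code}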



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9930) fix QueryPlan.makeQueryId time format

2015-03-11 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9930:
--
Attachment: HIVE-9930.1.patch

patch #1

 fix QueryPlan.makeQueryId time format
 -

 Key: HIVE-9930
 URL: https://issues.apache.org/jira/browse/HIVE-9930
 Project: Hive
  Issue Type: Bug
  Components: Query Planning
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
Priority: Minor
 Attachments: HIVE-9930.1.patch


 The format string uses the minutes value (25 in the example below) for both 
 the minutes and the seconds fields (positions 5 and 6).
 {code}
 now:
 apivovarov_20150311102525_6a149732-8360-43b8-9858-a6e59a8be68c
 should be:
 apivovarov_20150311102551_6a149732-8360-43b8-9858-a6e59a8be68c
 {code} 
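 A hedged illustration of the described mistake; the positional format string and 
 the "userid_" prefix below are hypothetical, not the actual QueryPlan.makeQueryId code.

 {code}
 import java.util.Calendar;

 // Illustrative only.
 public class QueryIdFormatDemo {
     public static void main(String[] args) {
         Calendar c = Calendar.getInstance();
         Object[] parts = { c.get(Calendar.YEAR), c.get(Calendar.MONTH) + 1,
                 c.get(Calendar.DAY_OF_MONTH), c.get(Calendar.HOUR_OF_DAY),
                 c.get(Calendar.MINUTE), c.get(Calendar.SECOND) };
         // Buggy: the minutes argument (5) is reused where the seconds argument (6)
         // belongs, producing e.g. ...102525_ instead of ...102551_.
         System.out.println("userid_" + String.format("%1$04d%2$02d%3$02d%4$02d%5$02d%5$02d", parts));
         // Fixed: the last two digits come from the seconds argument.
         System.out.println("userid_" + String.format("%1$04d%2$02d%3$02d%4$02d%5$02d%6$02d", parts));
     }
 }
 {code}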



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9929) StatsUtil#getAvailableMemory could return negative value

2015-03-11 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HIVE-9929:
--
Attachment: HIVE-9929.1.patch

 StatsUtil#getAvailableMemory could return negative value
 

 Key: HIVE-9929
 URL: https://issues.apache.org/jira/browse/HIVE-9929
 Project: Hive
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 1.2.0

 Attachments: HIVE-9929.1.patch


 In MAPREDUCE-5785, the default value of mapreduce.map.memory.mb is set to -1. 
 We need to fix StatsUtil#getAvailableMemory so that it does not return a negative value.
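 A minimal sketch of the kind of guard implied above; the fallback constant is an 
 assumption, not necessarily what the patch does.

 {code}
 import org.apache.hadoop.conf.Configuration;

 // Illustrative only: never report a negative container size.
 public class AvailableMemoryGuard {
     private static final int DEFAULT_MAP_MEMORY_MB = 1024; // assumed fallback

     static long getAvailableMemory(Configuration conf) {
         int memoryMb = conf.getInt("mapreduce.map.memory.mb", -1);
         if (memoryMb <= 0) {
             // MAPREDUCE-5785 makes -1 the default ("derive from the JVM heap"),
             // so fall back to a positive default instead of returning -1.
             memoryMb = DEFAULT_MAP_MEMORY_MB;
         }
         return memoryMb;
     }
 }
 {code}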



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9916) Fix TestSparkSessionManagerImpl [Spark Branch]

2015-03-11 Thread Chao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao updated HIVE-9916:
---
Attachment: HIVE-9916.2-spark.patch

Attaching the same patch to trigger a test run.

 Fix TestSparkSessionManagerImpl [Spark Branch]
 --

 Key: HIVE-9916
 URL: https://issues.apache.org/jira/browse/HIVE-9916
 Project: Hive
  Issue Type: Bug
  Components: spark-branch
Affects Versions: spark-branch
Reporter: Chao
Assignee: Chao
 Attachments: HIVE-9916.1-spark.patch, HIVE-9916.2-spark.patch


 Looks like in HIVE-9872 the wrong patch was committed, and therefore 
 TestSparkSessionManagerImpl will still fail. This JIRA should fix it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9511) Switch Tez to 0.6.0

2015-03-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14357328#comment-14357328
 ] 

Hive QA commented on HIVE-9511:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12703916/HIVE-9511.3.patch.txt

{color:green}SUCCESS:{color} +1 7762 tests passed

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3002/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3002/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3002/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12703916 - PreCommit-HIVE-TRUNK-Build

 Switch Tez to 0.6.0
 ---

 Key: HIVE-9511
 URL: https://issues.apache.org/jira/browse/HIVE-9511
 Project: Hive
  Issue Type: Improvement
Reporter: Damien Carol
Assignee: Damien Carol
 Attachments: HIVE-9511.2.patch, HIVE-9511.3.patch.txt, 
 HIVE-9511.patch.txt


 Tez 0.6.0 has been released.
 Research switching to version 0.6.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9914) Post success comments on Jira from Jenkins metastore upgrades scripts

2015-03-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14357441#comment-14357441
 ] 

Hive QA commented on HIVE-9914:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12703938/HIVE-9914.1.patch

{color:green}SUCCESS:{color} +1 7762 tests passed

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3003/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3003/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3003/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12703938 - PreCommit-HIVE-TRUNK-Build

 Post success comments on Jira from Jenkins metastore upgrades scripts
 -

 Key: HIVE-9914
 URL: https://issues.apache.org/jira/browse/HIVE-9914
 Project: Hive
  Issue Type: Improvement
Reporter: Sergio Peña
Assignee: Sergio Peña
 Attachments: HIVE-9914.1.patch, HIVE-9914.2.patch


 Currently, the HMS upgrade testing posts failure comments on Jira only. We 
 need to post success comments as well so that users know that their upgrade 
 changes are working.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9916) Fix TestSparkSessionManagerImpl [Spark Branch]

2015-03-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14357466#comment-14357466
 ] 

Hive QA commented on HIVE-9916:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12703970/HIVE-9916.2-spark.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 7644 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udaf_percentile_approx_23
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_union12
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_union31
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/784/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/784/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-784/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12703970 - PreCommit-HIVE-SPARK-Build

 Fix TestSparkSessionManagerImpl [Spark Branch]
 --

 Key: HIVE-9916
 URL: https://issues.apache.org/jira/browse/HIVE-9916
 Project: Hive
  Issue Type: Bug
  Components: spark-branch
Affects Versions: spark-branch
Reporter: Chao
Assignee: Chao
 Attachments: HIVE-9916.1-spark.patch, HIVE-9916.2-spark.patch


 Looks like in HIVE-9872 the wrong patch was committed, and therefore 
 TestSparkSessionManagerImpl will still fail. This JIRA should fix it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9916) Fix TestSparkSessionManagerImpl [Spark Branch]

2015-03-11 Thread Chao (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14357474#comment-14357474
 ] 

Chao commented on HIVE-9916:


Test failures are not related, although we do need to look at union12 and 
union31. They probably came from HIVE-9569.

 Fix TestSparkSessionManagerImpl [Spark Branch]
 --

 Key: HIVE-9916
 URL: https://issues.apache.org/jira/browse/HIVE-9916
 Project: Hive
  Issue Type: Bug
  Components: spark-branch
Affects Versions: spark-branch
Reporter: Chao
Assignee: Chao
 Attachments: HIVE-9916.1-spark.patch, HIVE-9916.2-spark.patch


 Looks like in HIVE-9872 the wrong patch was committed, and therefore 
 TestSparkSessionManagerImpl will still fail. This JIRA should fix it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9932) DDLTask.conf hides base class Task.conf

2015-03-11 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9932:
--
Attachment: HIVE-9932.1.patch

patch #1

 DDLTask.conf hides base class Task.conf
 ---

 Key: HIVE-9932
 URL: https://issues.apache.org/jira/browse/HIVE-9932
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
Priority: Minor
 Attachments: HIVE-9932.1.patch


 DDLTask defines a field named conf.
 DDLTask extends Task.
 Task also defines a protected field conf (which is accessible from DDLTask).
 Probably we should remove the conf field from DDLTask.
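 For illustration, this is how field hiding behaves in Java; the class and field 
 names below are generic, not the Hive classes.

 {code}
 class Base {
     protected String conf = "base";
 }

 class Derived extends Base {
     // Hides Base.conf: reads and writes here no longer touch the inherited field.
     protected String conf = "derived";

     void show() {
         System.out.println(conf);        // "derived"
         System.out.println(super.conf);  // "base" -- an easy source of subtle bugs
     }
 }
 {code}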



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9813) Hive JDBC - DatabaseMetaData.getColumns method cannot find classes added with add jar command

2015-03-11 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14357697#comment-14357697
 ] 

Xuefu Zhang commented on HIVE-9813:
---

+1

 Hive JDBC - DatabaseMetaData.getColumns method cannot find classes added with 
 add jar command
 ---

 Key: HIVE-9813
 URL: https://issues.apache.org/jira/browse/HIVE-9813
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: Yongzhi Chen
Assignee: Yongzhi Chen
 Attachments: HIVE-9813.1.patch, HIVE-9813.3.patch


 Execute the following JDBC client program:
 {code}
 import java.sql.*;
 public class TestAddJar {
     private static Connection makeConnection(String connString, String classPath)
             throws ClassNotFoundException, SQLException {
         System.out.println("Current Connection info: " + connString);
         Class.forName(classPath);
         System.out.println("Current driver info: " + classPath);
         return DriverManager.getConnection(connString);
     }
 
     public static void main(String[] args) {
         if (2 != args.length) {
             System.out.println("Two arguments needed: connection string, path "
                 + "to jar to be added (include jar name)");
             System.out.println("Example: java -jar TestApp.jar "
                 + "jdbc:hive2://192.168.111.111 /tmp/json-serde-1.3-jar-with-dependencies.jar");
             return;
         }
         Connection conn;
         try {
             conn = makeConnection(args[0], "org.apache.hive.jdbc.HiveDriver");
 
             System.out.println("---");
             System.out.println("DONE");
 
             System.out.println("---");
             System.out.println("Execute query: add jar " + args[1] + ";");
             Statement stmt = conn.createStatement();
             int c = stmt.executeUpdate("add jar " + args[1]);
             System.out.println("Returned value is: [" + c + "]\n");
 
             System.out.println("---");
             final String createTableQry = "Create table if not exists "
                 + "json_test(id int, content string) "
                 + "row format serde 'org.openx.data.jsonserde.JsonSerDe'";
             System.out.println("Execute query: " + createTableQry + ";");
             stmt.execute(createTableQry);
 
             System.out.println("---");
             System.out.println("getColumn() Call---\n");
             DatabaseMetaData md = conn.getMetaData();
             System.out.println("Test get all column in a schema:");
             ResultSet rs = md.getColumns("Hive", "default", "json_test", null);
             while (rs.next()) {
                 System.out.println(rs.getString(1));
             }
             conn.close();
         } catch (ClassNotFoundException e) {
             e.printStackTrace();
         } catch (SQLException e) {
             e.printStackTrace();
         }
     }
 }
 {code}
 An exception is thrown; from the metastore log:
 7:41:30.316 PM  ERROR  hive.log
 error in initSerDe: java.lang.ClassNotFoundException Class 
 org.openx.data.jsonserde.JsonSerDe not found
 java.lang.ClassNotFoundException: Class org.openx.data.jsonserde.JsonSerDe 
 not found
 at 
 org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1803)
 at 
 org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:183)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_fields(HiveMetaStore.java:2487)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_schema(HiveMetaStore.java:2542)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
 at com.sun.proxy.$Proxy5.get_schema(Unknown Source)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_schema.getResult(ThriftHiveMetastore.java:6425)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_schema.getResult(ThriftHiveMetastore.java:6409)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
 at 
 org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
 at 
 

[jira] [Updated] (HIVE-9277) Hybrid Hybrid Grace Hash Join

2015-03-11 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-9277:

Attachment: HIVE-9277.08.patch

Uploading the 8th patch for testing. The main change is that we no longer use 
RowContainer; instead, Kryo is used for serialization/deserialization.
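
For reference, a minimal Kryo round trip looks like the sketch below; the row type and in-memory streams are placeholders, not the HIVE-9277 code.

{code}
import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;

// Illustrative only: serialize a spilled row with Kryo and read it back.
public class KryoRoundTripDemo {
    public static void main(String[] args) {
        Kryo kryo = new Kryo();
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        Output out = new Output(buffer);
        ArrayList<Object> row = new ArrayList<>();
        row.add(42);
        row.add("key");
        kryo.writeObject(out, row);
        out.close();

        Input in = new Input(new ByteArrayInputStream(buffer.toByteArray()));
        ArrayList<?> restored = kryo.readObject(in, ArrayList.class);
        in.close();
        System.out.println(restored);
    }
}
{code}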

 Hybrid Hybrid Grace Hash Join
 -

 Key: HIVE-9277
 URL: https://issues.apache.org/jira/browse/HIVE-9277
 Project: Hive
  Issue Type: New Feature
  Components: Physical Optimizer
Reporter: Wei Zheng
Assignee: Wei Zheng
  Labels: join
 Attachments: HIVE-9277.01.patch, HIVE-9277.02.patch, 
 HIVE-9277.03.patch, HIVE-9277.04.patch, HIVE-9277.05.patch, 
 HIVE-9277.06.patch, HIVE-9277.07.patch, HIVE-9277.08.patch, 
 High-leveldesignforHybridHybridGraceHashJoinv1.0.pdf


 We are proposing an enhanced hash join algorithm called _“hybrid hybrid grace 
 hash join”_.
 We can benefit from this feature as illustrated below:
 * The query will not fail even if the estimated memory requirement is 
 slightly wrong
 * Expensive garbage collection overhead can be avoided when the hash table grows
 * A map join operator can still be used even though the small table 
 doesn't fit in memory, as spilling some data from the build and probe sides 
 will still be cheaper than having to shuffle the large fact table
 The design is based on Hadoop's parallel processing capability and the 
 significant amount of memory available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9915) Allow specifying file format for managed tables

2015-03-11 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14357711#comment-14357711
 ] 

Gopal V commented on HIVE-9915:
---

LGTM - +1

 Allow specifying file format for managed tables
 ---

 Key: HIVE-9915
 URL: https://issues.apache.org/jira/browse/HIVE-9915
 Project: Hive
  Issue Type: Bug
  Components: File Formats
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Attachments: HIVE-9915.1.patch


 We already allow setting a system-wide default format. In some cases, though, 
 it's useful to specify this only for managed tables, or to distinguish 
 external and managed tables via two variables. You might want to set a more 
 efficient (than text) format for managed tables, but leave external tables as 
 text (as they are often log files etc.).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9916) Fix TestSparkSessionManagerImpl [Spark Branch]

2015-03-11 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14357739#comment-14357739
 ] 

Xuefu Zhang commented on HIVE-9916:
---

+1

 Fix TestSparkSessionManagerImpl [Spark Branch]
 --

 Key: HIVE-9916
 URL: https://issues.apache.org/jira/browse/HIVE-9916
 Project: Hive
  Issue Type: Bug
  Components: spark-branch
Affects Versions: spark-branch
Reporter: Chao
Assignee: Chao
 Attachments: HIVE-9916.1-spark.patch, HIVE-9916.2-spark.patch


 Looks like in HIVE-9872 the wrong patch was committed, and therefore 
 TestSparkSessionManagerImpl will still fail. This JIRA should fix it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9813) Hive JDBC - DatabaseMetaData.getColumns method cannot find classes added with add jar command

2015-03-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14357828#comment-14357828
 ] 

Hive QA commented on HIVE-9813:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12703959/HIVE-9813.3.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 7762 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby3_map_multi_distinct
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3006/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3006/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3006/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12703959 - PreCommit-HIVE-TRUNK-Build

 Hive JDBC - DatabaseMetaData.getColumns method cannot find classes added with 
 add jar command
 ---

 Key: HIVE-9813
 URL: https://issues.apache.org/jira/browse/HIVE-9813
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: Yongzhi Chen
Assignee: Yongzhi Chen
 Attachments: HIVE-9813.1.patch, HIVE-9813.3.patch


 Execute the following JDBC client program:
 {code}
 import java.sql.*;
 public class TestAddJar {
     private static Connection makeConnection(String connString, String classPath)
             throws ClassNotFoundException, SQLException {
         System.out.println("Current Connection info: " + connString);
         Class.forName(classPath);
         System.out.println("Current driver info: " + classPath);
         return DriverManager.getConnection(connString);
     }
 
     public static void main(String[] args) {
         if (2 != args.length) {
             System.out.println("Two arguments needed: connection string, path "
                 + "to jar to be added (include jar name)");
             System.out.println("Example: java -jar TestApp.jar "
                 + "jdbc:hive2://192.168.111.111 /tmp/json-serde-1.3-jar-with-dependencies.jar");
             return;
         }
         Connection conn;
         try {
             conn = makeConnection(args[0], "org.apache.hive.jdbc.HiveDriver");
 
             System.out.println("---");
             System.out.println("DONE");
 
             System.out.println("---");
             System.out.println("Execute query: add jar " + args[1] + ";");
             Statement stmt = conn.createStatement();
             int c = stmt.executeUpdate("add jar " + args[1]);
             System.out.println("Returned value is: [" + c + "]\n");
 
             System.out.println("---");
             final String createTableQry = "Create table if not exists "
                 + "json_test(id int, content string) "
                 + "row format serde 'org.openx.data.jsonserde.JsonSerDe'";
             System.out.println("Execute query: " + createTableQry + ";");
             stmt.execute(createTableQry);
 
             System.out.println("---");
             System.out.println("getColumn() Call---\n");
             DatabaseMetaData md = conn.getMetaData();
             System.out.println("Test get all column in a schema:");
             ResultSet rs = md.getColumns("Hive", "default", "json_test", null);
             while (rs.next()) {
                 System.out.println(rs.getString(1));
             }
             conn.close();
         } catch (ClassNotFoundException e) {
             e.printStackTrace();
         } catch (SQLException e) {
             e.printStackTrace();
         }
     }
 }
 {code}
 An exception is thrown; from the metastore log:
 7:41:30.316 PM  ERROR  hive.log
 error in initSerDe: java.lang.ClassNotFoundException Class 
 org.openx.data.jsonserde.JsonSerDe not found
 java.lang.ClassNotFoundException: Class org.openx.data.jsonserde.JsonSerDe 
 not found
 at 
 org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1803)
 at 
 org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:183)
 at 
 

[jira] [Updated] (HIVE-9935) Fix tests for java 1.8 [Spark Branch]

2015-03-11 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HIVE-9935:
--
Attachment: HIVE-9935.1-spark.patch

 Fix tests for java 1.8 [Spark Branch]
 -

 Key: HIVE-9935
 URL: https://issues.apache.org/jira/browse/HIVE-9935
 Project: Hive
  Issue Type: Bug
  Components: spark-branch
Affects Versions: spark-branch
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: spark-branch

 Attachments: HIVE-9935.1-spark.patch


 In spark branch, these tests don't have java 1.8 golden file:
 join0.q
 list_bucket_dml_2
 subquery_multiinsert.q



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9928) Empty buckets are not created on non-HDFS file system

2015-03-11 Thread Ankit Kamboj (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14357830#comment-14357830
 ] 

Ankit Kamboj commented on HIVE-9928:


The above test failures don't seem to be related to bucketing. Could somebody 
please take a look and advise?

 Empty buckets are not created on non-HDFS file system
 -

 Key: HIVE-9928
 URL: https://issues.apache.org/jira/browse/HIVE-9928
 Project: Hive
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Ankit Kamboj
 Attachments: HIVE-9928.1.patch


 Bucketing should create empty buckets on the destination file system. The 
 problem is that the logic uses path.toUri().getPath().toString() to find the 
 relevant path, but this chain of methods always resolves to a scheme-less 
 (relative) path, which ends up creating the empty buckets in HDFS rather than 
 on the actual destination file system.
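 A small sketch of the difference described; the destination path below is 
 hypothetical. Path.toUri().getPath() drops the scheme and authority, so handing 
 that string to the default FileSystem lands on HDFS, while resolving the 
 FileSystem from the original Path keeps the real destination.

 {code}
 import java.io.IOException;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;

 // Illustrative only.
 public class EmptyBucketPathDemo {
     static void demo(Configuration conf) throws IOException {
         Path dest = new Path("s3a://warehouse-bucket/db/table/000001_0");

         // toUri().getPath() drops scheme and authority: "/db/table/000001_0".
         String schemeless = dest.toUri().getPath();
         FileSystem defaultFs = FileSystem.get(conf); // typically hdfs://...
         // defaultFs.create(new Path(schemeless)) would put the empty bucket on HDFS.

         // Resolving the FileSystem from the destination path keeps the real target.
         FileSystem destFs = dest.getFileSystem(conf); // the s3a filesystem here
         // destFs.create(dest) creates the empty bucket on the actual destination.
         System.out.println(schemeless + " vs " + dest);
     }
 }
 {code}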



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9924) Add SORT_QUERY_RESULTS to union12.q [Spark Branch]

2015-03-11 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-9924:
--
Component/s: Spark

 Add SORT_QUERY_RESULTS to union12.q [Spark Branch]
 --

 Key: HIVE-9924
 URL: https://issues.apache.org/jira/browse/HIVE-9924
 Project: Hive
  Issue Type: Test
  Components: Spark
Reporter: Rui Li
Assignee: Rui Li
Priority: Minor
 Attachments: HIVE-9924.1-spark.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9924) Add SORT_QUERY_RESULTS to union12.q [Spark Branch]

2015-03-11 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-9924:
--
Summary: Add SORT_QUERY_RESULTS to union12.q [Spark Branch]  (was: Add 
SORT_QUERY_RESULTS to union12.q)

 Add SORT_QUERY_RESULTS to union12.q [Spark Branch]
 --

 Key: HIVE-9924
 URL: https://issues.apache.org/jira/browse/HIVE-9924
 Project: Hive
  Issue Type: Test
  Components: Spark
Reporter: Rui Li
Assignee: Rui Li
Priority: Minor
 Attachments: HIVE-9924.1-spark.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9929) StatsUtil#getAvailableMemory could return negative value

2015-03-11 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14357744#comment-14357744
 ] 

Jimmy Xiang commented on HIVE-9929:
---

This test is ok on my box.

 StatsUtil#getAvailableMemory could return negative value
 

 Key: HIVE-9929
 URL: https://issues.apache.org/jira/browse/HIVE-9929
 Project: Hive
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 1.2.0

 Attachments: HIVE-9929.1.patch


 In MAPREDUCE-5785, the default value of mapreduce.map.memory.mb is set to -1. 
 We need to fix StatsUtil#getAvailableMemory so that it does not return a negative value.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-9916) Fix TestSparkSessionManagerImpl [Spark Branch]

2015-03-11 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14357739#comment-14357739
 ] 

Xuefu Zhang edited comment on HIVE-9916 at 3/11/15 10:43 PM:
-

+1

The union related test failures will be addressed in HIVE-9924.


was (Author: xuefuz):
+1

 Fix TestSparkSessionManagerImpl [Spark Branch]
 --

 Key: HIVE-9916
 URL: https://issues.apache.org/jira/browse/HIVE-9916
 Project: Hive
  Issue Type: Bug
  Components: spark-branch
Affects Versions: spark-branch
Reporter: Chao
Assignee: Chao
 Attachments: HIVE-9916.1-spark.patch, HIVE-9916.2-spark.patch


 Looks like in HIVE-9872 the wrong patch was committed, and therefore 
 TestSparkSessionManagerImpl will still fail. This JIRA should fix it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9743) Incorrect result set for vectorized left outer join

2015-03-11 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14357896#comment-14357896
 ] 

Gunther Hagleitner commented on HIVE-9743:
--

[~vikram.dixit] ready to commit?

 Incorrect result set for vectorized left outer join
 ---

 Key: HIVE-9743
 URL: https://issues.apache.org/jira/browse/HIVE-9743
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 0.14.0
Reporter: N Campbell
Assignee: Matt McCline
 Attachments: HIVE-9743.01.patch, HIVE-9743.02.patch, 
 HIVE-9743.03.patch, HIVE-9743.04.patch, HIVE-9743.05.patch


 This query is supposed to return 3 rows and will when run without Tez but 
 returns 2 rows when run with Tez.
 select tjoin1.rnum, tjoin1.c1, tjoin1.c2, tjoin2.c2 as c2j2 from tjoin1 left 
 outer join tjoin2 on ( tjoin1.c1 = tjoin2.c1 and tjoin1.c2  15 )
 tjoin1.rnum   tjoin1.c1   tjoin1.c2   c2j2
 1 20  25  null
 2 null  50  null
 instead of
 tjoin1.rnum   tjoin1.c1   tjoin1.c2   c2j2
 0 10  15  null
 1 20  25  null
 2 null  50  null
 create table  if not exists TJOIN1 (RNUM int , C1 int, C2 int)
  STORED AS orc ;
 0|10|15
 1|20|25
 2|\N|50
 create table  if not exists TJOIN2 (RNUM int , C1 int, C2 char(2))
 ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
  STORED AS TEXTFILE ;
 0|10|BB
 1|15|DD
 2|\N|EE
 3|10|FF



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9930) fix QueryPlan.makeQueryId time format

2015-03-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14357913#comment-14357913
 ] 

Hive QA commented on HIVE-9930:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12703966/HIVE-9930.1.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 7762 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_schemeAuthority
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3007/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3007/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3007/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12703966 - PreCommit-HIVE-TRUNK-Build

 fix QueryPlan.makeQueryId time format
 -

 Key: HIVE-9930
 URL: https://issues.apache.org/jira/browse/HIVE-9930
 Project: Hive
  Issue Type: Bug
  Components: Query Planning
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
Priority: Minor
 Attachments: HIVE-9930.1.patch


 The format string uses the minutes value (25 in the example below) for both 
 the minutes and the seconds fields (positions 5 and 6).
 {code}
 now:
 apivovarov_20150311102525_6a149732-8360-43b8-9858-a6e59a8be68c
 should be:
 apivovarov_20150311102551_6a149732-8360-43b8-9858-a6e59a8be68c
 {code} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9924) Add SORT_QUERY_RESULTS to union12.q [Spark Branch]

2015-03-11 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14357746#comment-14357746
 ] 

Xuefu Zhang commented on HIVE-9924:
---

We need to address the two union-related test failures. The other two will be 
fixed in HIVE-9916.

 Add SORT_QUERY_RESULTS to union12.q [Spark Branch]
 --

 Key: HIVE-9924
 URL: https://issues.apache.org/jira/browse/HIVE-9924
 Project: Hive
  Issue Type: Test
  Components: Spark
Reporter: Rui Li
Assignee: Rui Li
Priority: Minor
 Attachments: HIVE-9924.1-spark.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9935) Fix tests for java 1.8 [Spark Branch]

2015-03-11 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HIVE-9935:
--
Description: 
In spark branch, these tests don't have java 1.8 golden file:

join0.q
list_bucket_dml_2.q
subquery_multiinsert.q


  was:
In spark branch, these tests don't have java 1.8 golden file:

join0.q
list_bucket_dml_2
subquery_multiinsert.q



 Fix tests for java 1.8 [Spark Branch]
 -

 Key: HIVE-9935
 URL: https://issues.apache.org/jira/browse/HIVE-9935
 Project: Hive
  Issue Type: Bug
  Components: spark-branch
Affects Versions: spark-branch
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: spark-branch

 Attachments: HIVE-9935.1-spark.patch


 In spark branch, these tests don't have java 1.8 golden file:
 join0.q
 list_bucket_dml_2.q
 subquery_multiinsert.q



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9935) Fix tests for java 1.8 [Spark Branch]

2015-03-11 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14357826#comment-14357826
 ] 

Jimmy Xiang commented on HIVE-9935:
---

The results for 1.8 are similar to those for 1.7.

 Fix tests for java 1.8 [Spark Branch]
 -

 Key: HIVE-9935
 URL: https://issues.apache.org/jira/browse/HIVE-9935
 Project: Hive
  Issue Type: Bug
  Components: spark-branch
Affects Versions: spark-branch
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: spark-branch

 Attachments: HIVE-9935.1-spark.patch


 In spark branch, these tests don't have java 1.8 golden file:
 join0.q
 list_bucket_dml_2.q
 subquery_multiinsert.q



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-6410) Allow output serializations separators to be set for HDFS path as well.

2015-03-11 Thread Nemon Lou (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14356433#comment-14356433
 ] 

Nemon Lou commented on HIVE-6410:
-

[~amareshwari] Do you mind uploading this patch to HIVE-5672?
I have tried Hive 1.0, and this bug still exists.

 Allow output serializations separators to be set for HDFS path as well.
 ---

 Key: HIVE-6410
 URL: https://issues.apache.org/jira/browse/HIVE-6410
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Amareshwari Sriramadasu
Assignee: Amareshwari Sriramadasu
 Fix For: 0.14.0

 Attachments: HIVE-6410.patch


 HIVE-3682 adds functionality for users to set serialization constants for 
 'insert overwrite local directory'. The same functionality should be 
 available for an HDFS path as well. The suggested workaround is to create a 
 table with the required format and insert into that table, which forces the 
 users to know the schema of the result and to create the table ahead of time. 
 Though that works, it is good to have the functionality for loading into a 
 directory as well.
 I'm planning to add the same functionality in 'insert overwrite directory' in 
 this jira.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9924) Add SORT_QUERY_RESULT to union12.q

2015-03-11 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-9924:
-
Priority: Minor  (was: Major)

 Add SORT_QUERY_RESULT to union12.q
 --

 Key: HIVE-9924
 URL: https://issues.apache.org/jira/browse/HIVE-9924
 Project: Hive
  Issue Type: Test
Reporter: Rui Li
Assignee: Rui Li
Priority: Minor





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9924) Add SORT_QUERY_RESULTS to union12.q

2015-03-11 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-9924:
-
Summary: Add SORT_QUERY_RESULTS to union12.q  (was: Add SORT_QUERY_RESULT 
to union12.q)

 Add SORT_QUERY_RESULTS to union12.q
 ---

 Key: HIVE-9924
 URL: https://issues.apache.org/jira/browse/HIVE-9924
 Project: Hive
  Issue Type: Test
Reporter: Rui Li
Assignee: Rui Li
Priority: Minor





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9920) DROP DATABASE IF EXISTS throws exception if database does not exist

2015-03-11 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-9920:
--
Attachment: HIVE-9920.patch

The two failed tests do not seem to be related to the patch. I was not able to 
reproduce them on my local machine. Resubmitting the patch to trigger a build 
to see if they can still be reproduced.

 DROP DATABASE IF EXISTS throws exception if database does not exist
 ---

 Key: HIVE-9920
 URL: https://issues.apache.org/jira/browse/HIVE-9920
 Project: Hive
  Issue Type: Bug
  Components: Logging, Metastore
Affects Versions: 1.0.0
Reporter: Chaoyu Tang
Assignee: Chaoyu Tang
Priority: Minor
 Attachments: HIVE-9920.patch, HIVE-9920.patch


 drop database if exists noexistingdb throws and logs a full exception if the 
 database (noexistingdb) does not exist:
 15/03/10 22:47:22 WARN metastore.ObjectStore: Failed to get database 
 statsdb2, returning NoSuchObjectException
 15/03/11 00:19:55 ERROR metastore.RetryingHMSHandler: 
 NoSuchObjectException(message:statsdb2)
   at 
 org.apache.hadoop.hive.metastore.ObjectStore.getDatabase(ObjectStore.java:569)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:98)
   at com.sun.proxy.$Proxy6.getDatabase(Unknown Source)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_database_core(HiveMetaStore.java:953)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_database(HiveMetaStore.java:927)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
   at com.sun.proxy.$Proxy8.get_database(Unknown Source)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabase(HiveMetaStoreClient.java:1150)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:91)
   at com.sun.proxy.$Proxy9.getDatabase(Unknown Source)
   at org.apache.hadoop.hive.ql.metadata.Hive.getDatabase(Hive.java:1291)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.getDatabase(BaseSemanticAnalyzer.java:1364)
   at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeDropDatabase(DDLSemanticAnalyzer.java:777)
   at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeInternal(DDLSemanticAnalyzer.java:427)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:224)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:425)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:309)
   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1116)
   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1164)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1053)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1043)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:207)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:159)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:370)
   at 
 org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:754)
   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
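 A minimal sketch of the kind of guard that would avoid this noise; the helper shape and names are assumptions for illustration, not the attached patch:
 {code}
 // Probe for the database; with IF EXISTS a missing database is expected,
 // so don't let it surface as an ERROR with a full stack trace.
 private Database getDatabaseForDrop(IMetaStoreClient client, String dbName, boolean ifExists)
     throws TException, SemanticException {
   try {
     return client.getDatabase(dbName);
   } catch (NoSuchObjectException e) {
     if (ifExists) {
       LOG.debug("Database " + dbName + " does not exist, nothing to drop");
       return null;
     }
     throw new SemanticException("Database " + dbName + " does not exist", e);
   }
 }
 {code}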



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9659) 'Error while trying to create table container' occurs during hive query case execution when hive.optimize.skewjoin set to 'true' [Spark Branch]

2015-03-11 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14356828#comment-14356828
 ] 

Rui Li commented on HIVE-9659:
--

{{union12}} needs the SORT_QUERY_RESULTS label.
{{union31}} failed because it was merged from trunk and trunk doesn't have HIVE-9561.
Other failures seem unrelated.

 'Error while trying to create table container' occurs during hive query case 
 execution when hive.optimize.skewjoin set to 'true' [Spark Branch]
 ---

 Key: HIVE-9659
 URL: https://issues.apache.org/jira/browse/HIVE-9659
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xin Hao
Assignee: Rui Li
 Attachments: HIVE-9659.1-spark.patch, HIVE-9659.2-spark.patch, 
 HIVE-9659.3-spark.patch, HIVE-9659.4-spark.patch, HIVE-9659.4-spark.patch


 We found that 'Error while trying to create table container' occurs during 
 Big-Bench Q12 case execution when hive.optimize.skewjoin is set to 'true'.
 If hive.optimize.skewjoin is set to 'false', the case passes.
 How to reproduce:
 1. set hive.optimize.skewjoin=true;
 2. Run BigBench case Q12 and it will fail.
 Check the executor log (e.g. /usr/lib/spark/work/app-/2/stderr) and you 
 will find the error 'Error while trying to create table container' in the log 
 and also a NullPointerException near the end of the log.
 (a) Detail error message for 'Error while trying to create table container':
 {noformat}
 15/02/12 01:29:49 ERROR SparkMapRecordHandler: Error processing row: 
 org.apache.hadoop.hive.ql.metadata.HiveException: 
 org.apache.hadoop.hive.ql.metadata.HiveException: Error while trying to 
 create table container
 org.apache.hadoop.hive.ql.metadata.HiveException: 
 org.apache.hadoop.hive.ql.metadata.HiveException: Error while trying to 
 create table container
   at 
 org.apache.hadoop.hive.ql.exec.spark.HashTableLoader.load(HashTableLoader.java:118)
   at 
 org.apache.hadoop.hive.ql.exec.MapJoinOperator.loadHashTable(MapJoinOperator.java:193)
   at 
 org.apache.hadoop.hive.ql.exec.MapJoinOperator.cleanUpInputFileChangedOp(MapJoinOperator.java:219)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1051)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
   at 
 org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:486)
   at 
 org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:141)
   at 
 org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:47)
   at 
 org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27)
   at 
 org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList$ResultIterator.hasNext(HiveBaseFunctionResultList.java:98)
   at 
 scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
   at 
 org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:217)
   at 
 org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:65)
   at 
 org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
   at 
 org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
   at org.apache.spark.scheduler.Task.run(Task.scala:56)
   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error while 
 trying to create table container
   at 
 org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe.load(MapJoinTableContainerSerDe.java:158)
   at 
 org.apache.hadoop.hive.ql.exec.spark.HashTableLoader.load(HashTableLoader.java:115)
   ... 21 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error, not a 
 directory: 
 hdfs://bhx1:8020/tmp/hive/root/d22ef465-bff5-4edb-a822-0a9f1c25b66c/hive_2015-02-12_01-28-10_008_6897031694580088767-1/-mr-10009/HashTable-Stage-6/MapJoin-mapfile01--.hashtable
   at 
 org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe.load(MapJoinTableContainerSerDe.java:106)
   ... 22 more
 15/02/12 01:29:49 INFO SparkRecordHandler: maximum 

[jira] [Updated] (HIVE-9936) fix potential NPE in DefaultUDAFEvaluatorResolver

2015-03-11 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9936:
--
Description: 
In some cases DefaultUDAFEvaluatorResolver calls new 
AmbiguousMethodException(udafClass, null, null) (line 94).
This will throw an NPE because AmbiguousMethodException calls 
argTypeInfos.toString(), and argTypeInfos (the second parameter) must not be null.

  was:
In some cases DefaultUDAFEvaluatorResolver calls new 
AmbiguousMethodException(udafClass, null, null).
This will throw an NPE because AmbiguousMethodException calls 
argTypeInfos.toString(), and argTypeInfos (the second parameter) must not be null.


 fix potential NPE in DefaultUDAFEvaluatorResolver
 -

 Key: HIVE-9936
 URL: https://issues.apache.org/jira/browse/HIVE-9936
 Project: Hive
  Issue Type: Bug
  Components: UDF
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov

 In some cases DefaultUDAFEvaluatorResolver calls new 
 AmbiguousMethodException(udafClass, null, null) (line 94).
 This will throw an NPE because AmbiguousMethodException calls 
 argTypeInfos.toString(), and argTypeInfos (the second parameter) must not be null.
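 For illustration, the failure mode and one possible guard, as a hedged sketch (assuming only what the description above states about the constructor's parameters):
 {code}
 // What the report describes, reduced to two lines (not the actual resolver code):
 List<TypeInfo> argTypeInfos = null;
 new AmbiguousMethodException(udafClass, argTypeInfos, null); // constructor calls argTypeInfos.toString() -> NPE

 // One possible guard: never hand the constructor a null argument-type list
 new AmbiguousMethodException(udafClass, Collections.<TypeInfo>emptyList(), null);
 {code}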



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9813) Hive JDBC - DatabaseMetaData.getColumns method cannot find classes added with add jar command

2015-03-11 Thread Yongzhi Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358070#comment-14358070
 ] 

Yongzhi Chen commented on HIVE-9813:


The failure looks like a data precision issue (the expected and actual rows differ only in the last few digits) and is not related to the patch:
{noformat}
 130091.0  260.182 256.10355987055016  98.00.0 
142.92680950752379  143.06995106518903  20428.0728759   
20469.010897795582  79136.0 309.0
---
 130091.0  260.182 256.10355987055016  98.00.0 
 142.9268095075238   143.06995106518906  20428.072876
 20469.01089779559   79136.0 309.0
{noformat}

 Hive JDBC - DatabaseMetaData.getColumns method cannot find classes added with 
 add jar command
 ---

 Key: HIVE-9813
 URL: https://issues.apache.org/jira/browse/HIVE-9813
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: Yongzhi Chen
Assignee: Yongzhi Chen
 Attachments: HIVE-9813.1.patch, HIVE-9813.3.patch


 Execute the following JDBC client program:
 {code}
 import java.sql.*;
 public class TestAddJar {
     private static Connection makeConnection(String connString, String classPath) throws ClassNotFoundException, SQLException
     {
         System.out.println("Current Connection info: " + connString);
         Class.forName(classPath);
         System.out.println("Current driver info: " + classPath);
         return DriverManager.getConnection(connString);
     }

     public static void main(String[] args)
     {
         if (2 != args.length)
         {
             System.out.println("Two arguments needed: connection string, path to jar to be added (include jar name)");
             System.out.println("Example: java -jar TestApp.jar jdbc:hive2://192.168.111.111 /tmp/json-serde-1.3-jar-with-dependencies.jar");
             return;
         }
         Connection conn;
         try
         {
             conn = makeConnection(args[0], "org.apache.hive.jdbc.HiveDriver");
             System.out.println("---");
             System.out.println("DONE");

             System.out.println("---");
             System.out.println("Execute query: add jar " + args[1] + ";");
             Statement stmt = conn.createStatement();
             int c = stmt.executeUpdate("add jar " + args[1]);
             System.out.println("Returned value is: [" + c + "]\n");

             System.out.println("---");
             final String createTableQry = "Create table if not exists json_test(id int, content string) " +
                     "row format serde 'org.openx.data.jsonserde.JsonSerDe'";
             System.out.println("Execute query: " + createTableQry + ";");
             stmt.execute(createTableQry);

             System.out.println("---");
             System.out.println("getColumn() Call---\n");
             DatabaseMetaData md = conn.getMetaData();
             System.out.println("Test get all column in a schema:");
             ResultSet rs = md.getColumns("Hive", "default", "json_test", null);
             while (rs.next()) {
                 System.out.println(rs.getString(1));
             }
             conn.close();
         }
         catch (ClassNotFoundException e)
         {
             e.printStackTrace();
         }
         catch (SQLException e)
         {
             e.printStackTrace();
         }
     }
 }
 {code}
 Got an exception; from the metastore log:
 7:41:30.316 PM  ERROR  hive.log
 error in initSerDe: java.lang.ClassNotFoundException Class 
 org.openx.data.jsonserde.JsonSerDe not found
 java.lang.ClassNotFoundException: Class org.openx.data.jsonserde.JsonSerDe 
 not found
 at 
 org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1803)
 at 
 org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:183)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_fields(HiveMetaStore.java:2487)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_schema(HiveMetaStore.java:2542)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
 at com.sun.proxy.$Proxy5.get_schema(Unknown Source)
 at 
 

[jira] [Commented] (HIVE-9929) StatsUtil#getAvailableMemory could return negative value

2015-03-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14357722#comment-14357722
 ] 

Hive QA commented on HIVE-9929:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12703956/HIVE-9929.1.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 7761 tests executed
*Failed tests:*
{noformat}
TestCustomAuthentication - did not produce a TEST-*.xml file
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3005/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3005/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3005/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12703956 - PreCommit-HIVE-TRUNK-Build

 StatsUtil#getAvailableMemory could return negative value
 

 Key: HIVE-9929
 URL: https://issues.apache.org/jira/browse/HIVE-9929
 Project: Hive
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 1.2.0

 Attachments: HIVE-9929.1.patch


 In MAPREDUCE-5785, the default value of mapreduce.map.memory.mb is set to -1. 
 We need to fix StatsUtil#getAvailableMemory so it does not return a negative value.
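 A hedged sketch of the kind of clamp the description implies (the fallback value here is an assumption, not taken from the patch):
 {code}
 // After MAPREDUCE-5785, mapreduce.map.memory.mb may legitimately be -1,
 // so guard before treating it as available memory.
 int memoryMb = conf.getInt("mapreduce.map.memory.mb", -1);
 if (memoryMb <= 0) {
   memoryMb = 1024; // assumed fallback, for illustration only
 }
 {code}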



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9739) Various queries fails with Tez/ORC file org.apache.hadoop.hive.ql.exec.tez.TezTask due to Caused by: java.lang.ClassCastException

2015-03-11 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14357884#comment-14357884
 ] 

Gunther Hagleitner commented on HIVE-9739:
--

[~mmccline] can you confirm this is the same as HIVE-9249?

 Various queries fails with Tez/ORC file 
 org.apache.hadoop.hive.ql.exec.tez.TezTask due to Caused by: 
 java.lang.ClassCastException
 -

 Key: HIVE-9739
 URL: https://issues.apache.org/jira/browse/HIVE-9739
 Project: Hive
  Issue Type: Bug
  Components: SQL
Reporter: N Campbell

 This fails when using Tez and ORC.
 It will run when text files are used, or when text/ORC is used with MapReduce instead of Tez.
 Is this another example of a type issue, per 
 https://issues.apache.org/jira/browse/HIVE-9735?
 select rnum, c1, c2 from tset1 as t1 where exists ( select c1 from tset2 
 where c1 = t1.c1 )
 This will run in both Tez and MapReduce using a text file
 select rnum, c1, c2 from t_tset1 as t1 where exists ( select c1 from t_tset2 
 where c1 = t1.c1 )
 Caused by: java.lang.RuntimeException: 
 org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
 processing row 
   at 
 org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:91)
   at 
 org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68)
   at 
 org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:294)
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:163)
   ... 13 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
 Error while processing row 
   at 
 org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:52)
   at 
 org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:83)
   ... 16 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unexpected 
 exception: org.apache.hadoop.hive.serde2.io.HiveCharWritable cannot be cast 
 to org.apache.hadoop.hive.common.type.HiveChar
   at 
 org.apache.hadoop.hive.ql.exec.MapJoinOperator.processOp(MapJoinOperator.java:311)
   at 
 org.apache.hadoop.hive.ql.exec.vector.VectorMapJoinOperator.processOp(VectorMapJoinOperator.java:249)
   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
   at 
 org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.processOp(VectorFilterOperator.java:111)
   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
   at 
 org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:95)
   at 
 org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:157)
   at 
 org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:45)
   ... 17 more
 Caused by: java.lang.ClassCastException: 
 org.apache.hadoop.hive.serde2.io.HiveCharWritable cannot be cast to 
 org.apache.hadoop.hive.common.type.HiveChar
   at 
 org.apache.hadoop.hive.ql.exec.vector.VectorColumnAssignFactory$18.assignObjectValue(VectorColumnAssignFactory.java:432)
   at 
 org.apache.hadoop.hive.ql.exec.vector.VectorMapJoinOperator.internalForward(VectorMapJoinOperator.java:196)
   at 
 org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genAllOneUniqueJoinObject(CommonJoinOperator.java:670)
   at 
 org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:748)
   at 
 org.apache.hadoop.hive.ql.exec.MapJoinOperator.processOp(MapJoinOperator.java:299)
   ... 24 more
 create table  if not exists T_TSET1 (RNUM int , C1 int, C2 char(3))
  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
  STORED AS textfile ;
 create table  if not exists T_TSET1 (RNUM int , C1 int, C2 char(3))
  STORED AS ORC ;
 create table  if not exists T_TSET2 (RNUM int , C1 int, C2 char(3))
  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
  STORED AS textfile ;
 TSET1 data
 0|10|AAA
 1|10|AAA
 2|10|AAA
 3|20|BBB
 4|30|CCC
 5|40|DDD
 6|50|\N
 7|60|\N
 8|\N|AAA
 9|\N|AAA
 10|\N|\N
 11|\N|\N
 TSET2 DATA
 0|10|AAA
 1|10|AAA
 2|40|DDD
 3|50|EEE
 4|60|FFF



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9839) HiveServer2 leaks OperationHandle on async queries which fail at compile phase

2015-03-11 Thread Nemon Lou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nemon Lou updated HIVE-9839:

Attachment: OperationHandleMonitor.java

Uploading a BTrace script which can catch this leak.

 HiveServer2 leaks OperationHandle on async queries which fail at compile phase
 --

 Key: HIVE-9839
 URL: https://issues.apache.org/jira/browse/HIVE-9839
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.14.0, 0.13.1, 1.0.0
Reporter: Nemon Lou
 Attachments: OperationHandleMonitor.java, hive-9839.patch


 Use beeline to connect to HiveServer2 and type the following:
 drop table if exists table_not_exists;
 select * from table_not_exists;
 There will be an OperationHandle object staying in HiveServer2's memory forever, 
 even after quitting beeline.
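 A rough sketch of the cleanup this implies on the server side; class and method names are assumptions for illustration, and the attached patch is the authoritative change:
 {code}
 // If an async query fails during compilation, close the freshly created
 // operation instead of leaving its handle registered forever.
 OperationHandle opHandle = operation.getHandle();
 try {
   operation.run(); // compiles, then submits the background work
 } catch (HiveSQLException e) {
   operationManager.closeOperation(opHandle); // release server-side state on compile failure
   throw e;
 }
 return opHandle;
 {code}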



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9839) HiveServer2 leaks OperationHandle on async queries which fail at compile phase

2015-03-11 Thread Nemon Lou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nemon Lou updated HIVE-9839:

Affects Version/s: 1.1.0

 HiveServer2 leaks OperationHandle on async queries which fail at compile phase
 --

 Key: HIVE-9839
 URL: https://issues.apache.org/jira/browse/HIVE-9839
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.14.0, 0.13.1, 1.0.0, 1.1.0
Reporter: Nemon Lou
Priority: Critical
 Attachments: OperationHandleMonitor.java, hive-9839.patch


 Use beeline to connect to HiveServer2 and type the following:
 drop table if exists table_not_exists;
 select * from table_not_exists;
 There will be an OperationHandle object staying in HiveServer2's memory forever, 
 even after quitting beeline.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9839) HiveServer2 leaks OperationHandle on async queries which fail at compile phase

2015-03-11 Thread Nemon Lou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nemon Lou updated HIVE-9839:

Priority: Critical  (was: Major)

 HiveServer2 leaks OperationHandle on async queries which fail at compile phase
 --

 Key: HIVE-9839
 URL: https://issues.apache.org/jira/browse/HIVE-9839
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.14.0, 0.13.1, 1.0.0
Reporter: Nemon Lou
Priority: Critical
 Attachments: OperationHandleMonitor.java, hive-9839.patch


 Use beeline to connect to HiveServer2 and type the following:
 drop table if exists table_not_exists;
 select * from table_not_exists;
 There will be an OperationHandle object staying in HiveServer2's memory forever, 
 even after quitting beeline.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9936) fix potential NPE in DefaultUDAFEvaluatorResolver

2015-03-11 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9936:
--
Attachment: HIVE-9936.1.patch

patch #1

 fix potential NPE in DefaultUDAFEvaluatorResolver
 -

 Key: HIVE-9936
 URL: https://issues.apache.org/jira/browse/HIVE-9936
 Project: Hive
  Issue Type: Bug
  Components: UDF
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
 Attachments: HIVE-9936.1.patch


 In some cases DefaultUDAFEvaluatorResolver calls new 
 AmbiguousMethodException(udafClass, null, null) (line 94).
 This will throw an NPE because AmbiguousMethodException calls 
 argTypeInfos.toString(), and argTypeInfos (the second parameter) must not be null.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9924) Fix union12 and union31 for spark [Spark Branch]

2015-03-11 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-9924:
-
Summary: Fix union12 and union31 for spark [Spark Branch]  (was: Add 
SORT_QUERY_RESULTS to union12.q [Spark Branch])

 Fix union12 and union31 for spark [Spark Branch]
 

 Key: HIVE-9924
 URL: https://issues.apache.org/jira/browse/HIVE-9924
 Project: Hive
  Issue Type: Test
  Components: Spark
Reporter: Rui Li
Assignee: Rui Li
Priority: Minor
 Attachments: HIVE-9924.1-spark.patch, HIVE-9924.2-spark.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9924) Add SORT_QUERY_RESULTS to union12.q [Spark Branch]

2015-03-11 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-9924:
-
Attachment: HIVE-9924.2-spark.patch

 Add SORT_QUERY_RESULTS to union12.q [Spark Branch]
 --

 Key: HIVE-9924
 URL: https://issues.apache.org/jira/browse/HIVE-9924
 Project: Hive
  Issue Type: Test
  Components: Spark
Reporter: Rui Li
Assignee: Rui Li
Priority: Minor
 Attachments: HIVE-9924.1-spark.patch, HIVE-9924.2-spark.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9922) Compile hive failed

2015-03-11 Thread dqpylf (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dqpylf updated HIVE-9922:
-
Attachment: log-hive1.1
log-hive1.0

 Compile hive failed
 ---

 Key: HIVE-9922
 URL: https://issues.apache.org/jira/browse/HIVE-9922
 Project: Hive
  Issue Type: Bug
  Components: Hive
Affects Versions: 1.0.0
 Environment: red hat linux6.3
Reporter: dqpylf
 Attachments: log-hive1.0, log-hive1.1


 Hi,
 Compiling Hive failed; please refer to the following information:
 [INFO] 
 
 [INFO] Reactor Summary:
 [INFO] 
 [INFO] Hive ... SUCCESS [ 31.673 
 s]
 [INFO] Hive Shims Common .. SUCCESS [ 20.184 
 s]
 [INFO] Hive Shims 0.20  SUCCESS [ 10.680 
 s]
 [INFO] Hive Shims Secure Common ... SUCCESS [ 14.380 
 s]
 [INFO] Hive Shims 0.20S ... SUCCESS [  5.792 
 s]
 [INFO] Hive Shims 0.23  SUCCESS [ 25.961 
 s]
 [INFO] Hive Shims . SUCCESS [  1.550 
 s]
 [INFO] Hive Common  SUCCESS [ 30.775 
 s]
 [INFO] Hive Serde . SUCCESS [01:21 
 min]
 [INFO] Hive Metastore . SUCCESS [02:39 
 min]
 [INFO] Hive Ant Utilities . SUCCESS [  4.433 
 s]
 [INFO] Hive Query Language  FAILURE [04:51 
 min]
 [INFO] Hive Service ... SKIPPED
 [INFO] Hive Accumulo Handler .. SKIPPED
 [INFO] Hive JDBC .. SKIPPED
 [INFO] Hive Beeline ... SKIPPED
 [INFO] Hive CLI ... SKIPPED
 [INFO] Hive Contrib ... SKIPPED
 [INFO] Hive HBase Handler . SKIPPED
 [INFO] Hive HCatalog .. SKIPPED
 [INFO] Hive HCatalog Core . SKIPPED
 [INFO] Hive HCatalog Pig Adapter .. SKIPPED
 [INFO] Hive HCatalog Server Extensions  SKIPPED
 [INFO] Hive HCatalog Webhcat Java Client .. SKIPPED
 [INFO] Hive HCatalog Webhcat .. SKIPPED
 [INFO] Hive HCatalog Streaming  SKIPPED
 [INFO] Hive HWI ... SKIPPED
 [INFO] Hive ODBC .. SKIPPED
 [INFO] Hive Shims Aggregator .. SKIPPED
 [INFO] Hive TestUtils . SKIPPED
 [INFO] Hive Packaging . SKIPPED
 [INFO] 
 
 [INFO] BUILD FAILURE
 [INFO] 
 
 [INFO] Total time: 11:26 min
 [INFO] Finished at: 2015-03-10T22:51:30-07:00
 [INFO] Final Memory: 72M/451M
 [INFO] 
 
 [WARNING] The requested profile disist could not be activated because it 
 does not exist.
 [ERROR] Failed to execute goal on project hive-exec: Could not resolve 
 dependencies for project org.apache.hive:hive-exec:jar:1.0.0: The following 
 artifacts could not be resolved: 
 org.pentaho:pentaho-aggdesigner-algorithm:jar:5.1.3-jhyde, 
 eigenbase:eigenbase-properties:jar:1.1.4, net.hydromatic:linq4j:jar:0.4, 
 net.hydromatic:quidem:jar:0.1.1: Could not find artifact 
 org.pentaho:pentaho-aggdesigner-algorithm:jar:5.1.3-jhyde in nexus-osc 
 (http://maven.oschina.net/content/groups/public/) - [Help 1]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)

2015-03-11 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-3454:
---
Release Note: 
The behaviors of converting from BOOLEAN/TINYINT/SMALLINT/INT/BIGINT and 
from FLOAT/DOUBLE to TIMESTAMP are inconsistent: the value of a 
BOOLEAN/TINYINT/SMALLINT/INT/BIGINT is treated as the time in milliseconds, 
while the value of a FLOAT/DOUBLE is treated as the time in seconds. After the 
change, all types are interpreted as seconds during the conversion.


  was:
The behaviors of converting from BOOLEAN/TINYINT/SMALLINT/INT/BIGINT and 
converting from FLOAT/DOUBLE to TIMESTAMP have been inconsistent. The value of 
a BOOLEAN/TINYINT/SMALLINT/INT/BIGINT is treated as the time in milliseconds 
while  the value of a FLOAT/DOUBLE is treated as the time in seconds. 

With the change of HIVE-3454, we support an additional configuration, 
hive.int.timestamp.conversion.in.seconds, to enable interpreting the 
BOOLEAN/BYTE/SHORT/INT/BIGINT value in seconds during the timestamp conversion 
without breaking existing customers. By default, the existing functionality 
is kept.


 Problem with CAST(BIGINT as TIMESTAMP)
 --

 Key: HIVE-3454
 URL: https://issues.apache.org/jira/browse/HIVE-3454
 Project: Hive
  Issue Type: Bug
  Components: Types, UDF
Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 
 0.13.1
Reporter: Ryan Harris
Assignee: Aihua Xu
  Labels: newbie, newdev, patch
 Attachments: HIVE-3454.1.patch.txt, HIVE-3454.2.patch, 
 HIVE-3454.3.patch, HIVE-3454.patch


 Ran into an issue while working with timestamp conversion.
 CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current 
 time from the BIGINT returned by unix_timestamp().
 Instead, however, a 1970-01-16 timestamp is returned.
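 The 1970-01-16 result falls out of a plain seconds-versus-milliseconds mix-up, which a few lines of Java make obvious (illustration only, not the Hive conversion code):
 {code}
 long unixSeconds = System.currentTimeMillis() / 1000L;           // what unix_timestamp() returns
 System.out.println(new java.sql.Timestamp(unixSeconds));         // seconds misread as millis -> mid-January 1970
 System.out.println(new java.sql.Timestamp(unixSeconds * 1000L)); // interpreted as seconds -> the expected current time
 {code}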



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9924) Add SORT_QUERY_RESULTS to union12.q

2015-03-11 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-9924:
--
Attachment: HIVE-9924.1-spark.patch

Attached a dummy patch to trigger a clean test run for Spark branch to find out 
any test failures.

 Add SORT_QUERY_RESULTS to union12.q
 ---

 Key: HIVE-9924
 URL: https://issues.apache.org/jira/browse/HIVE-9924
 Project: Hive
  Issue Type: Test
Reporter: Rui Li
Assignee: Rui Li
Priority: Minor
 Attachments: HIVE-9924.1-spark.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-5317) Implement insert, update, and delete in Hive with full ACID support

2015-03-11 Thread Fanhong Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358084#comment-14358084
 ] 

Fanhong Li commented on HIVE-5317:
--

insert into table values() when UTF-8 character is not correct



insert into table test_acid partition(pt='pt_2')
 values( 2, '中文_2' , 'city_2' )
 ;

hive> select *
  from test_acid 
  ;
 OK
 2 -�_2 city_2 pt_2
 Time taken: 0.237 seconds, Fetched: 1 row(s)
 hive>

CREATE TABLE test_acid(id INT, 
 name STRING, 
 city STRING) 
 PARTITIONED BY (pt STRING)
 clustered by (id) into 1 buckets
 stored as ORCFILE
 TBLPROPERTIES('transactional'='true')
 ;
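For what it's worth, the mangled value looks like a charset round-trip problem; a standalone Java illustration of how multi-byte UTF-8 text gets garbled (not the actual Hive/ORC write path):
{code}
byte[] utf8 = "中文_2".getBytes(java.nio.charset.StandardCharsets.UTF_8);
// Decoding UTF-8 bytes with a single-byte charset garbles the multi-byte characters:
String wrong = new String(utf8, java.nio.charset.StandardCharsets.ISO_8859_1);
String right = new String(utf8, java.nio.charset.StandardCharsets.UTF_8);
System.out.println(wrong + " vs " + right);
{code}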


 Implement insert, update, and delete in Hive with full ACID support
 ---

 Key: HIVE-5317
 URL: https://issues.apache.org/jira/browse/HIVE-5317
 Project: Hive
  Issue Type: New Feature
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 0.14.0

 Attachments: InsertUpdatesinHive.pdf


 Many customers want to be able to insert, update and delete rows from Hive 
 tables with full ACID support. The use cases are varied, but the form of the 
 queries that should be supported are:
 * INSERT INTO tbl SELECT …
 * INSERT INTO tbl VALUES ...
 * UPDATE tbl SET … WHERE …
 * DELETE FROM tbl WHERE …
 * MERGE INTO tbl USING src ON … WHEN MATCHED THEN ... WHEN NOT MATCHED THEN 
 ...
 * SET TRANSACTION LEVEL …
 * BEGIN/END TRANSACTION
 Use Cases
 * Once an hour, a set of inserts and updates (up to 500k rows) for various 
 dimension tables (eg. customer, inventory, stores) needs to be processed. The 
 dimension tables have primary keys and are typically bucketed and sorted on 
 those keys.
 * Once a day a small set (up to 100k rows) of records need to be deleted for 
 regulatory compliance.
 * Once an hour a log of transactions is exported from a RDBS and the fact 
 tables need to be updated (up to 1m rows)  to reflect the new data. The 
 transactions are a combination of inserts, updates, and deletes. The table is 
 partitioned and bucketed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9915) Allow specifying file format for managed tables

2015-03-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358111#comment-14358111
 ] 

Hive QA commented on HIVE-9915:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12703732/HIVE-9915.1.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 7763 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_default_file_format
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udaf_context_ngrams
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3010/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3010/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3010/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12703732 - PreCommit-HIVE-TRUNK-Build

 Allow specifying file format for managed tables
 ---

 Key: HIVE-9915
 URL: https://issues.apache.org/jira/browse/HIVE-9915
 Project: Hive
  Issue Type: Bug
  Components: File Formats
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Attachments: HIVE-9915.1.patch


 We already allow setting a system-wide default format. In some cases it's 
 useful, though, to specify this only for managed tables, or to distinguish 
 external and managed tables via two variables. You might want to set a more 
 efficient format (than text) for managed tables, but leave external tables as 
 text (as they are often log files, etc.).
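 A hedged sketch of how such a split default could be resolved; the managed-table property name below is an assumption for illustration, not necessarily what the patch introduces:
 {code}
 // Pick the default storage format for a table being created.
 static String defaultFormat(Configuration conf, boolean isExternalTable) {
   String managedDefault = conf.get("hive.default.fileformat.managed", "none");
   if (isExternalTable || "none".equalsIgnoreCase(managedDefault)) {
     return conf.get("hive.default.fileformat", "TextFile"); // system-wide default
   }
   return managedDefault; // managed tables get their own default
 }
 {code}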



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9924) Fix union12 and union31 for spark [Spark Branch]

2015-03-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358128#comment-14358128
 ] 

Hive QA commented on HIVE-9924:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12704081/HIVE-9924.2-spark.patch

{color:green}SUCCESS:{color} +1 7644 tests passed

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/786/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/786/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-786/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12704081 - PreCommit-HIVE-SPARK-Build

 Fix union12 and union31 for spark [Spark Branch]
 

 Key: HIVE-9924
 URL: https://issues.apache.org/jira/browse/HIVE-9924
 Project: Hive
  Issue Type: Test
  Components: Spark
Reporter: Rui Li
Assignee: Rui Li
Priority: Minor
 Attachments: HIVE-9924.1-spark.patch, HIVE-9924.2-spark.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9601) New Beeline queries will hang If Beeline terminates in-properly [Spark Branch]

2015-03-11 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358042#comment-14358042
 ] 

Szehon Ho commented on HIVE-9601:
-

Thanks Chao for committing, I did not have access for the last few days :)

 New Beeline queries will hang If Beeline terminates in-properly [Spark Branch]
 --

 Key: HIVE-9601
 URL: https://issues.apache.org/jira/browse/HIVE-9601
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Szehon Ho
Assignee: Jimmy Xiang
 Fix For: spark-branch

 Attachments: HIVE-9601.1-spark.patch, HIVE-9601.1-spark.patch, 
 HIVE-9601.2-spark.patch


 The user session's Spark application seems to stay around if Beeline is not quit 
 properly (!quit), because the user is not disconnected.
 If Beeline is started again, it will create a new Spark application which will hang 
 waiting for the first one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9625) Delegation tokens for HMS are not renewed

2015-03-11 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358052#comment-14358052
 ] 

Xuefu Zhang commented on HIVE-9625:
---

[~brocknoland], [~prasadm], could we move this forward?

 Delegation tokens for HMS are not renewed
 -

 Key: HIVE-9625
 URL: https://issues.apache.org/jira/browse/HIVE-9625
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-9625.1.patch


 AFAICT the delegation tokens stored in [HiveSessionImplwithUGI 
 |https://github.com/apache/hive/blob/trunk/service/src/java/org/apache/hive/service/cli/session/HiveSessionImplwithUGI.java#L45]
  for HMS + Impersonation are never renewed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9935) Fix tests for java 1.8 [Spark Branch]

2015-03-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14357931#comment-14357931
 ] 

Hive QA commented on HIVE-9935:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12704052/HIVE-9935.1-spark.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 7644 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_union12
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_union31
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAddPartitionWithUnicode
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/785/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/785/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-785/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12704052 - PreCommit-HIVE-SPARK-Build

 Fix tests for java 1.8 [Spark Branch]
 -

 Key: HIVE-9935
 URL: https://issues.apache.org/jira/browse/HIVE-9935
 Project: Hive
  Issue Type: Bug
  Components: spark-branch
Affects Versions: spark-branch
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: spark-branch

 Attachments: HIVE-9935.1-spark.patch


 In the Spark branch, these tests don't have Java 1.8 golden files:
 join0.q
 list_bucket_dml_2.q
 subquery_multiinsert.q



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9932) DDLTask.conf hides base class Task.conf

2015-03-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358015#comment-14358015
 ] 

Hive QA commented on HIVE-9932:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12703990/HIVE-9932.1.patch

{color:green}SUCCESS:{color} +1 7762 tests passed

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3008/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3008/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3008/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12703990 - PreCommit-HIVE-TRUNK-Build

 DDLTask.conf hides base class Task.conf
 ---

 Key: HIVE-9932
 URL: https://issues.apache.org/jira/browse/HIVE-9932
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
Priority: Minor
 Attachments: HIVE-9932.1.patch


 DDLTask defines a field conf.
 DDLTask extends Task, and Task also defines a protected field conf (which is 
 accessible from DDLTask).
 Probably we should remove the conf field from DDLTask.
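 The hazard in a self-contained sketch (the names mirror the report; these are not the real classes):
 {code}
 class Task {
   protected String conf = "from Task";
 }

 class DDLTask extends Task {
   private String conf = "from DDLTask"; // hides Task.conf; the two fields can silently diverge

   void show() {
     System.out.println(this.conf);  // "from DDLTask"
     System.out.println(super.conf); // "from Task"
   }
 }
 {code}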



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)