[jira] [Created] (HIVE-6189) Support top level union all statements

2014-01-13 Thread Gunther Hagleitner (JIRA)
Gunther Hagleitner created HIVE-6189:


 Summary: Support top level union all statements
 Key: HIVE-6189
 URL: https://issues.apache.org/jira/browse/HIVE-6189
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner


I've always wondered why union all has to be in subqueries in hive.

After looking at it, problems are:

- Hive Parser:
  - Union happens at the wrong place: (insert ... select ... union all select 
...) is parsed as (insert select) union select.
  - There are many rewrite rules in the parser to force any query into the 
from - insert - select form, no doubt for historical reasons.
- Plan generation/semantic analysis assumes top level TOK_QUERY and not top 
level TOK_UNION.

The rewrite rules don't work when we move the UNION ALL recursion into the 
select statements. However, it's not hard to do that in code.
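
For illustration, a minimal HiveQL sketch of the two forms (table and column 
names such as src_a/src_b are hypothetical): today the union must be wrapped 
in a subquery, while this issue aims to accept the top-level form directly.

{code}
-- currently required: wrap the union all in a subquery
select * from (
  select key, value from src_a
  union all
  select key, value from src_b
) u;

-- goal of this issue: allow union all at the top level
select key, value from src_a
union all
select key, value from src_b;
{code}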



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-5945) ql.plan.ConditionalResolverCommonJoin.resolveMapJoinTask also sums those tables which are not used in the child of this conditional task.

2014-01-13 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13869351#comment-13869351
 ] 

Hive QA commented on HIVE-5945:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12622571/HIVE-5945.7.patch.txt

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 4917 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join30
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join31
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_filters
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union22
org.apache.hadoop.hive.ql.plan.TestConditionalResolverCommonJoin.testResolvingDriverAlias
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/877/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/877/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12622571

 ql.plan.ConditionalResolverCommonJoin.resolveMapJoinTask also sums those 
 tables which are not used in the child of this conditional task.
 -

 Key: HIVE-5945
 URL: https://issues.apache.org/jira/browse/HIVE-5945
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.8.0, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0
Reporter: Yin Huai
Assignee: Navis
Priority: Critical
 Attachments: HIVE-5945.1.patch.txt, HIVE-5945.2.patch.txt, 
 HIVE-5945.3.patch.txt, HIVE-5945.4.patch.txt, HIVE-5945.5.patch.txt, 
 HIVE-5945.6.patch.txt, HIVE-5945.7.patch.txt


 Here is an example
 {code}
 select
i_item_id,
s_state,
avg(ss_quantity) agg1,
avg(ss_list_price) agg2,
avg(ss_coupon_amt) agg3,
avg(ss_sales_price) agg4
 FROM store_sales
 JOIN date_dim on (store_sales.ss_sold_date_sk = date_dim.d_date_sk)
 JOIN item on (store_sales.ss_item_sk = item.i_item_sk)
 JOIN customer_demographics on (store_sales.ss_cdemo_sk = 
 customer_demographics.cd_demo_sk)
 JOIN store on (store_sales.ss_store_sk = store.s_store_sk)
 where
cd_gender = 'F' and
cd_marital_status = 'U' and
cd_education_status = 'Primary' and
d_year = 2002 and
s_state in ('GA','PA', 'LA', 'SC', 'MI', 'AL')
 group by
i_item_id,
s_state
 order by
i_item_id,
s_state
 limit 100;
 {code}
 I turned off noconditionaltask, so I expected that there would be 4 Map-only 
 jobs for this query. However, I got 1 Map-only job (joining store_sales and 
 date_dim) and 3 MR jobs (for the reduce joins).
 So, I checked the conditional task determining the plan of the join involving 
 item. In ql.plan.ConditionalResolverCommonJoin.resolveMapJoinTask, 
 aliasToFileSizeMap contains all input tables used in this query and the 
 intermediate table generated by joining store_sales and date_dim. So, when we 
 sum the sizes of all small tables, the size of store_sales (which is around 
 45GB in my test) is also counted.
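
For reference, a minimal sketch of the configuration the reporter is describing 
(an assumption on my part: "noconditionaltask" refers to the 
hive.auto.convert.join.noconditionaltask property; the exact settings used are 
not shown in this thread):

{code}
-- keep automatic map-join conversion on, but disable the n-way merge so
-- each join gets its own conditional task resolved at runtime
set hive.auto.convert.join=true;
set hive.auto.convert.join.noconditionaltask=false;
{code}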



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6189) Support top level union all statements

2014-01-13 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-6189:
-

Attachment: HIVE-6189.1.patch

 Support top level union all statements
 --

 Key: HIVE-6189
 URL: https://issues.apache.org/jira/browse/HIVE-6189
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Attachments: HIVE-6189.1.patch


 I've always wondered why union all has to be in subqueries in hive.
 After looking at it, problems are:
 - Hive Parser:
   - Union happens at the wrong place: (insert ... select ... union all select 
 ...) is parsed as (insert select) union select.
   - There are many rewrite rules in the parser to force any query into the 
 from - insert - select form, no doubt for historical reasons.
 - Plan generation/semantic analysis assumes top level TOK_QUERY and not top 
 level TOK_UNION.
 The rewrite rules don't work when we move the UNION ALL recursion into the 
 select statements. However, it's not hard to do that in code.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-5945) ql.plan.ConditionalResolverCommonJoin.resolveMapJoinTask also sums those tables which are not used in the child of this conditional task.

2014-01-13 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-5945:


Status: Open  (was: Patch Available)

 ql.plan.ConditionalResolverCommonJoin.resolveMapJoinTask also sums those 
 tables which are not used in the child of this conditional task.
 -

 Key: HIVE-5945
 URL: https://issues.apache.org/jira/browse/HIVE-5945
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.12.0, 0.11.0, 0.10.0, 0.9.0, 0.8.0, 0.13.0
Reporter: Yin Huai
Assignee: Navis
Priority: Critical
 Attachments: HIVE-5945.1.patch.txt, HIVE-5945.2.patch.txt, 
 HIVE-5945.3.patch.txt, HIVE-5945.4.patch.txt, HIVE-5945.5.patch.txt, 
 HIVE-5945.6.patch.txt, HIVE-5945.7.patch.txt


 Here is an example
 {code}
 select
i_item_id,
s_state,
avg(ss_quantity) agg1,
avg(ss_list_price) agg2,
avg(ss_coupon_amt) agg3,
avg(ss_sales_price) agg4
 FROM store_sales
 JOIN date_dim on (store_sales.ss_sold_date_sk = date_dim.d_date_sk)
 JOIN item on (store_sales.ss_item_sk = item.i_item_sk)
 JOIN customer_demographics on (store_sales.ss_cdemo_sk = 
 customer_demographics.cd_demo_sk)
 JOIN store on (store_sales.ss_store_sk = store.s_store_sk)
 where
cd_gender = 'F' and
cd_marital_status = 'U' and
cd_education_status = 'Primary' and
d_year = 2002 and
s_state in ('GA','PA', 'LA', 'SC', 'MI', 'AL')
 group by
i_item_id,
s_state
 order by
i_item_id,
s_state
 limit 100;
 {code}
 I turned off noconditionaltask, so I expected that there would be 4 Map-only 
 jobs for this query. However, I got 1 Map-only job (joining store_sales and 
 date_dim) and 3 MR jobs (for the reduce joins).
 So, I checked the conditional task determining the plan of the join involving 
 item. In ql.plan.ConditionalResolverCommonJoin.resolveMapJoinTask, 
 aliasToFileSizeMap contains all input tables used in this query and the 
 intermediate table generated by joining store_sales and date_dim. So, when we 
 sum the sizes of all small tables, the size of store_sales (which is around 
 45GB in my test) is also counted.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6189) Support top level union all statements

2014-01-13 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-6189:
-

Status: Patch Available  (was: Open)

 Support top level union all statements
 --

 Key: HIVE-6189
 URL: https://issues.apache.org/jira/browse/HIVE-6189
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Attachments: HIVE-6189.1.patch


 I've always wondered why union all has to be in subqueries in hive.
 After looking at it, problems are:
 - Hive Parser:
   - Union happens at the wrong place: (insert ... select ... union all select 
 ...) is parsed as (insert select) union select.
   - There are many rewrite rules in the parser to force any query into the 
 from - insert - select form, no doubt for historical reasons.
 - Plan generation/semantic analysis assumes top level TOK_QUERY and not top 
 level TOK_UNION.
 The rewrite rules don't work when we move the UNION ALL recursion into the 
 select statements. However, it's not hard to do that in code.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Review Request 16818: HIVE-6189: Support top level union all statements

2014-01-13 Thread Gunther Hagleitner

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/16818/
---

Review request for hive.


Repository: hive-git


Description
---

https://issues.apache.org/jira/browse/HIVE-6189


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java 3dbbff4 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ColumnStatsSemanticAnalyzer.java 
5b77e6f 
  ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 713bd54 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ExplainSemanticAnalyzer.java 
327 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ExportSemanticAnalyzer.java 
be0ad62 
  ql/src/java/org/apache/hadoop/hive/ql/parse/FunctionSemanticAnalyzer.java 
da917f7 
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g 5dff3fe 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java 
1ab5a60 
  ql/src/java/org/apache/hadoop/hive/ql/parse/LoadSemanticAnalyzer.java 5663fca 
  ql/src/java/org/apache/hadoop/hive/ql/parse/MacroSemanticAnalyzer.java 
b42a425 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 8e68fcf 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SubQueryUtils.java 8ffbe07 
  ql/src/java/org/apache/hadoop/hive/ql/parse/UnparseTranslator.java 93e3ad3 
  ql/src/test/queries/clientnegative/union.q e3c5c83 
  ql/src/test/queries/clientpositive/union_top_level.q PRE-CREATION 
  ql/src/test/results/clientnegative/union.q.out b66d394 
  ql/src/test/results/clientpositive/union_top_level.q.out PRE-CREATION 

Diff: https://reviews.apache.org/r/16818/diff/


Testing
---

union_top_level.q contains tests for select, insert into, insert overwrite, 
ctas and views


Thanks,

Gunther Hagleitner



[jira] [Updated] (HIVE-5945) ql.plan.ConditionalResolverCommonJoin.resolveMapJoinTask also sums those tables which are not used in the child of this conditional task.

2014-01-13 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-5945:


Attachment: HIVE-5945.8.patch.txt

 ql.plan.ConditionalResolverCommonJoin.resolveMapJoinTask also sums those 
 tables which are not used in the child of this conditional task.
 -

 Key: HIVE-5945
 URL: https://issues.apache.org/jira/browse/HIVE-5945
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.8.0, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0
Reporter: Yin Huai
Assignee: Navis
Priority: Critical
 Attachments: HIVE-5945.1.patch.txt, HIVE-5945.2.patch.txt, 
 HIVE-5945.3.patch.txt, HIVE-5945.4.patch.txt, HIVE-5945.5.patch.txt, 
 HIVE-5945.6.patch.txt, HIVE-5945.7.patch.txt, HIVE-5945.8.patch.txt


 Here is an example
 {code}
 select
i_item_id,
s_state,
avg(ss_quantity) agg1,
avg(ss_list_price) agg2,
avg(ss_coupon_amt) agg3,
avg(ss_sales_price) agg4
 FROM store_sales
 JOIN date_dim on (store_sales.ss_sold_date_sk = date_dim.d_date_sk)
 JOIN item on (store_sales.ss_item_sk = item.i_item_sk)
 JOIN customer_demographics on (store_sales.ss_cdemo_sk = 
 customer_demographics.cd_demo_sk)
 JOIN store on (store_sales.ss_store_sk = store.s_store_sk)
 where
cd_gender = 'F' and
cd_marital_status = 'U' and
cd_education_status = 'Primary' and
d_year = 2002 and
s_state in ('GA','PA', 'LA', 'SC', 'MI', 'AL')
 group by
i_item_id,
s_state
 order by
i_item_id,
s_state
 limit 100;
 {code}
 I turned off noconditionaltask, so I expected that there would be 4 Map-only 
 jobs for this query. However, I got 1 Map-only job (joining store_sales and 
 date_dim) and 3 MR jobs (for the reduce joins).
 So, I checked the conditional task determining the plan of the join involving 
 item. In ql.plan.ConditionalResolverCommonJoin.resolveMapJoinTask, 
 aliasToFileSizeMap contains all input tables used in this query and the 
 intermediate table generated by joining store_sales and date_dim. So, when we 
 sum the sizes of all small tables, the size of store_sales (which is around 
 45GB in my test) is also counted.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Re: Review Request 16172: ql.plan.ConditionalResolverCommonJoin.resolveMapJoinTask also sums those tables which are not used in the child of this conditional task.

2014-01-13 Thread Navis Ryu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/16172/
---

(Updated Jan. 13, 2014, 8:46 a.m.)


Review request for hive.


Changes
---

Fixed test failures


Bugs: HIVE-5945
https://issues.apache.org/jira/browse/HIVE-5945


Repository: hive-git


Description
---

Here is an example
{code}
select
   i_item_id,
   s_state,
   avg(ss_quantity) agg1,
   avg(ss_list_price) agg2,
   avg(ss_coupon_amt) agg3,
   avg(ss_sales_price) agg4
FROM store_sales
JOIN date_dim on (store_sales.ss_sold_date_sk = date_dim.d_date_sk)
JOIN item on (store_sales.ss_item_sk = item.i_item_sk)
JOIN customer_demographics on (store_sales.ss_cdemo_sk = 
customer_demographics.cd_demo_sk)
JOIN store on (store_sales.ss_store_sk = store.s_store_sk)
where
   cd_gender = 'F' and
   cd_marital_status = 'U' and
   cd_education_status = 'Primary' and
   d_year = 2002 and
   s_state in ('GA','PA', 'LA', 'SC', 'MI', 'AL')
group by
   i_item_id,
   s_state
order by
   i_item_id,
   s_state
limit 100;
{code}
I turned off noconditionaltask, so I expected that there would be 4 Map-only 
jobs for this query. However, I got 1 Map-only job (joining store_sales and 
date_dim) and 3 MR jobs (for the reduce joins).

So, I checked the conditional task determining the plan of the join involving 
item. In ql.plan.ConditionalResolverCommonJoin.resolveMapJoinTask, 
aliasToFileSizeMap contains all input tables used in this query and the 
intermediate table generated by joining store_sales and date_dim. So, when we 
sum the sizes of all small tables, the size of store_sales (which is around 45GB 
in my test) is also counted.


Diffs (updated)
-

  ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java fccea89 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/CommonJoinTaskDispatcher.java
 efa9768 
  ql/src/java/org/apache/hadoop/hive/ql/plan/ConditionalResolverCommonJoin.java 
f75e366 
  
ql/src/test/org/apache/hadoop/hive/ql/plan/TestConditionalResolverCommonJoin.java
 67203c9 
  ql/src/test/results/clientpositive/auto_join25.q.out 7427239 
  ql/src/test/results/clientpositive/infer_bucket_sort_convert_join.q.out 
7d06739 
  ql/src/test/results/clientpositive/mapjoin_hook.q.out d60d16e 

Diff: https://reviews.apache.org/r/16172/diff/


Testing
---


Thanks,

Navis Ryu



[jira] [Commented] (HIVE-6046) add UDF for converting date time from one presentation to another

2014-01-13 Thread Kostiantyn Kudriavtsev (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13869361#comment-13869361
 ] 

Kostiantyn Kudriavtsev commented on HIVE-6046:
--

Hi all, any progress on moving this forward? 
I don't see any activity on this issue/review.

 add  UDF for converting date time from one presentation to another
 --

 Key: HIVE-6046
 URL: https://issues.apache.org/jira/browse/HIVE-6046
 Project: Hive
  Issue Type: New Feature
  Components: UDF
Affects Versions: 0.13.0
Reporter: Kostiantyn Kudriavtsev
Assignee: Kostiantyn Kudriavtsev
 Attachments: hive-6046.patch


 It'd be nice to have a function for converting datetime values to different 
 formats, for example:
 format_date('2013-12-12 00:00:00.0', 'yyyy-MM-dd HH:mm:ss.S', 'yyyy/MM/dd')
 There are two signatures to make it easier to use:
 format_date(datetime, fromFormat, toFormat)
 format_date(timestamp, toFormat)
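
A hedged usage sketch of the proposed UDF (it is not part of Hive yet; the 
signatures follow the description above, and ts_col/some_table are hypothetical 
names):

{code}
-- proposed: convert a datetime string from one format to another
select format_date('2013-12-12 00:00:00.0', 'yyyy-MM-dd HH:mm:ss.S', 'yyyy/MM/dd');

-- proposed: format a timestamp column directly
select format_date(ts_col, 'yyyy/MM/dd') from some_table;
{code}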
  



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-5941) SQL std auth - support 'show all roles'

2014-01-13 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-5941:


Status: Open  (was: Patch Available)

 SQL std auth - support 'show all roles'
 ---

 Key: HIVE-5941
 URL: https://issues.apache.org/jira/browse/HIVE-5941
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Navis
 Attachments: HIVE-5941.1.patch.txt, HIVE-5941.2.patch.txt, 
 HIVE-5941.3.patch.txt, HIVE-5941.4.patch.txt

   Original Estimate: 24h
  Remaining Estimate: 24h

 SHOW ALL ROLES - This will list all
 currently existing roles. This will be available only to the superuser.
 This task includes parser changes.
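
For illustration, the statement this parser change is meant to accept (a 
sketch based on the description; per the description it would be restricted to 
the superuser):

{code}
-- proposed: list every existing role
SHOW ALL ROLES;
{code}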



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-5945) ql.plan.ConditionalResolverCommonJoin.resolveMapJoinTask also sums those tables which are not used in the child of this conditional task.

2014-01-13 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-5945:


Status: Patch Available  (was: Open)

 ql.plan.ConditionalResolverCommonJoin.resolveMapJoinTask also sums those 
 tables which are not used in the child of this conditional task.
 -

 Key: HIVE-5945
 URL: https://issues.apache.org/jira/browse/HIVE-5945
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.12.0, 0.11.0, 0.10.0, 0.9.0, 0.8.0, 0.13.0
Reporter: Yin Huai
Assignee: Navis
Priority: Critical
 Attachments: HIVE-5945.1.patch.txt, HIVE-5945.2.patch.txt, 
 HIVE-5945.3.patch.txt, HIVE-5945.4.patch.txt, HIVE-5945.5.patch.txt, 
 HIVE-5945.6.patch.txt, HIVE-5945.7.patch.txt, HIVE-5945.8.patch.txt


 Here is an example
 {code}
 select
i_item_id,
s_state,
avg(ss_quantity) agg1,
avg(ss_list_price) agg2,
avg(ss_coupon_amt) agg3,
avg(ss_sales_price) agg4
 FROM store_sales
 JOIN date_dim on (store_sales.ss_sold_date_sk = date_dim.d_date_sk)
 JOIN item on (store_sales.ss_item_sk = item.i_item_sk)
 JOIN customer_demographics on (store_sales.ss_cdemo_sk = 
 customer_demographics.cd_demo_sk)
 JOIN store on (store_sales.ss_store_sk = store.s_store_sk)
 where
cd_gender = 'F' and
cd_marital_status = 'U' and
cd_education_status = 'Primary' and
d_year = 2002 and
s_state in ('GA','PA', 'LA', 'SC', 'MI', 'AL')
 group by
i_item_id,
s_state
 order by
i_item_id,
s_state
 limit 100;
 {code}
 I turned off noconditionaltask, so I expected that there would be 4 Map-only 
 jobs for this query. However, I got 1 Map-only job (joining store_sales and 
 date_dim) and 3 MR jobs (for the reduce joins).
 So, I checked the conditional task determining the plan of the join involving 
 item. In ql.plan.ConditionalResolverCommonJoin.resolveMapJoinTask, 
 aliasToFileSizeMap contains all input tables used in this query and the 
 intermediate table generated by joining store_sales and date_dim. So, when we 
 sum the sizes of all small tables, the size of store_sales (which is around 
 45GB in my test) is also counted.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-5941) SQL std auth - support 'show all roles'

2014-01-13 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-5941:


Attachment: HIVE-5941.5.patch.txt

 SQL std auth - support 'show all roles'
 ---

 Key: HIVE-5941
 URL: https://issues.apache.org/jira/browse/HIVE-5941
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Navis
 Attachments: HIVE-5941.1.patch.txt, HIVE-5941.2.patch.txt, 
 HIVE-5941.3.patch.txt, HIVE-5941.4.patch.txt, HIVE-5941.5.patch.txt

   Original Estimate: 24h
  Remaining Estimate: 24h

 SHOW ALL ROLES - This will list all
 currently existing roles. This will be available only to the superuser.
 This task includes parser changes.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Re: Review Request 16643: SQL std auth - support 'show all roles'

2014-01-13 Thread Navis Ryu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/16643/
---

(Updated Jan. 13, 2014, 8:50 a.m.)


Review request for hive.


Changes
---

missed updating test result file


Bugs: HIVE-5941
https://issues.apache.org/jira/browse/HIVE-5941


Repository: hive-git


Description
---

SHOW ALL ROLES - This will list all
currently existing roles. This will be available only to the superuser.
This task includes parser changes.


Diffs (updated)
-

  ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 9e4f1c7 
  ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 713bd54 
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g da745d7 
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g 5dff3fe 
  ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g 9b6fc3b 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzerFactory.java 
542d59a 
  ql/src/java/org/apache/hadoop/hive/ql/plan/HiveOperation.java bfd6b77 
  ql/src/java/org/apache/hadoop/hive/ql/plan/RoleDDLDesc.java 99dadb0 
  ql/src/test/queries/clientpositive/show_roles.q PRE-CREATION 
  ql/src/test/results/clientpositive/show_roles.q.out PRE-CREATION 

Diff: https://reviews.apache.org/r/16643/diff/


Testing
---


Thanks,

Navis Ryu



[jira] [Updated] (HIVE-5941) SQL std auth - support 'show all roles'

2014-01-13 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-5941:


Status: Patch Available  (was: Open)

 SQL std auth - support 'show all roles'
 ---

 Key: HIVE-5941
 URL: https://issues.apache.org/jira/browse/HIVE-5941
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Navis
 Attachments: HIVE-5941.1.patch.txt, HIVE-5941.2.patch.txt, 
 HIVE-5941.3.patch.txt, HIVE-5941.4.patch.txt, HIVE-5941.5.patch.txt

   Original Estimate: 24h
  Remaining Estimate: 24h

 SHOW ALL ROLES - This will list all
 currently existing roles. This will be available only to the superuser.
 This task includes parser changes.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6182) LDAP Authentication errors need to be more informative

2014-01-13 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13869375#comment-13869375
 ] 

Szehon Ho commented on HIVE-6182:
-

Hi, I could not figure out an easy way to fix it; I think it will need another 
JIRA if it is to be fixed.

 LDAP Authentication errors need to be more informative
 --

 Key: HIVE-6182
 URL: https://issues.apache.org/jira/browse/HIVE-6182
 Project: Hive
  Issue Type: Improvement
  Components: Authentication
Affects Versions: 0.13.0
Reporter: Szehon Ho
Assignee: Szehon Ho
 Attachments: HIVE-6182.patch


 There are a host of errors that can happen when logging into an LDAP-enabled 
 Hive-server2 from beeline.  But for any error there is only a generic log 
 message:
 {code}
 SASL negotiation failure
 javax.security.sasl.SaslException: PLAIN auth failed: Error validating LDAP 
 user
   at 
 org.apache.hadoop.security.SaslPlainServer.evaluateResponse(SaslPlainServer.java:108)
   at 
 org.apache.thrift.transport.TSaslTransport$SaslParticipant.evaluateChallengeOrRespons
 {code}
 And on Beeline side there is only an even more unhelpful message:
 {code}
 Error: Invalid URL: jdbc:hive2://localhost:1/default (state=08S01,code=0)
 {code}
 It would be good to print out the underlying error message at least in the 
 log, if not in Beeline. But today they are swallowed. This is bad because the 
 underlying message is the most important one, carrying the error codes shown 
 here: [LDAP error 
 code|https://wiki.servicenow.com/index.php?title=LDAP_Error_Codes]
 Beeline seems to throw that exception for any error during connection, 
 authentication or otherwise.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-1662) Add file pruning into Hive.

2014-01-13 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13869382#comment-13869382
 ] 

Lefty Leverenz commented on HIVE-1662:
--

This adds hive.optimize.ppd.vc.filename in HiveConf.java, so documentation 
should be added in hive-default.xml.template (and the wiki, but that's not part 
of the patch).

 Add file pruning into Hive.
 ---

 Key: HIVE-1662
 URL: https://issues.apache.org/jira/browse/HIVE-1662
 Project: Hive
  Issue Type: New Feature
Reporter: He Yongqiang
Assignee: Navis
 Attachments: HIVE-1662.10.patch.txt, HIVE-1662.8.patch.txt, 
 HIVE-1662.9.patch.txt, HIVE-1662.D8391.1.patch, HIVE-1662.D8391.2.patch, 
 HIVE-1662.D8391.3.patch, HIVE-1662.D8391.4.patch, HIVE-1662.D8391.5.patch, 
 HIVE-1662.D8391.6.patch, HIVE-1662.D8391.7.patch


 Now Hive supports a filename virtual column. 
 If a file name filter is present in a query, Hive should be able to add only 
 the files that pass the filter to the input paths.
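
For illustration, a sketch of the kind of query this would help (a minimal, 
hypothetical example; INPUT__FILE__NAME is Hive's file-name virtual column, and 
some_table and the path pattern are made up):

{code}
-- with file pruning, only the files matching the filter would be added to
-- the input paths, instead of scanning every file backing the table
select key, value
from some_table
where INPUT__FILE__NAME like '%/part-00042%';
{code}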



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-1662) Add file pruning into Hive.

2014-01-13 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13869385#comment-13869385
 ] 

Hive QA commented on HIVE-1662:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12622577/HIVE-1662.10.patch.txt

{color:red}ERROR:{color} -1 due to 30 failed/errored test(s), 4918 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join22
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join30
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join31
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_explain_rearrange
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join28
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join29
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join31
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join32
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join32_lessSize
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join33
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join35
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_star
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_mapjoin_mapjoin
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_mapjoin_subquery
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_mapjoin_subquery2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_multiMapJoin1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_multiMapJoin2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_multi_join_union
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_nonblock_op_deduplicate
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_reduce_deduplicate_exclude_join
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subq_where_serialization
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_in_having
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorized_context
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/878/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/878/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 30 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12622577

 Add file pruning into Hive.
 ---

 Key: HIVE-1662
 URL: https://issues.apache.org/jira/browse/HIVE-1662
 Project: Hive
  Issue Type: New Feature
Reporter: He Yongqiang
Assignee: Navis
 Attachments: HIVE-1662.10.patch.txt, HIVE-1662.8.patch.txt, 
 HIVE-1662.9.patch.txt, HIVE-1662.D8391.1.patch, HIVE-1662.D8391.2.patch, 
 HIVE-1662.D8391.3.patch, HIVE-1662.D8391.4.patch, HIVE-1662.D8391.5.patch, 
 HIVE-1662.D8391.6.patch, HIVE-1662.D8391.7.patch


 Now Hive supports a filename virtual column. 
 If a file name filter is present in a query, Hive should be able to add only 
 the files that pass the filter to the input paths.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6144) Implement non-staged MapJoin

2014-01-13 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13869429#comment-13869429
 ] 

Hive QA commented on HIVE-6144:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12622588/HIVE-6144.3.patch.txt

{color:green}SUCCESS:{color} +1 4918 tests passed

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/879/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/879/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12622588

 Implement non-staged MapJoin
 

 Key: HIVE-6144
 URL: https://issues.apache.org/jira/browse/HIVE-6144
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: HIVE-6144.1.patch.txt, HIVE-6144.2.patch.txt, 
 HIVE-6144.3.patch.txt


 For map join, all data in the small aliases is hashed and stored into a temporary 
 file in MapRedLocalTask. But for some aliases without a filter or projection, 
 that does not seem necessary. For example,
 {noformat}
 select a.* from src a join src b on a.key=b.key;
 {noformat}
 produces a plan like this.
 {noformat}
 STAGE PLANS:
   Stage: Stage-4
 Map Reduce Local Work
   Alias - Map Local Tables:
 a 
   Fetch Operator
 limit: -1
   Alias - Map Local Operator Tree:
 a 
   TableScan
 alias: a
 HashTable Sink Operator
   condition expressions:
 0 {key} {value}
 1 
   handleSkewJoin: false
   keys:
 0 [Column[key]]
 1 [Column[key]]
   Position of Big Table: 1
   Stage: Stage-3
 Map Reduce
   Alias - Map Operator Tree:
 b 
   TableScan
 alias: b
 Map Join Operator
   condition map:
Inner Join 0 to 1
   condition expressions:
 0 {key} {value}
 1 
   handleSkewJoin: false
   keys:
 0 [Column[key]]
 1 [Column[key]]
   outputColumnNames: _col0, _col1
   Position of Big Table: 1
   Select Operator
 File Output Operator
   Local Work:
 Map Reduce Local Work
   Stage: Stage-0
 Fetch Operator
 {noformat}
 Table src (alias a) is fetched and stored as-is in the MapRedLocalTask. With this 
 patch, the plan can look like the one below.
 {noformat}
   Stage: Stage-3
 Map Reduce
   Alias - Map Operator Tree:
 b 
   TableScan
 alias: b
 Map Join Operator
   condition map:
Inner Join 0 to 1
   condition expressions:
 0 {key} {value}
 1 
   handleSkewJoin: false
   keys:
 0 [Column[key]]
 1 [Column[key]]
   outputColumnNames: _col0, _col1
   Position of Big Table: 1
   Select Operator
   File Output Operator
   Local Work:
 Map Reduce Local Work
   Alias - Map Local Tables:
 a 
   Fetch Operator
 limit: -1
   Alias - Map Local Operator Tree:
 a 
   TableScan
 alias: a
   Has Any Stage Alias: false
   Stage: Stage-0
 Fetch Operator
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6109) Support customized location for EXTERNAL tables created by Dynamic Partitioning

2014-01-13 Thread Satish Mittal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Satish Mittal updated HIVE-6109:


Attachment: HIVE-6109.1.patch.txt

Attaching a patch that implements support for custom locations for external 
tables in dynamic partitioning.

 Support customized location for EXTERNAL tables created by Dynamic 
 Partitioning
 ---

 Key: HIVE-6109
 URL: https://issues.apache.org/jira/browse/HIVE-6109
 Project: Hive
  Issue Type: Improvement
  Components: HCatalog
Reporter: Satish Mittal
 Attachments: HIVE-6109.1.patch.txt


 Currently, when dynamic partitions are created by HCatalog, the underlying 
 directories for the partitions are created in a fixed 'Hive-style' format, 
 i.e. root_dir/key1=value1/key2=value2/ and so on. However, in the case of an 
 external table, the user should be able to control the format of the 
 directories created for dynamic partitions.
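
For context, a minimal HiveQL sketch of a dynamic-partition insert into an 
external table (the issue itself concerns partitions written through HCatalog, 
but the directory layout is the same; the table, columns, and paths here are 
hypothetical):

{code}
create external table events (id bigint, payload string)
partitioned by (dt string, country string)
location '/data/events';

set hive.exec.dynamic.partition.mode=nonstrict;

-- the partition directories created by this insert follow the fixed
-- key=value layout (e.g. /data/events/dt=2014-01-13/country=US/);
-- this issue asks for a way to customize that for external tables
insert overwrite table events partition (dt, country)
select id, payload, dt, country from events_staging;
{code}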



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6109) Support customized location for EXTERNAL tables created by Dynamic Partitioning

2014-01-13 Thread Satish Mittal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Satish Mittal updated HIVE-6109:


Status: Patch Available  (was: Open)

 Support customized location for EXTERNAL tables created by Dynamic 
 Partitioning
 ---

 Key: HIVE-6109
 URL: https://issues.apache.org/jira/browse/HIVE-6109
 Project: Hive
  Issue Type: Improvement
  Components: HCatalog
Reporter: Satish Mittal
 Attachments: HIVE-6109.1.patch.txt


 Currently, when dynamic partitions are created by HCatalog, the underlying 
 directories for the partitions are created in a fixed 'Hive-style' format, 
 i.e. root_dir/key1=value1/key2=value2/ and so on. However, in the case of an 
 external table, the user should be able to control the format of the 
 directories created for dynamic partitions.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-4518) Counter Strike: Operation Operator

2014-01-13 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13869447#comment-13869447
 ] 

Lefty Leverenz commented on HIVE-4518:
--

Someone already added *hive.counters.group.name* to hive-default.xml.template 
with this description:

bq. The name of counter group for internal Hive variables (CREATED_FILE, 
FATAL_ERROR, etc.)

So I've added it to the wiki with this merged description:

bq. Counter group name for counters used during query execution. The counter 
group is used for internal Hive variables (CREATED_FILE, FATAL_ERROR, and so 
on).

See [Configuration 
Properties|https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties]
 -- it's listed in the Query Execution section right after *hive.task.progress*.

 Counter Strike: Operation Operator
 --

 Key: HIVE-4518
 URL: https://issues.apache.org/jira/browse/HIVE-4518
 Project: Hive
  Issue Type: Improvement
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Fix For: 0.13.0

 Attachments: HIVE-4518.1.patch, HIVE-4518.10.patch, 
 HIVE-4518.11.patch, HIVE-4518.2.patch, HIVE-4518.3.patch, HIVE-4518.4.patch, 
 HIVE-4518.5.patch, HIVE-4518.6.patch.txt, HIVE-4518.7.patch, 
 HIVE-4518.8.patch, HIVE-4518.9.patch


 Queries of the form:
 from foo
 insert overwrite table bar partition (p) select ...
 insert overwrite table bar partition (p) select ...
 insert overwrite table bar partition (p) select ...
 Generate a huge amount of counters. The reason is that task.progress is 
 turned on for dynamic partitioning queries.
 The counters not only make queries slower than necessary (up to 50%), you will 
 also eventually run out. That's because we're wrapping them in enum values to 
 comply with hadoop 0.17.
 The real reason we turn task.progress on is that we need the CREATED_FILES and 
 FATAL counters to ensure dynamic partitioning queries don't go haywire.
 The counters have counter-intuitive names like C1 through C1000 and don't 
 seem really useful by themselves.
 With hadoop 20+ you don't need to wrap the counters anymore; each operator 
 can simply create and increment counters. That should simplify the code a lot.
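
For illustration, a hedged sketch of the query shape described above (table, 
column, and partition names are hypothetical; a multi-insert with dynamic 
partitions is what turns task.progress on):

{code}
from foo
insert overwrite table bar partition (p)
  select col1, p where p < 10
insert overwrite table bar partition (p)
  select col2, p where p >= 10 and p < 20
insert overwrite table bar partition (p)
  select col3, p where p >= 20;
{code}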



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6189) Support top level union all statements

2014-01-13 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13869463#comment-13869463
 ] 

Hive QA commented on HIVE-6189:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12622596/HIVE-6189.1.patch

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 4917 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_subq_insert
org.apache.hadoop.hive.ql.parse.TestParse.testParse_sample2
org.apache.hadoop.hive.ql.parse.TestParse.testParse_sample3
org.apache.hadoop.hive.ql.parse.TestParse.testParse_sample4
org.apache.hadoop.hive.ql.parse.TestParse.testParse_sample5
org.apache.hadoop.hive.ql.parse.TestParse.testParse_sample6
org.apache.hadoop.hive.ql.parse.TestParse.testParse_sample7
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/881/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/881/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12622596

 Support top level union all statements
 --

 Key: HIVE-6189
 URL: https://issues.apache.org/jira/browse/HIVE-6189
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Attachments: HIVE-6189.1.patch


 I've always wondered why union all has to be in subqueries in hive.
 After looking at it, problems are:
 - Hive Parser:
   - Union happens at the wrong place: (insert ... select ... union all select 
 ...) is parsed as (insert select) union select.
   - There are many rewrite rules in the parser to force any query into the 
 from - insert - select form, no doubt for historical reasons.
 - Plan generation/semantic analysis assumes top level TOK_QUERY and not top 
 level TOK_UNION.
 The rewrite rules don't work when we move the UNION ALL recursion into the 
 select statements. However, it's not hard to do that in code.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-5595) Implement vectorized SMB JOIN

2014-01-13 Thread Remus Rusanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Remus Rusanu updated HIVE-5595:
---

Attachment: HIVE-5595.3.patch

 Implement vectorized SMB JOIN
 -

 Key: HIVE-5595
 URL: https://issues.apache.org/jira/browse/HIVE-5595
 Project: Hive
  Issue Type: Sub-task
Reporter: Remus Rusanu
Assignee: Remus Rusanu
Priority: Critical
 Attachments: HIVE-5595.1.patch, HIVE-5595.2.patch, HIVE-5595.3.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 Vectorized implementation of SMB Map Join.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Re: Review Request 16167: HIVE-5595 Implement Vectorized SMB Join

2014-01-13 Thread Remus Rusanu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/16167/
---

(Updated Jan. 13, 2014, 2:14 p.m.)


Review request for hive, Ashutosh Chauhan, Eric Hanson, and Jitendra Pandey.


Bugs: HIVE-5595
https://issues.apache.org/jira/browse/HIVE-5595


Repository: hive-git


Description
---

See HIVE-5595; I will post a description.


Diffs (updated)
-

  ql/src/java/org/apache/hadoop/hive/ql/exec/OperatorFactory.java 24a812d 
  ql/src/java/org/apache/hadoop/hive/ql/exec/SMBMapJoinOperator.java 81a1232 
  ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java fccea89 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorSMBMapJoinOperator.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java d189dde 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/Vectorizer.java 
32fd191 
  ql/src/test/org/apache/hadoop/hive/ql/optimizer/physical/TestVectorizer.java 
02031ea 
  ql/src/test/queries/clientpositive/vectorized_bucketmapjoin1.q PRE-CREATION 
  ql/src/test/results/clientpositive/vectorized_bucketmapjoin1.q.out 
PRE-CREATION 

Diff: https://reviews.apache.org/r/16167/diff/


Testing
---

New .q file, manually tested several cases


Thanks,

Remus Rusanu



[jira] [Updated] (HIVE-5595) Implement vectorized SMB JOIN

2014-01-13 Thread Remus Rusanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Remus Rusanu updated HIVE-5595:
---

Status: Patch Available  (was: Open)

Updated the patch to include code review feedback. Updated to the latest trunk 
base and removed changes to the deleted CommonRCFileInputFormat.

 Implement vectorized SMB JOIN
 -

 Key: HIVE-5595
 URL: https://issues.apache.org/jira/browse/HIVE-5595
 Project: Hive
  Issue Type: Sub-task
Reporter: Remus Rusanu
Assignee: Remus Rusanu
Priority: Critical
 Attachments: HIVE-5595.1.patch, HIVE-5595.2.patch, HIVE-5595.3.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 Vectorized implementation of SMB Map Join.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-5595) Implement vectorized SMB JOIN

2014-01-13 Thread Remus Rusanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Remus Rusanu updated HIVE-5595:
---

Status: Open  (was: Patch Available)

 Implement vectorized SMB JOIN
 -

 Key: HIVE-5595
 URL: https://issues.apache.org/jira/browse/HIVE-5595
 Project: Hive
  Issue Type: Sub-task
Reporter: Remus Rusanu
Assignee: Remus Rusanu
Priority: Critical
 Attachments: HIVE-5595.1.patch, HIVE-5595.2.patch, HIVE-5595.3.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 Vectorized implementation of SMB Map Join.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-5945) ql.plan.ConditionalResolverCommonJoin.resolveMapJoinTask also sums those tables which are not used in the child of this conditional task.

2014-01-13 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13869579#comment-13869579
 ] 

Hive QA commented on HIVE-5945:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12622597/HIVE-5945.8.patch.txt

{color:green}SUCCESS:{color} +1 4917 tests passed

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/882/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/882/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12622597

 ql.plan.ConditionalResolverCommonJoin.resolveMapJoinTask also sums those 
 tables which are not used in the child of this conditional task.
 -

 Key: HIVE-5945
 URL: https://issues.apache.org/jira/browse/HIVE-5945
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.8.0, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0
Reporter: Yin Huai
Assignee: Navis
Priority: Critical
 Attachments: HIVE-5945.1.patch.txt, HIVE-5945.2.patch.txt, 
 HIVE-5945.3.patch.txt, HIVE-5945.4.patch.txt, HIVE-5945.5.patch.txt, 
 HIVE-5945.6.patch.txt, HIVE-5945.7.patch.txt, HIVE-5945.8.patch.txt


 Here is an example
 {code}
 select
i_item_id,
s_state,
avg(ss_quantity) agg1,
avg(ss_list_price) agg2,
avg(ss_coupon_amt) agg3,
avg(ss_sales_price) agg4
 FROM store_sales
 JOIN date_dim on (store_sales.ss_sold_date_sk = date_dim.d_date_sk)
 JOIN item on (store_sales.ss_item_sk = item.i_item_sk)
 JOIN customer_demographics on (store_sales.ss_cdemo_sk = 
 customer_demographics.cd_demo_sk)
 JOIN store on (store_sales.ss_store_sk = store.s_store_sk)
 where
cd_gender = 'F' and
cd_marital_status = 'U' and
cd_education_status = 'Primary' and
d_year = 2002 and
s_state in ('GA','PA', 'LA', 'SC', 'MI', 'AL')
 group by
i_item_id,
s_state
 order by
i_item_id,
s_state
 limit 100;
 {code}
 I turned off noconditionaltask, so I expected that there would be 4 Map-only 
 jobs for this query. However, I got 1 Map-only job (joining store_sales and 
 date_dim) and 3 MR jobs (for the reduce joins).
 So, I checked the conditional task determining the plan of the join involving 
 item. In ql.plan.ConditionalResolverCommonJoin.resolveMapJoinTask, 
 aliasToFileSizeMap contains all input tables used in this query and the 
 intermediate table generated by joining store_sales and date_dim. So, when we 
 sum the sizes of all small tables, the size of store_sales (which is around 
 45GB in my test) is also counted.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Hive-trunk-hadoop2 - Build # 667 - Still Failing

2014-01-13 Thread Apache Jenkins Server
Changes for Build #640

Changes for Build #641
[navis] HIVE-5414 : The result of show grant is not visible via JDBC (Navis 
reviewed by Thejas M Nair)

[navis] HIVE-4257 : java.sql.SQLNonTransientConnectionException on 
JDBCStatsAggregator (Teddy Choi via Navis, reviewed by Ashutosh)


Changes for Build #642

Changes for Build #643
[ehans] HIVE-6017: Contribute Decimal128 high-performance decimal(p, s) package 
from Microsoft to Hive (Hideaki Kumura via Eric Hanson)


Changes for Build #644
[cws] HIVE-5911: Recent change to schema upgrade scripts breaks file naming 
conventions (Sergey Shelukhin via cws)

[cws] HIVE-3746: Fix HS2 ResultSet Serialization Performance Regression II 
(Navis via cws)

[cws] HIVE-3746: Fix HS2 ResultSet Serialization Performance Regression (Navis 
via cws)

[jitendra] HIVE-6010: TestCompareCliDriver enables tests that would ensure 
vectorization produces same results as non-vectorized execution (Sergey 
Shelukhin via Jitendra Pandey)


Changes for Build #645

Changes for Build #646
[ehans] HIVE-5757: Implement vectorized support for CASE (Eric Hanson)


Changes for Build #647
[thejas] HIVE-5795 : Hive should be able to skip header and footer rows when 
reading data file for a table (Shuaishuai Nie via Thejas Nair)


Changes for Build #648
[thejas] HIVE-5923 : SQL std auth - parser changes (Thejas Nair, reviewed by 
Brock Noland)


Changes for Build #649

Changes for Build #650

Changes for Build #651
[brock] HIVE-3936 - Remote debug failed with hadoop 0.23X, hadoop 2.X (Swarnim 
Kulkarni via Brock)


Changes for Build #652

Changes for Build #653
[gunther] HIVE-6125: Tez: Refactoring changes (Gunther Hagleitner, reviewed by 
Thejas M Nair)


Changes for Build #654
[cws] HIVE-5829: Rewrite Trim and Pad UDFs based on GenericUDF (Mohammad Islam 
via cws)


Changes for Build #655
[brock] HIVE-2599 - Support Composit/Compound Keys with HBaseStorageHandler 
(Swarnim Kulkarni via Brock Noland)

[brock] HIVE-5946 - DDL authorization task factory should be better tested 
(Brock reviewed by Thejas)


Changes for Build #656

Changes for Build #657
[gunther] HIVE-6105: LongWritable.compareTo needs shimming (Navis via Gunther 
Hagleitner)


Changes for Build #658

Changes for Build #659
[ehans] HIVE-6051: Create DecimalColumnVector and a representative 
VectorExpression for decimal (Eric Hanson)


Changes for Build #660
[thejas] HIVE-5224 : When creating table with AVRO serde, the avro.schema.url 
should be able to load serde schema from file system beside HDFS (Shuaishuai 
Nie via Thejas Nair)

[thejas] HIVE-6154 : HiveServer2 returns a detailed error message to the client 
only when the underlying exception is a HiveSQLException (Vaibhav Gumashta via 
Thejas Nair)


Changes for Build #661

Changes for Build #662
[gunther] HIVE-6098: Merge Tez branch into trunk (Gunther Hagleitner et al, 
reviewed by Thejas Nair, Vikram Dixit K, Ashutosh Chauhan)


Changes for Build #663
[hashutosh] HIVE-6171 : Use Paths consistently - V (Ashutosh Chauhan via Thejas 
Nair)


Changes for Build #664
[xuefu] HIVE-5446: Hive can CREATE an external table but not SELECT from it 
when file paths have spaces


Changes for Build #665

Changes for Build #666

Changes for Build #667
[brock] HIVE-6115 - Remove redundant code in HiveHBaseStorageHandler (Brock 
reviewed by Xuefu and Sushanth)




No tests ran.

The Apache Jenkins build system has built Hive-trunk-hadoop2 (build #667)

Status: Still Failing

Check console output at https://builds.apache.org/job/Hive-trunk-hadoop2/667/ 
to view the results.

[jira] [Updated] (HIVE-6115) Remove redundant code in HiveHBaseStorageHandler

2014-01-13 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-6115:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Thank you everyone! I have committed this change to trunk!

 Remove redundant code in HiveHBaseStorageHandler
 

 Key: HIVE-6115
 URL: https://issues.apache.org/jira/browse/HIVE-6115
 Project: Hive
  Issue Type: Improvement
Affects Versions: 0.12.0
Reporter: Brock Noland
Assignee: Brock Noland
 Fix For: 0.13.0

 Attachments: HIVE-6115.patch, HIVE-6115.patch






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Hive-trunk-h0.21 - Build # 2567 - Still Failing

2014-01-13 Thread Apache Jenkins Server
Changes for Build #2539

Changes for Build #2540
[navis] HIVE-5414 : The result of show grant is not visible via JDBC (Navis 
reviewed by Thejas M Nair)


Changes for Build #2541

Changes for Build #2542
[ehans] HIVE-6017: Contribute Decimal128 high-performance decimal(p, s) package 
from Microsoft to Hive (Hideaki Kumura via Eric Hanson)


Changes for Build #2543
[cws] HIVE-3746: Fix HS2 ResultSet Serialization Performance Regression II 
(Navis via cws)

[cws] HIVE-3746: Fix HS2 ResultSet Serialization Performance Regression (Navis 
via cws)

[jitendra] HIVE-6010: TestCompareCliDriver enables tests that would ensure 
vectorization produces same results as non-vectorized execution (Sergey 
Shelukhin via Jitendra Pandey)


Changes for Build #2544
[cws] HIVE-5911: Recent change to schema upgrade scripts breaks file naming 
conventions (Sergey Shelukhin via cws)


Changes for Build #2545

Changes for Build #2546
[ehans] HIVE-5757: Implement vectorized support for CASE (Eric Hanson)


Changes for Build #2547
[thejas] HIVE-5795 : Hive should be able to skip header and footer rows when 
reading data file for a table (Shuaishuai Nie via Thejas Nair)


Changes for Build #2548
[thejas] HIVE-5923 : SQL std auth - parser changes (Thejas Nair, reviewed by 
Brock Noland)


Changes for Build #2549

Changes for Build #2550

Changes for Build #2551
[brock] HIVE-3936 - Remote debug failed with hadoop 0.23X, hadoop 2.X (Swarnim 
Kulkarni via Brock)


Changes for Build #2552

Changes for Build #2553
[gunther] HIVE-6125: Tez: Refactoring changes (Gunther Hagleitner, reviewed by 
Thejas M Nair)


Changes for Build #2554
[cws] HIVE-5829: Rewrite Trim and Pad UDFs based on GenericUDF (Mohammad Islam 
via cws)


Changes for Build #2555
[brock] HIVE-2599 - Support Composit/Compound Keys with HBaseStorageHandler 
(Swarnim Kulkarni via Brock Noland)

[brock] HIVE-5946 - DDL authorization task factory should be better tested 
(Brock reviewed by Thejas)


Changes for Build #2556
[gunther] HIVE-6105: LongWritable.compareTo needs shimming (Navis via Gunther 
Hagleitner)


Changes for Build #2557

Changes for Build #2558
[ehans] HIVE-6051: Create DecimalColumnVector and a representative 
VectorExpression for decimal (Eric Hanson)


Changes for Build #2559
[thejas] HIVE-5224 : When creating table with AVRO serde, the avro.schema.url 
should be able to load serde schema from file system beside HDFS (Shuaishuai 
Nie via Thejas Nair)

[thejas] HIVE-6154 : HiveServer2 returns a detailed error message to the client 
only when the underlying exception is a HiveSQLException (Vaibhav Gumashta via 
Thejas Nair)


Changes for Build #2560

Changes for Build #2561
[gunther] HIVE-6098: Merge Tez branch into trunk (Gunther Hagleitner et al, 
reviewed by Thejas Nair, Vikram Dixit K, Ashutosh Chauhan)


Changes for Build #2562
[hashutosh] HIVE-6171 : Use Paths consistently - V (Ashutosh Chauhan via Thejas 
Nair)


Changes for Build #2563

Changes for Build #2564
[xuefu] HIVE-5446: Hive can CREATE an external table but not SELECT from it 
when file paths have spaces


Changes for Build #2565

Changes for Build #2566

Changes for Build #2567
[brock] HIVE-6115 - Remove redundant code in HiveHBaseStorageHandler (Brock 
reviewed by Xuefu and Sushanth)




No tests ran.

The Apache Jenkins build system has built Hive-trunk-h0.21 (build #2567)

Status: Still Failing

Check console output at https://builds.apache.org/job/Hive-trunk-h0.21/2567/ to 
view the results.

[jira] [Updated] (HIVE-6123) Implement checkstyle in maven

2014-01-13 Thread Remus Rusanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Remus Rusanu updated HIVE-6123:
---

Attachment: HIVE-6123.1.patch

 Implement checkstyle in maven
 -

 Key: HIVE-6123
 URL: https://issues.apache.org/jira/browse/HIVE-6123
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
 Attachments: HIVE-6123.1.patch


 Ant had a checkstyle target; we should do something similar for Maven.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6123) Implement checkstyle in maven

2014-01-13 Thread Remus Rusanu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13869598#comment-13869598
 ] 

Remus Rusanu commented on HIVE-6123:


I uploaded a patch that enables the maven checkstyle plugin. But trying to use 
the old ant checkstyle conf (uncommenting 
`<!-- <configLocation>${checkstyle.conf.dir}/checkstyle.xml</configLocation> -->` 
in pom.xml) results in an error when loading the plugin: "Failed during 
checkstyle configuration: cannot initialize module JavadocPackage - Unable to 
instantiate JavadocPackage: Unable to instantiate JavadocPackageCheck"

 Implement checkstyle in maven
 -

 Key: HIVE-6123
 URL: https://issues.apache.org/jira/browse/HIVE-6123
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
 Attachments: HIVE-6123.1.patch


 ant had a checkstyle target, we should do something similar for maven



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6046) add UDF for converting date time from one presentation to another

2014-01-13 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13869616#comment-13869616
 ] 

Xuefu Zhang commented on HIVE-6046:
---

[~kostiantyn] You will need to upload the changes you made to the review board 
so that review can begin. The link you provided shows no diff.

 add  UDF for converting date time from one presentation to another
 --

 Key: HIVE-6046
 URL: https://issues.apache.org/jira/browse/HIVE-6046
 Project: Hive
  Issue Type: New Feature
  Components: UDF
Affects Versions: 0.13.0
Reporter: Kostiantyn Kudriavtsev
Assignee: Kostiantyn Kudriavtsev
 Attachments: hive-6046.patch


 it'd be nice to have a function for converting a datetime from one format to 
 another, for example:
 format_date('2013-12-12 00:00:00.0', 'yyyy-MM-dd HH:mm:ss.S', 'yyyy/MM/dd')
 There are two signatures for convenience:
 format_date(datetime, fromFormat, toFormat)
 format_date(timestamp, toFormat)
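
For illustration, a minimal Java sketch of the conversion the three-argument signature would perform, using java.text.SimpleDateFormat. The class and method names below are hypothetical stand-ins, not the attached hive-6046.patch.

{code}
import java.text.ParseException;
import java.text.SimpleDateFormat;

// Hypothetical sketch of the proposed conversion, not the attached patch.
public class FormatDateSketch {
  // format_date(datetime, fromFormat, toFormat)
  public static String formatDate(String datetime, String fromFormat, String toFormat)
      throws ParseException {
    SimpleDateFormat in = new SimpleDateFormat(fromFormat);
    SimpleDateFormat out = new SimpleDateFormat(toFormat);
    return out.format(in.parse(datetime));
  }

  public static void main(String[] args) throws ParseException {
    // prints 2013/12/12
    System.out.println(formatDate("2013-12-12 00:00:00.0",
        "yyyy-MM-dd HH:mm:ss.S", "yyyy/MM/dd"));
  }
}
{code}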
  



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-5941) SQL std auth - support 'show all roles'

2014-01-13 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13869647#comment-13869647
 ] 

Hive QA commented on HIVE-5941:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12622598/HIVE-5941.5.patch.txt

{color:green}SUCCESS:{color} +1 4918 tests passed

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/883/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/883/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12622598

 SQL std auth - support 'show all roles'
 ---

 Key: HIVE-5941
 URL: https://issues.apache.org/jira/browse/HIVE-5941
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Navis
 Attachments: HIVE-5941.1.patch.txt, HIVE-5941.2.patch.txt, 
 HIVE-5941.3.patch.txt, HIVE-5941.4.patch.txt, HIVE-5941.5.patch.txt

   Original Estimate: 24h
  Remaining Estimate: 24h

 SHOW ALL ROLES - This will list all
 currently existing roles. This will be available only to the superuser.
 This task includes parser changes.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-5679) add date support to metastore JDO/SQL

2014-01-13 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-5679:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Sergey!

 add date support to metastore JDO/SQL
 -

 Key: HIVE-5679
 URL: https://issues.apache.org/jira/browse/HIVE-5679
 Project: Hive
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.13.0

 Attachments: HIVE-5679.01.patch, HIVE-5679.02.patch, HIVE-5679.patch


 Metastore supports strings and integral types in filters.
 It could also support dates.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6109) Support customized location for EXTERNAL tables created by Dynamic Partitioning

2014-01-13 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13869654#comment-13869654
 ] 

Hive QA commented on HIVE-6109:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12622613/HIVE-6109.1.patch.txt

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/884/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/884/console

Messages:
{noformat}
 This message was trimmed, see log for full details 

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hive-contrib 
---
[INFO] Compiling 39 source files to 
/data/hive-ptest/working/apache-svn-trunk-source/contrib/target/classes
[WARNING] Note: Some input files use or override a deprecated API.
[WARNING] Note: Recompile with -Xlint:deprecation for details.
[WARNING] Note: 
/data/hive-ptest/working/apache-svn-trunk-source/contrib/src/java/org/apache/hadoop/hive/contrib/udf/example/UDFExampleStructPrint.java
 uses unchecked or unsafe operations.
[WARNING] Note: Recompile with -Xlint:unchecked for details.
[INFO] 
[INFO] --- maven-resources-plugin:2.5:testResources (default-testResources) @ 
hive-contrib ---
[debug] execute contextualize
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
/data/hive-ptest/working/apache-svn-trunk-source/contrib/src/test/resources
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (setup-test-dirs) @ hive-contrib ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/data/hive-ptest/working/apache-svn-trunk-source/contrib/target/tmp
[mkdir] Created dir: 
/data/hive-ptest/working/apache-svn-trunk-source/contrib/target/warehouse
[mkdir] Created dir: 
/data/hive-ptest/working/apache-svn-trunk-source/contrib/target/tmp/conf
 [copy] Copying 5 files to 
/data/hive-ptest/working/apache-svn-trunk-source/contrib/target/tmp/conf
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hive-contrib ---
[INFO] Compiling 2 source files to 
/data/hive-ptest/working/apache-svn-trunk-source/contrib/target/test-classes
[WARNING] Note: 
/data/hive-ptest/working/apache-svn-trunk-source/contrib/src/test/org/apache/hadoop/hive/contrib/serde2/TestRegexSerDe.java
 uses or overrides a deprecated API.
[WARNING] Note: Recompile with -Xlint:deprecation for details.
[INFO] 
[INFO] --- maven-surefire-plugin:2.16:test (default-test) @ hive-contrib ---
[INFO] Tests are skipped.
[INFO] 
[INFO] --- maven-jar-plugin:2.2:jar (default-jar) @ hive-contrib ---
[INFO] Building jar: 
/data/hive-ptest/working/apache-svn-trunk-source/contrib/target/hive-contrib-0.13.0-SNAPSHOT.jar
[INFO] 
[INFO] --- maven-install-plugin:2.4:install (default-install) @ hive-contrib ---
[INFO] Installing 
/data/hive-ptest/working/apache-svn-trunk-source/contrib/target/hive-contrib-0.13.0-SNAPSHOT.jar
 to 
/data/hive-ptest/working/maven/org/apache/hive/hive-contrib/0.13.0-SNAPSHOT/hive-contrib-0.13.0-SNAPSHOT.jar
[INFO] Installing 
/data/hive-ptest/working/apache-svn-trunk-source/contrib/pom.xml to 
/data/hive-ptest/working/maven/org/apache/hive/hive-contrib/0.13.0-SNAPSHOT/hive-contrib-0.13.0-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] Building Hive HBase Handler 0.13.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hive-hbase-handler ---
[INFO] Deleting /data/hive-ptest/working/apache-svn-trunk-source/hbase-handler 
(includes = [datanucleus.log, derby.log], excludes = [])
[INFO] 
[INFO] --- maven-resources-plugin:2.5:resources (default-resources) @ 
hive-hbase-handler ---
[debug] execute contextualize
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
/data/hive-ptest/working/apache-svn-trunk-source/hbase-handler/src/main/resources
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ hive-hbase-handler 
---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ 
hive-hbase-handler ---
[INFO] Compiling 18 source files to 
/data/hive-ptest/working/apache-svn-trunk-source/hbase-handler/target/classes
[WARNING] Note: Some input files use or override a deprecated API.
[WARNING] Note: Recompile with -Xlint:deprecation for details.
[INFO] 
[INFO] --- maven-resources-plugin:2.5:testResources (default-testResources) @ 
hive-hbase-handler ---
[debug] execute contextualize
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

Re: Review Request 16806: HIVE-6185: DDLTask is inconsistent in creating a table and adding a partition when dealing with location

2014-01-13 Thread Ashutosh Chauhan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/16806/#review31619
---



ql/src/java/org/apache/hadoop/hive/ql/index/IndexMetadataChangeTask.java
https://reviews.apache.org/r/16806/#comment60225

This can be written as Path url = part.getDataLocation();


- Ashutosh Chauhan


On Jan. 12, 2014, 1:53 a.m., Xuefu Zhang wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/16806/
 ---
 
 (Updated Jan. 12, 2014, 1:53 a.m.)
 
 
 Review request for hive and Ashutosh Chauhan.
 
 
 Bugs: HIVE-6185
 https://issues.apache.org/jira/browse/HIVE-6185
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 Fix the inconsistency and standardize it using Path.
 
 
 Diffs
 -
 
   
 hcatalog/core/src/main/java/org/apache/hcatalog/cli/SemanticAnalysis/CreateTableHook.java
  791e01b 
   
 hcatalog/core/src/main/java/org/apache/hcatalog/security/HdfsAuthorizationProvider.java
  2eba530 
   
 hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/CreateTableHook.java
  4c3acb6 
   
 hcatalog/core/src/test/java/org/apache/hcatalog/mapreduce/TestHCatMultiOutputFormat.java
  a2be640 
   
 hcatalog/core/src/test/java/org/apache/hive/hcatalog/mapreduce/TestHCatMultiOutputFormat.java
  68c77c2 
   
 itests/util/src/main/java/org/apache/hadoop/hive/ql/hooks/VerifyOutputTableLocationSchemeIsFileHook.java
  5cc4079 
   
 itests/util/src/main/java/org/apache/hadoop/hive/ql/hooks/VerifyPartitionIsNotSubdirectoryOfTableHook.java
  ce377be 
   
 itests/util/src/main/java/org/apache/hadoop/hive/ql/hooks/VerifyPartitionIsSubdirectoryOfTableHook.java
  4406036 
   ql/src/java/org/apache/hadoop/hive/ql/exec/ArchiveUtils.java 9608fcd 
   ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 9e4f1c7 
   ql/src/java/org/apache/hadoop/hive/ql/exec/MoveTask.java c44e9da 
   ql/src/java/org/apache/hadoop/hive/ql/hooks/Entity.java f57feb9 
   ql/src/java/org/apache/hadoop/hive/ql/index/IndexMetadataChangeTask.java 
 364fc19 
   ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 2fe86e1 
   ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveMetaStoreChecker.java 
 695982f 
   ql/src/java/org/apache/hadoop/hive/ql/metadata/Partition.java 83514a2 
   ql/src/java/org/apache/hadoop/hive/ql/metadata/Table.java 0180b87 
   
 ql/src/java/org/apache/hadoop/hive/ql/metadata/formatting/JsonMetaDataFormatter.java
  bcd75be 
   
 ql/src/java/org/apache/hadoop/hive/ql/metadata/formatting/TextMetaDataFormatter.java
  b919d1a 
   ql/src/java/org/apache/hadoop/hive/ql/optimizer/AbstractBucketJoinProc.java 
 0991847 
   ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRTableScan1.java 
 51d56ef 
   ql/src/java/org/apache/hadoop/hive/ql/optimizer/IndexUtils.java 2146009 
   ql/src/java/org/apache/hadoop/hive/ql/optimizer/SamplePruner.java ab2ed81 
   ql/src/java/org/apache/hadoop/hive/ql/optimizer/SimpleFetchOptimizer.java 
 c73c874 
   
 ql/src/java/org/apache/hadoop/hive/ql/optimizer/SizeBasedBigTableSelectorForAutoSMJ.java
  1a686a0 
   ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 
 713bd54 
   ql/src/java/org/apache/hadoop/hive/ql/parse/ExportSemanticAnalyzer.java 
 be0ad62 
   ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java 
 1ab5a60 
   ql/src/java/org/apache/hadoop/hive/ql/parse/LoadSemanticAnalyzer.java 
 5663fca 
   ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 8e68fcf 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/StorageBasedAuthorizationProvider.java
  5557697 
   ql/src/java/org/apache/hadoop/hive/ql/stats/StatsUtils.java e89e3a4 
   
 ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHiveMetaStoreChecker.java 
 69d1896 
 
 Diff: https://reviews.apache.org/r/16806/diff/
 
 
 Testing
 ---
 
 No new test.
 
 
 Thanks,
 
 Xuefu Zhang
 




[jira] [Updated] (HIVE-6166) JsonSerDe is too strict about table schema

2014-01-13 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6166:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk.

 JsonSerDe is too strict about table schema
 --

 Key: HIVE-6166
 URL: https://issues.apache.org/jira/browse/HIVE-6166
 Project: Hive
  Issue Type: Bug
  Components: HCatalog, Serializers/Deserializers
Affects Versions: 0.12.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Fix For: 0.13.0

 Attachments: HIVE-6166.2.patch, HIVE-6166.3.patch, HIVE-6166.patch


 JsonSerDe is too strict when it comes to schema, erroring out if it finds a 
 subfield with a key name that does not map to an appropriate type/schema of a 
 table, or an inner-struct schema.
 Thus, if a schema specifies s:struct<a:int,b:string>,k:int and we pass it 
 data that looks like the following:
 {noformat}
 { "x" : "abc" , "s" : { "a" : 2 , "b" : "blah", "c" : "woo" } }
 {noformat}
 This should still pass, and the record should be read as if it were 
 {noformat}
 { "s" : { "a" : 2 , "b" : "blah" }, "k" : null }
 {noformat}
 This will allow the JsonSerDe to be used with a wider set of data where the 
 data does not map too finely to the declared table schema.
 Note, we are still strict about a couple of things:
 a) If there is a declared schema column, then the type cannot vary; that is 
 still considered an error. I.e., if the hive table schema says k1 is a 
 boolean, it cannot magically change into, say, an int or a struct.
 b) The JsonSerDe still attempts to map hive internal column names - i.e. if 
 the data contains a column named _col2, then, if _col2 is not declared 
 directly in the schema, it will map to column position 2 in that 
 schema/subschema, rather than ignoring the field. This is so that tables 
 created with CTAS will still work. 
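
A minimal, self-contained Java sketch of the lenient top-level behaviour described above; it is a stand-in for, not a copy of, the actual JsonSerDe code, and it only shows the top-level column case (the same idea applies recursively to inner structs).

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Stand-in for the lenient mapping; not the actual JsonSerDe implementation.
public class LenientProjectionSketch {
  // Keep only the declared columns, in declared order: unknown keys are
  // dropped, and a missing key yields null instead of an error.
  static List<Object> project(Map<String, Object> record, List<String> declared) {
    List<Object> row = new ArrayList<>();
    for (String col : declared) {
      row.add(record.get(col)); // null when the key is absent
    }
    return row;
  }

  public static void main(String[] args) {
    Map<String, Object> struct = new LinkedHashMap<>();
    struct.put("a", 2);
    struct.put("b", "blah");
    Map<String, Object> record = new LinkedHashMap<>();
    record.put("x", "abc"); // not declared in the schema; ignored
    record.put("s", struct);
    // Declared columns are s and k; prints [{a=2, b=blah}, null]
    System.out.println(project(record, Arrays.asList("s", "k")));
  }
}
{code}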



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6185) DDLTask is inconsistent in creating a table and adding a partition when dealing with location

2014-01-13 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13869669#comment-13869669
 ] 

Ashutosh Chauhan commented on HIVE-6185:


+1, left a minor comment on RB.

 DDLTask is inconsistent in creating a table and adding a partition when 
 dealing with location
 -

 Key: HIVE-6185
 URL: https://issues.apache.org/jira/browse/HIVE-6185
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.12.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Attachments: HIVE-6185.1.patch, HIVE-6185.2.patch, HIVE-6185.patch, 
 HIVE-6185.patch


 When creating a table, Hive uses URI to represent location:
 {code}
 if (crtTbl.getLocation() != null) {
   tbl.setDataLocation(new Path(crtTbl.getLocation()).toUri());
 }
 {code}
 When adding a partition, Hive uses Path to represent location:
 {code}
   // set partition path relative to table
   db.createPartition(tbl, addPartitionDesc.getPartSpec(), new Path(tbl
 .getPath(), addPartitionDesc.getLocation()), 
 addPartitionDesc.getPartParams(),
 addPartitionDesc.getInputFormat(),
 addPartitionDesc.getOutputFormat(),
 addPartitionDesc.getNumBuckets(),
 addPartitionDesc.getCols(),
 addPartitionDesc.getSerializationLib(),
 addPartitionDesc.getSerdeParams(),
 addPartitionDesc.getBucketCols(),
 addPartitionDesc.getSortCols());
 {code}
 This disparity means the values stored in the metastore are encoded 
 differently, causing problems with special characters as demonstrated in 
 HIVE-5446. As a result, the code dealing with a table's location differs from 
 the code for a partition's, creating a maintenance burden.
 We need to standardize on Path to be in line with the other Path-related 
 cleanup effort.
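
To see why the disparity matters, here is a small, self-contained Java illustration of the encoding difference for a location containing a space (the HIVE-5446 symptom referenced above). It is an illustration only, not part of the patch.

{code}
import java.net.URI;
import org.apache.hadoop.fs.Path;

// Illustration only: the URI form and the Path form of the same location
// diverge once the location contains a special character such as a space.
public class LocationEncodingSketch {
  public static void main(String[] args) {
    String location = "/user/hive/warehouse/dir with space";
    URI asUri = new Path(location).toUri();   // what the create-table path stored
    Path asPath = new Path(location);         // what the add-partition path stored
    System.out.println(asUri);   // percent-encoded: .../dir%20with%20space
    System.out.println(asPath);  // literal:         .../dir with space
  }
}
{code}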



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Hive-trunk-hadoop2 - Build # 669 - Still Failing

2014-01-13 Thread Apache Jenkins Server
Changes for Build #640

Changes for Build #641
[navis] HIVE-5414 : The result of show grant is not visible via JDBC (Navis 
reviewed by Thejas M Nair)

[navis] HIVE-4257 : java.sql.SQLNonTransientConnectionException on 
JDBCStatsAggregator (Teddy Choi via Navis, reviewed by Ashutosh)


Changes for Build #642

Changes for Build #643
[ehans] HIVE-6017: Contribute Decimal128 high-performance decimal(p, s) package 
from Microsoft to Hive (Hideaki Kumura via Eric Hanson)


Changes for Build #644
[cws] HIVE-5911: Recent change to schema upgrade scripts breaks file naming 
conventions (Sergey Shelukhin via cws)

[cws] HIVE-3746: Fix HS2 ResultSet Serialization Performance Regression II 
(Navis via cws)

[cws] HIVE-3746: Fix HS2 ResultSet Serialization Performance Regression (Navis 
via cws)

[jitendra] HIVE-6010: TestCompareCliDriver enables tests that would ensure 
vectorization produces same results as non-vectorized execution (Sergey 
Shelukhin via Jitendra Pandey)


Changes for Build #645

Changes for Build #646
[ehans] HIVE-5757: Implement vectorized support for CASE (Eric Hanson)


Changes for Build #647
[thejas] HIVE-5795 : Hive should be able to skip header and footer rows when 
reading data file for a table (Shuaishuai Nie via Thejas Nair)


Changes for Build #648
[thejas] HIVE-5923 : SQL std auth - parser changes (Thejas Nair, reviewed by 
Brock Noland)


Changes for Build #649

Changes for Build #650

Changes for Build #651
[brock] HIVE-3936 - Remote debug failed with hadoop 0.23X, hadoop 2.X (Swarnim 
Kulkarni via Brock)


Changes for Build #652

Changes for Build #653
[gunther] HIVE-6125: Tez: Refactoring changes (Gunther Hagleitner, reviewed by 
Thejas M Nair)


Changes for Build #654
[cws] HIVE-5829: Rewrite Trim and Pad UDFs based on GenericUDF (Mohammad Islam 
via cws)


Changes for Build #655
[brock] HIVE-2599 - Support Composit/Compound Keys with HBaseStorageHandler 
(Swarnim Kulkarni via Brock Noland)

[brock] HIVE-5946 - DDL authorization task factory should be better tested 
(Brock reviewed by Thejas)


Changes for Build #656

Changes for Build #657
[gunther] HIVE-6105: LongWritable.compareTo needs shimming (Navis vis Gunther 
Hagleitner)


Changes for Build #658

Changes for Build #659
[ehans] HIVE-6051: Create DecimalColumnVector and a representative 
VectorExpression for decimal (Eric Hanson)


Changes for Build #660
[thejas] HIVE-5224 : When creating table with AVRO serde, the avro.schema.url 
should be about to load serde schema from file system beside HDFS (Shuaishuai 
Nie via Thejas Nair)

[thejas] HIVE-6154 : HiveServer2 returns a detailed error message to the client 
only when the underlying exception is a HiveSQLException (Vaibhav Gumashta via 
Thejas Nair)


Changes for Build #661

Changes for Build #662
[gunther] HIVE-6098: Merge Tez branch into trunk (Gunther Hagleitner et al, 
reviewed by Thejas Nair, Vikram Dixit K, Ashutosh Chauhan)


Changes for Build #663
[hashutosh] HIVE-6171 : Use Paths consistently - V (Ashutosh Chauhan via Thejas 
Nair)


Changes for Build #664
[xuefu] HIVE-5446: Hive can CREATE an external table but not SELECT from it 
when file path have spaces


Changes for Build #665

Changes for Build #666

Changes for Build #667
[brock] HIVE-6115 - Remove redundant code in HiveHBaseStorageHandler (Brock 
reviewed by Xuefu and Sushanth)


Changes for Build #668
[hashutosh] HIVE-6166 : JsonSerDe is too strict about table schema (Sushanth 
Sowmyan via Ashutosh Chauhan)

[hashutosh] HIVE-5679 : add date support to metastore JDO/SQL (Sergey Shelukhin 
via Ashutosh Chauhan)


Changes for Build #669



No tests ran.

The Apache Jenkins build system has built Hive-trunk-hadoop2 (build #669)

Status: Still Failing

Check console output at https://builds.apache.org/job/Hive-trunk-hadoop2/669/ 
to view the results.

Hive-trunk-hadoop2 - Build # 668 - Still Failing

2014-01-13 Thread Apache Jenkins Server
Changes for Build #640

Changes for Build #641
[navis] HIVE-5414 : The result of show grant is not visible via JDBC (Navis 
reviewed by Thejas M Nair)

[navis] HIVE-4257 : java.sql.SQLNonTransientConnectionException on 
JDBCStatsAggregator (Teddy Choi via Navis, reviewed by Ashutosh)


Changes for Build #642

Changes for Build #643
[ehans] HIVE-6017: Contribute Decimal128 high-performance decimal(p, s) package 
from Microsoft to Hive (Hideaki Kumura via Eric Hanson)


Changes for Build #644
[cws] HIVE-5911: Recent change to schema upgrade scripts breaks file naming 
conventions (Sergey Shelukhin via cws)

[cws] HIVE-3746: Fix HS2 ResultSet Serialization Performance Regression II 
(Navis via cws)

[cws] HIVE-3746: Fix HS2 ResultSet Serialization Performance Regression (Navis 
via cws)

[jitendra] HIVE-6010: TestCompareCliDriver enables tests that would ensure 
vectorization produces same results as non-vectorized execution (Sergey 
Shelukhin via Jitendra Pandey)


Changes for Build #645

Changes for Build #646
[ehans] HIVE-5757: Implement vectorized support for CASE (Eric Hanson)


Changes for Build #647
[thejas] HIVE-5795 : Hive should be able to skip header and footer rows when 
reading data file for a table (Shuaishuai Nie via Thejas Nair)


Changes for Build #648
[thejas] HIVE-5923 : SQL std auth - parser changes (Thejas Nair, reviewed by 
Brock Noland)


Changes for Build #649

Changes for Build #650

Changes for Build #651
[brock] HIVE-3936 - Remote debug failed with hadoop 0.23X, hadoop 2.X (Swarnim 
Kulkarni via Brock)


Changes for Build #652

Changes for Build #653
[gunther] HIVE-6125: Tez: Refactoring changes (Gunther Hagleitner, reviewed by 
Thejas M Nair)


Changes for Build #654
[cws] HIVE-5829: Rewrite Trim and Pad UDFs based on GenericUDF (Mohammad Islam 
via cws)


Changes for Build #655
[brock] HIVE-2599 - Support Composit/Compound Keys with HBaseStorageHandler 
(Swarnim Kulkarni via Brock Noland)

[brock] HIVE-5946 - DDL authorization task factory should be better tested 
(Brock reviewed by Thejas)


Changes for Build #656

Changes for Build #657
[gunther] HIVE-6105: LongWritable.compareTo needs shimming (Navis vis Gunther 
Hagleitner)


Changes for Build #658

Changes for Build #659
[ehans] HIVE-6051: Create DecimalColumnVector and a representative 
VectorExpression for decimal (Eric Hanson)


Changes for Build #660
[thejas] HIVE-5224 : When creating table with AVRO serde, the avro.schema.url 
should be about to load serde schema from file system beside HDFS (Shuaishuai 
Nie via Thejas Nair)

[thejas] HIVE-6154 : HiveServer2 returns a detailed error message to the client 
only when the underlying exception is a HiveSQLException (Vaibhav Gumashta via 
Thejas Nair)


Changes for Build #661

Changes for Build #662
[gunther] HIVE-6098: Merge Tez branch into trunk (Gunther Hagleitner et al, 
reviewed by Thejas Nair, Vikram Dixit K, Ashutosh Chauhan)


Changes for Build #663
[hashutosh] HIVE-6171 : Use Paths consistently - V (Ashutosh Chauhan via Thejas 
Nair)


Changes for Build #664
[xuefu] HIVE-5446: Hive can CREATE an external table but not SELECT from it 
when file path have spaces


Changes for Build #665

Changes for Build #666

Changes for Build #667
[brock] HIVE-6115 - Remove redundant code in HiveHBaseStorageHandler (Brock 
reviewed by Xuefu and Sushanth)


Changes for Build #668
[hashutosh] HIVE-6166 : JsonSerDe is too strict about table schema (Sushanth 
Sowmyan via Ashutosh Chauhan)

[hashutosh] HIVE-5679 : add date support to metastore JDO/SQL (Sergey Shelukhin 
via Ashutosh Chauhan)




No tests ran.

The Apache Jenkins build system has built Hive-trunk-hadoop2 (build #668)

Status: Still Failing

Check console output at https://builds.apache.org/job/Hive-trunk-hadoop2/668/ 
to view the results.

Hive-trunk-h0.21 - Build # 2568 - Still Failing

2014-01-13 Thread Apache Jenkins Server
Changes for Build #2539

Changes for Build #2540
[navis] HIVE-5414 : The result of show grant is not visible via JDBC (Navis 
reviewed by Thejas M Nair)


Changes for Build #2541

Changes for Build #2542
[ehans] HIVE-6017: Contribute Decimal128 high-performance decimal(p, s) package 
from Microsoft to Hive (Hideaki Kumura via Eric Hanson)


Changes for Build #2543
[cws] HIVE-3746: Fix HS2 ResultSet Serialization Performance Regression II 
(Navis via cws)

[cws] HIVE-3746: Fix HS2 ResultSet Serialization Performance Regression (Navis 
via cws)

[jitendra] HIVE-6010: TestCompareCliDriver enables tests that would ensure 
vectorization produces same results as non-vectorized execution (Sergey 
Shelukhin via Jitendra Pandey)


Changes for Build #2544
[cws] HIVE-5911: Recent change to schema upgrade scripts breaks file naming 
conventions (Sergey Shelukhin via cws)


Changes for Build #2545

Changes for Build #2546
[ehans] HIVE-5757: Implement vectorized support for CASE (Eric Hanson)


Changes for Build #2547
[thejas] HIVE-5795 : Hive should be able to skip header and footer rows when 
reading data file for a table (Shuaishuai Nie via Thejas Nair)


Changes for Build #2548
[thejas] HIVE-5923 : SQL std auth - parser changes (Thejas Nair, reviewed by 
Brock Noland)


Changes for Build #2549

Changes for Build #2550

Changes for Build #2551
[brock] HIVE-3936 - Remote debug failed with hadoop 0.23X, hadoop 2.X (Swarnim 
Kulkarni via Brock)


Changes for Build #2552

Changes for Build #2553
[gunther] HIVE-6125: Tez: Refactoring changes (Gunther Hagleitner, reviewed by 
Thejas M Nair)


Changes for Build #2554
[cws] HIVE-5829: Rewrite Trim and Pad UDFs based on GenericUDF (Mohammad Islam 
via cws)


Changes for Build #2555
[brock] HIVE-2599 - Support Composit/Compound Keys with HBaseStorageHandler 
(Swarnim Kulkarni via Brock Noland)

[brock] HIVE-5946 - DDL authorization task factory should be better tested 
(Brock reviewed by Thejas)


Changes for Build #2556
[gunther] HIVE-6105: LongWritable.compareTo needs shimming (Navis vis Gunther 
Hagleitner)


Changes for Build #2557

Changes for Build #2558
[ehans] HIVE-6051: Create DecimalColumnVector and a representative 
VectorExpression for decimal (Eric Hanson)


Changes for Build #2559
[thejas] HIVE-5224 : When creating table with AVRO serde, the avro.schema.url 
should be about to load serde schema from file system beside HDFS (Shuaishuai 
Nie via Thejas Nair)

[thejas] HIVE-6154 : HiveServer2 returns a detailed error message to the client 
only when the underlying exception is a HiveSQLException (Vaibhav Gumashta via 
Thejas Nair)


Changes for Build #2560

Changes for Build #2561
[gunther] HIVE-6098: Merge Tez branch into trunk (Gunther Hagleitner et al, 
reviewed by Thejas Nair, Vikram Dixit K, Ashutosh Chauhan)


Changes for Build #2562
[hashutosh] HIVE-6171 : Use Paths consistently - V (Ashutosh Chauhan via Thejas 
Nair)


Changes for Build #2563

Changes for Build #2564
[xuefu] HIVE-5446: Hive can CREATE an external table but not SELECT from it 
when file path have spaces


Changes for Build #2565

Changes for Build #2566

Changes for Build #2567
[brock] HIVE-6115 - Remove redundant code in HiveHBaseStorageHandler (Brock 
reviewed by Xuefu and Sushanth)


Changes for Build #2568
[hashutosh] HIVE-6166 : JsonSerDe is too strict about table schema (Sushanth 
Sowmyan via Ashutosh Chauhan)

[hashutosh] HIVE-5679 : add date support to metastore JDO/SQL (Sergey Shelukhin 
via Ashutosh Chauhan)




No tests ran.

The Apache Jenkins build system has built Hive-trunk-h0.21 (build #2568)

Status: Still Failing

Check console output at https://builds.apache.org/job/Hive-trunk-h0.21/2568/ to 
view the results.

Hive-trunk-h0.21 - Build # 2569 - Still Failing

2014-01-13 Thread Apache Jenkins Server
Changes for Build #2539

Changes for Build #2540
[navis] HIVE-5414 : The result of show grant is not visible via JDBC (Navis 
reviewed by Thejas M Nair)


Changes for Build #2541

Changes for Build #2542
[ehans] HIVE-6017: Contribute Decimal128 high-performance decimal(p, s) package 
from Microsoft to Hive (Hideaki Kumura via Eric Hanson)


Changes for Build #2543
[cws] HIVE-3746: Fix HS2 ResultSet Serialization Performance Regression II 
(Navis via cws)

[cws] HIVE-3746: Fix HS2 ResultSet Serialization Performance Regression (Navis 
via cws)

[jitendra] HIVE-6010: TestCompareCliDriver enables tests that would ensure 
vectorization produces same results as non-vectorized execution (Sergey 
Shelukhin via Jitendra Pandey)


Changes for Build #2544
[cws] HIVE-5911: Recent change to schema upgrade scripts breaks file naming 
conventions (Sergey Shelukhin via cws)


Changes for Build #2545

Changes for Build #2546
[ehans] HIVE-5757: Implement vectorized support for CASE (Eric Hanson)


Changes for Build #2547
[thejas] HIVE-5795 : Hive should be able to skip header and footer rows when 
reading data file for a table (Shuaishuai Nie via Thejas Nair)


Changes for Build #2548
[thejas] HIVE-5923 : SQL std auth - parser changes (Thejas Nair, reviewed by 
Brock Noland)


Changes for Build #2549

Changes for Build #2550

Changes for Build #2551
[brock] HIVE-3936 - Remote debug failed with hadoop 0.23X, hadoop 2.X (Swarnim 
Kulkarni via Brock)


Changes for Build #2552

Changes for Build #2553
[gunther] HIVE-6125: Tez: Refactoring changes (Gunther Hagleitner, reviewed by 
Thejas M Nair)


Changes for Build #2554
[cws] HIVE-5829: Rewrite Trim and Pad UDFs based on GenericUDF (Mohammad Islam 
via cws)


Changes for Build #2555
[brock] HIVE-2599 - Support Composit/Compound Keys with HBaseStorageHandler 
(Swarnim Kulkarni via Brock Noland)

[brock] HIVE-5946 - DDL authorization task factory should be better tested 
(Brock reviewed by Thejas)


Changes for Build #2556
[gunther] HIVE-6105: LongWritable.compareTo needs shimming (Navis vis Gunther 
Hagleitner)


Changes for Build #2557

Changes for Build #2558
[ehans] HIVE-6051: Create DecimalColumnVector and a representative 
VectorExpression for decimal (Eric Hanson)


Changes for Build #2559
[thejas] HIVE-5224 : When creating table with AVRO serde, the avro.schema.url 
should be about to load serde schema from file system beside HDFS (Shuaishuai 
Nie via Thejas Nair)

[thejas] HIVE-6154 : HiveServer2 returns a detailed error message to the client 
only when the underlying exception is a HiveSQLException (Vaibhav Gumashta via 
Thejas Nair)


Changes for Build #2560

Changes for Build #2561
[gunther] HIVE-6098: Merge Tez branch into trunk (Gunther Hagleitner et al, 
reviewed by Thejas Nair, Vikram Dixit K, Ashutosh Chauhan)


Changes for Build #2562
[hashutosh] HIVE-6171 : Use Paths consistently - V (Ashutosh Chauhan via Thejas 
Nair)


Changes for Build #2563

Changes for Build #2564
[xuefu] HIVE-5446: Hive can CREATE an external table but not SELECT from it 
when file path have spaces


Changes for Build #2565

Changes for Build #2566

Changes for Build #2567
[brock] HIVE-6115 - Remove redundant code in HiveHBaseStorageHandler (Brock 
reviewed by Xuefu and Sushanth)


Changes for Build #2568
[hashutosh] HIVE-6166 : JsonSerDe is too strict about table schema (Sushanth 
Sowmyan via Ashutosh Chauhan)

[hashutosh] HIVE-5679 : add date support to metastore JDO/SQL (Sergey Shelukhin 
via Ashutosh Chauhan)


Changes for Build #2569



No tests ran.

The Apache Jenkins build system has built Hive-trunk-h0.21 (build #2569)

Status: Still Failing

Check console output at https://builds.apache.org/job/Hive-trunk-h0.21/2569/ to 
view the results.

[jira] [Commented] (HIVE-5595) Implement vectorized SMB JOIN

2014-01-13 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13869736#comment-13869736
 ] 

Hive QA commented on HIVE-5595:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12622629/HIVE-5595.3.patch

{color:green}SUCCESS:{color} +1 4919 tests passed

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/885/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/885/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12622629

 Implement vectorized SMB JOIN
 -

 Key: HIVE-5595
 URL: https://issues.apache.org/jira/browse/HIVE-5595
 Project: Hive
  Issue Type: Sub-task
Reporter: Remus Rusanu
Assignee: Remus Rusanu
Priority: Critical
 Attachments: HIVE-5595.1.patch, HIVE-5595.2.patch, HIVE-5595.3.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 Vectorized implementation of SMB Map Join.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6109) Support customized location for EXTERNAL tables created by Dynamic Partitioning

2014-01-13 Thread Satish Mittal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Satish Mittal updated HIVE-6109:


Attachment: HIVE-6109.2.patch.txt

Updated patch.

 Support customized location for EXTERNAL tables created by Dynamic 
 Partitioning
 ---

 Key: HIVE-6109
 URL: https://issues.apache.org/jira/browse/HIVE-6109
 Project: Hive
  Issue Type: Improvement
  Components: HCatalog
Reporter: Satish Mittal
 Attachments: HIVE-6109.1.patch.txt, HIVE-6109.2.patch.txt


 Currently when dynamic partitions are created by HCatalog, the underlying 
 directories for the partitions are created in a fixed 'Hive-style' format, 
 i.e. root_dir/key1=value1/key2=value2/ and so on. However, in the case of an 
 external table, the user should be able to control the format of the 
 directories created for dynamic partitions.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-5032) Enable hive creating external table at the root directory of DFS

2014-01-13 Thread Shuaishuai Nie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuaishuai Nie updated HIVE-5032:
-

Attachment: HIVE-5032.5.patch

Patch #5 fixes the unit test failure in patch #4.

 Enable hive creating external table at the root directory of DFS
 

 Key: HIVE-5032
 URL: https://issues.apache.org/jira/browse/HIVE-5032
 Project: Hive
  Issue Type: Bug
Reporter: Shuaishuai Nie
Assignee: Shuaishuai Nie
 Attachments: HIVE-5032.1.patch, HIVE-5032.2.patch, HIVE-5032.3.patch, 
 HIVE-5032.4.patch, HIVE-5032.5.patch


 Creating an external table in Hive with a location pointing to the root 
 directory of DFS will fail because the function 
 HiveFileFormatUtils#doGetPartitionDescFromPath treats the authority of the 
 path the same as a folder and therefore cannot find a match in the 
 pathToPartitionInfo table when doing a prefix match. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Review Request 16826: Review for HIVE-5032 Enable hive creating external table at the root directory of DFS

2014-01-13 Thread Shuaishuai Nie

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/16826/
---

Review request for hive, Ashutosh Chauhan and Thejas Nair.


Bugs: hive-5032
https://issues.apache.org/jira/browse/hive-5032


Repository: hive-git


Description
---

Creating an external table in Hive with a location pointing to the root directory 
of DFS will fail because the function 
HiveFileFormatUtils#doGetPartitionDescFromPath treats the authority of the path 
the same as a folder and therefore cannot find a match in the 
pathToPartitionInfo table when doing a prefix match. Instead of modifying the 
path string for a recursive prefix match, this patch uses Path.getParent() to 
get the prefix for the path. This approach also handles the corner cases that 
the original implementation failed on.
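
A hedged sketch of the parent-walk idea just described: a simplified stand-in for HiveFileFormatUtils#doGetPartitionDescFromPath that uses a plain String-keyed map in place of the real pathToPartitionInfo structure.

{code}
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.fs.Path;

// Simplified stand-in, not the actual HiveFileFormatUtils code.
public class PrefixMatchSketch {
  // Walk up with Path.getParent() until a registered prefix matches; the DFS
  // root (e.g. "hdfs://nn:8020/") is just another candidate on the way up.
  static <V> V lookup(Path dir, Map<String, V> pathToPartitionInfo) {
    for (Path p = dir; p != null; p = p.getParent()) {
      V v = pathToPartitionInfo.get(p.toString());
      if (v != null) {
        return v;
      }
    }
    return null; // no registered prefix covers this path
  }

  public static void main(String[] args) {
    Map<String, String> info = new HashMap<>();
    info.put("hdfs://nn:8020/", "partition-desc-for-root-table");
    // prints partition-desc-for-root-table
    System.out.println(lookup(new Path("hdfs://nn:8020/some/file"), info));
  }
}
{code}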


Diffs
-

  itests/qtest/pom.xml 3d3f3f8 
  ql/src/java/org/apache/hadoop/hive/ql/io/HiveFileFormatUtils.java 4be56f3 
  ql/src/test/queries/clientpositive/root_dir_external_table.q PRE-CREATION 
  ql/src/test/results/clientpositive/root_dir_external_table.q.out PRE-CREATION 

Diff: https://reviews.apache.org/r/16826/diff/


Testing
---

unit test added


Thanks,

Shuaishuai Nie



[jira] [Commented] (HIVE-5032) Enable hive creating external table at the root directory of DFS

2014-01-13 Thread Shuaishuai Nie (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13869808#comment-13869808
 ] 

Shuaishuai Nie commented on HIVE-5032:
--

Added review board for this change here: https://reviews.apache.org/r/16826/

 Enable hive creating external table at the root directory of DFS
 

 Key: HIVE-5032
 URL: https://issues.apache.org/jira/browse/HIVE-5032
 Project: Hive
  Issue Type: Bug
Reporter: Shuaishuai Nie
Assignee: Shuaishuai Nie
 Attachments: HIVE-5032.1.patch, HIVE-5032.2.patch, HIVE-5032.3.patch, 
 HIVE-5032.4.patch, HIVE-5032.5.patch


 Creating an external table in Hive with a location pointing to the root 
 directory of DFS will fail because the function 
 HiveFileFormatUtils#doGetPartitionDescFromPath treats the authority of the 
 path the same as a folder and therefore cannot find a match in the 
 pathToPartitionInfo table when doing a prefix match. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-5515) Writing to an HBase table throws IllegalArgumentException, failing job submission

2014-01-13 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13869832#comment-13869832
 ] 

Ashutosh Chauhan commented on HIVE-5515:


+1

 Writing to an HBase table throws IllegalArgumentException, failing job 
 submission
 -

 Key: HIVE-5515
 URL: https://issues.apache.org/jira/browse/HIVE-5515
 Project: Hive
  Issue Type: Bug
  Components: HBase Handler
Affects Versions: 0.12.0
 Environment: Hadoop2, Hive 0.12.0, HBase-0.96RC
Reporter: Nick Dimiduk
Assignee: Viraj Bhat
  Labels: hbase
 Fix For: 0.13.0

 Attachments: HIVE-5515.1.patch, HIVE-5515.2.patch, HIVE-5515.patch


 Inserting data into HBase table via hive query fails with the following 
 message:
 {noformat}
 $ hive -e "FROM pgc INSERT OVERWRITE TABLE pagecounts_hbase SELECT pgc.* 
 WHERE rowkey LIKE 'en/q%' LIMIT 10;"
 ...
 Total MapReduce jobs = 1
 Launching Job 1 out of 1
 Number of reduce tasks determined at compile time: 1
 In order to change the average load for a reducer (in bytes):
   set hive.exec.reducers.bytes.per.reducer=<number>
 In order to limit the maximum number of reducers:
   set hive.exec.reducers.max=<number>
 In order to set a constant number of reducers:
   set mapred.reduce.tasks=<number>
 java.lang.IllegalArgumentException: Property value must not be null
 at 
 com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
 at org.apache.hadoop.conf.Configuration.set(Configuration.java:810)
 at org.apache.hadoop.conf.Configuration.set(Configuration.java:792)
 at 
 org.apache.hadoop.hive.ql.exec.Utilities.copyTableJobPropertiesToConf(Utilities.java:2002)
 at 
 org.apache.hadoop.hive.ql.exec.FileSinkOperator.checkOutputSpecs(FileSinkOperator.java:947)
 at 
 org.apache.hadoop.hive.ql.io.HiveOutputFormatImpl.checkOutputSpecs(HiveOutputFormatImpl.java:67)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:458)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:342)
 at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
 at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
 at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
 at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
 at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
 at 
 org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
 at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
 at 
 org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:425)
 at 
 org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:136)
 at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
 at 
 org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
 at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1414)
 at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1192)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1020)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:888)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:348)
 at 
 org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:731)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
 Job Submission failed with exception 
 'java.lang.IllegalArgumentException(Property value must not be null)'
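
For context, a minimal hedged sketch of the kind of guard that avoids this failure mode; it is not the attached HIVE-5515 patch. The stack trace shows Configuration.set() rejecting a null value inside Utilities.copyTableJobPropertiesToConf, so null-valued table properties have to be skipped (or defaulted) when they are copied into the job configuration.

{code}
import java.util.Map;
import java.util.Properties;
import org.apache.hadoop.conf.Configuration;

// Sketch only, not the committed fix: skip null-valued entries so that
// Configuration.set() is never handed a null property value.
public class CopyTablePropertiesSketch {
  static void copyTableProperties(Properties tableProps, Configuration jobConf) {
    for (Map.Entry<Object, Object> e : tableProps.entrySet()) {
      if (e.getKey() != null && e.getValue() != null) {
        jobConf.set(e.getKey().toString(), e.getValue().toString());
      }
    }
  }
}
{code}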
 

[jira] [Updated] (HIVE-6152) insert query fails on hdfs federation + viewfs

2014-01-13 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-6152:


Summary: insert query fails on hdfs federation + viewfs  (was: insert query 
fails on federation + viewfs)

 insert query fails on hdfs federation + viewfs
 --

 Key: HIVE-6152
 URL: https://issues.apache.org/jira/browse/HIVE-6152
 Project: Hive
  Issue Type: Bug
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-6152.1.patch, HIVE-6152.2.patch, HIVE-6152.3.patch, 
 HIVE-6152.4.patch


 This is because Hive first writes data to /tmp/ and then moves it from /tmp to 
 the final destination. In federated HDFS the recommendation is to mount /tmp 
 on a separate nameservice, which is usually different from /user. Since 
 renames across different mount points are not supported, this fails. 
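
For orientation, a hedged Java sketch of one possible workaround, under the assumption that a copy-plus-delete fallback is acceptable; the attached HIVE-6152 patch may well take a different approach, such as placing the scratch directory under the destination's mount point.

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

// Sketch only: fall back to copy + delete when the rename is refused, e.g.
// because viewfs does not support renames across mount points.
public class MoveAcrossMountPointsSketch {
  static void move(FileSystem fs, Path src, Path dst, Configuration conf) throws IOException {
    try {
      if (fs.rename(src, dst)) {
        return;
      }
    } catch (IOException renameRefused) {
      // fall through to the copy below
    }
    FileUtil.copy(fs, src, fs, dst, true /* deleteSource */, conf);
  }
}
{code}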



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6067) Implement vectorized decimal comparison filters

2014-01-13 Thread Eric Hanson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13869862#comment-13869862
 ] 

Eric Hanson commented on HIVE-6067:
---

I ran the one failed test on my machine and it passed. This test is not related 
to the patch.

 Implement vectorized decimal comparison filters
 ---

 Key: HIVE-6067
 URL: https://issues.apache.org/jira/browse/HIVE-6067
 Project: Hive
  Issue Type: Sub-task
Affects Versions: 0.13.0
Reporter: Eric Hanson
Assignee: Eric Hanson
 Attachments: HIVE-6067.01.patch, HIVE-6067.02.patch, 
 HIVE-6067.03.patch, HIVE-6067.03.patch, HIVE-6067.04.patch


 Using the new DecimalColumnVector type, implement templates to generate 
 VectorExpression subclasses for Decimal comparison filters (<, <=, >, >=, =, 
 !=). Include scalar-column, column-scalar, and column-column filter cases. 
 Include unit tests.
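
A simplified, self-contained Java sketch of the column-scalar filter shape, assuming the usual selection-vector convention. The generated Hive classes are templated and operate on DecimalColumnVector batches, so the names and types below are stand-ins only.

{code}
import java.math.BigDecimal;

// Stand-in for one of the generated column-scalar filter classes; simplified
// to plain arrays rather than a real vectorized row batch.
public class DecimalColGreaterScalarSketch {
  // Keeps only the selected rows whose column value is > scalar, compacting
  // the selection vector in place and returning the new selected size.
  static int filter(BigDecimal[] column, int[] selected, int selectedSize, BigDecimal scalar) {
    int newSize = 0;
    for (int j = 0; j < selectedSize; j++) {
      int row = selected[j];
      if (column[row].compareTo(scalar) > 0) {
        selected[newSize++] = row;
      }
    }
    return newSize;
  }

  public static void main(String[] args) {
    BigDecimal[] col = { new BigDecimal("1.50"), new BigDecimal("3.25"), new BigDecimal("0.10") };
    int[] sel = { 0, 1, 2 };
    int n = filter(col, sel, sel.length, new BigDecimal("1.00"));
    System.out.println(n + " rows pass"); // 2 rows pass (rows 0 and 1)
  }
}
{code}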



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HIVE-6190) redundant columns in metastore schema for stats

2014-01-13 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HIVE-6190:
--

 Summary: redundant columns in metastore schema for stats
 Key: HIVE-6190
 URL: https://issues.apache.org/jira/browse/HIVE-6190
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
Priority: Minor


package.jdo has:
{noformat}
  <field name="dbName">
    <column name="DB_NAME" length="128" jdbc-type="VARCHAR" allows-null="false"/>
  </field>
  <field name="tableName">
    <column name="TABLE_NAME" length="128" jdbc-type="VARCHAR" allows-null="false"/>
  </field>
  <field name="partitionName">
    <column name="PARTITION_NAME" length="767" jdbc-type="VARCHAR" allows-null="false"/>
  </field>
  <field name="partition">
    <column name="PART_ID"/>
  </field>
{noformat}

PART_ID alone is enough; the other fields are unnecessary and may potentially 
cause bugs (similarly for table stats). One could argue that they were intended 
for performance (denormalization), but stats retrieval is currently very slow at 
a much deeper level, so the denormalization isn't really justified.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6109) Support customized location for EXTERNAL tables created by Dynamic Partitioning

2014-01-13 Thread Satish Mittal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Satish Mittal updated HIVE-6109:


Attachment: HIVE-6109.pdf

Attaching a document that describes the approach taken by the patch in 
designing/implementing the functionality.

 Support customized location for EXTERNAL tables created by Dynamic 
 Partitioning
 ---

 Key: HIVE-6109
 URL: https://issues.apache.org/jira/browse/HIVE-6109
 Project: Hive
  Issue Type: Improvement
  Components: HCatalog
Reporter: Satish Mittal
 Attachments: HIVE-6109.1.patch.txt, HIVE-6109.2.patch.txt, 
 HIVE-6109.pdf


 Currently when dynamic partitions are created by HCatalog, the underlying 
 directories for the partitions are created in a fixed 'Hive-style' format, 
 i.e. root_dir/key1=value1/key2=value2/ and so on. However, in the case of an 
 external table, the user should be able to control the format of the 
 directories created for dynamic partitions.
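
As a purely hypothetical illustration of "controlling the format" (the actual syntax and behaviour are whatever the attached HIVE-6109.pdf and patch define, not what is shown here), a simple template-substitution sketch in Java:

{code}
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch only; the real design is described in HIVE-6109.pdf.
public class PartitionPathSketch {
  // Default Hive-style layout: root_dir/key1=value1/key2=value2
  static String hiveStyle(String root, Map<String, String> partSpec) {
    StringBuilder sb = new StringBuilder(root);
    for (Map.Entry<String, String> e : partSpec.entrySet()) {
      sb.append('/').append(e.getKey()).append('=').append(e.getValue());
    }
    return sb.toString();
  }

  // A user-supplied template such as "${year}/${month}" controls the layout.
  static String custom(String root, String template, Map<String, String> partSpec) {
    String path = template;
    for (Map.Entry<String, String> e : partSpec.entrySet()) {
      path = path.replace("${" + e.getKey() + "}", e.getValue());
    }
    return root + "/" + path;
  }

  public static void main(String[] args) {
    Map<String, String> spec = new LinkedHashMap<>();
    spec.put("year", "2014");
    spec.put("month", "01");
    System.out.println(hiveStyle("/data/logs", spec));                  // /data/logs/year=2014/month=01
    System.out.println(custom("/data/logs", "${year}/${month}", spec)); // /data/logs/2014/01
  }
}
{code}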



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6067) Implement vectorized decimal comparison filters

2014-01-13 Thread Eric Hanson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Hanson updated HIVE-6067:
--

   Resolution: Implemented
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk

 Implement vectorized decimal comparison filters
 ---

 Key: HIVE-6067
 URL: https://issues.apache.org/jira/browse/HIVE-6067
 Project: Hive
  Issue Type: Sub-task
Affects Versions: 0.13.0
Reporter: Eric Hanson
Assignee: Eric Hanson
 Fix For: 0.13.0

 Attachments: HIVE-6067.01.patch, HIVE-6067.02.patch, 
 HIVE-6067.03.patch, HIVE-6067.03.patch, HIVE-6067.04.patch


 Using the new DecimalColumnVector type, implement templates to generate 
 VectorExpression subclasses for Decimal comparison filters (<, <=, >, >=, =, 
 !=). Include scalar-column, column-scalar, and column-column filter cases. 
 Include unit tests.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Hive-trunk-h0.21 - Build # 2570 - Still Failing

2014-01-13 Thread Apache Jenkins Server
Changes for Build #2539

Changes for Build #2540
[navis] HIVE-5414 : The result of show grant is not visible via JDBC (Navis 
reviewed by Thejas M Nair)


Changes for Build #2541

Changes for Build #2542
[ehans] HIVE-6017: Contribute Decimal128 high-performance decimal(p, s) package 
from Microsoft to Hive (Hideaki Kumura via Eric Hanson)


Changes for Build #2543
[cws] HIVE-3746: Fix HS2 ResultSet Serialization Performance Regression II 
(Navis via cws)

[cws] HIVE-3746: Fix HS2 ResultSet Serialization Performance Regression (Navis 
via cws)

[jitendra] HIVE-6010: TestCompareCliDriver enables tests that would ensure 
vectorization produces same results as non-vectorized execution (Sergey 
Shelukhin via Jitendra Pandey)


Changes for Build #2544
[cws] HIVE-5911: Recent change to schema upgrade scripts breaks file naming 
conventions (Sergey Shelukhin via cws)


Changes for Build #2545

Changes for Build #2546
[ehans] HIVE-5757: Implement vectorized support for CASE (Eric Hanson)


Changes for Build #2547
[thejas] HIVE-5795 : Hive should be able to skip header and footer rows when 
reading data file for a table (Shuaishuai Nie via Thejas Nair)


Changes for Build #2548
[thejas] HIVE-5923 : SQL std auth - parser changes (Thejas Nair, reviewed by 
Brock Noland)


Changes for Build #2549

Changes for Build #2550

Changes for Build #2551
[brock] HIVE-3936 - Remote debug failed with hadoop 0.23X, hadoop 2.X (Swarnim 
Kulkarni via Brock)


Changes for Build #2552

Changes for Build #2553
[gunther] HIVE-6125: Tez: Refactoring changes (Gunther Hagleitner, reviewed by 
Thejas M Nair)


Changes for Build #2554
[cws] HIVE-5829: Rewrite Trim and Pad UDFs based on GenericUDF (Mohammad Islam 
via cws)


Changes for Build #2555
[brock] HIVE-2599 - Support Composit/Compound Keys with HBaseStorageHandler 
(Swarnim Kulkarni via Brock Noland)

[brock] HIVE-5946 - DDL authorization task factory should be better tested 
(Brock reviewed by Thejas)


Changes for Build #2556
[gunther] HIVE-6105: LongWritable.compareTo needs shimming (Navis vis Gunther 
Hagleitner)


Changes for Build #2557

Changes for Build #2558
[ehans] HIVE-6051: Create DecimalColumnVector and a representative 
VectorExpression for decimal (Eric Hanson)


Changes for Build #2559
[thejas] HIVE-5224 : When creating table with AVRO serde, the avro.schema.url 
should be about to load serde schema from file system beside HDFS (Shuaishuai 
Nie via Thejas Nair)

[thejas] HIVE-6154 : HiveServer2 returns a detailed error message to the client 
only when the underlying exception is a HiveSQLException (Vaibhav Gumashta via 
Thejas Nair)


Changes for Build #2560

Changes for Build #2561
[gunther] HIVE-6098: Merge Tez branch into trunk (Gunther Hagleitner et al, 
reviewed by Thejas Nair, Vikram Dixit K, Ashutosh Chauhan)


Changes for Build #2562
[hashutosh] HIVE-6171 : Use Paths consistently - V (Ashutosh Chauhan via Thejas 
Nair)


Changes for Build #2563

Changes for Build #2564
[xuefu] HIVE-5446: Hive can CREATE an external table but not SELECT from it 
when file path have spaces


Changes for Build #2565

Changes for Build #2566

Changes for Build #2567
[brock] HIVE-6115 - Remove redundant code in HiveHBaseStorageHandler (Brock 
reviewed by Xuefu and Sushanth)


Changes for Build #2568
[hashutosh] HIVE-6166 : JsonSerDe is too strict about table schema (Sushanth 
Sowmyan via Ashutosh Chauhan)

[hashutosh] HIVE-5679 : add date support to metastore JDO/SQL (Sergey Shelukhin 
via Ashutosh Chauhan)


Changes for Build #2569

Changes for Build #2570
[ehans] HIVE-6067: Implement vectorized decimal comparison filters (Eric Hanson)




No tests ran.

The Apache Jenkins build system has built Hive-trunk-h0.21 (build #2570)

Status: Still Failing

Check console output at https://builds.apache.org/job/Hive-trunk-h0.21/2570/ to 
view the results.

[jira] [Commented] (HIVE-6185) DDLTask is inconsistent in creating a table and adding a partition when dealing with location

2014-01-13 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13869898#comment-13869898
 ] 

Xuefu Zhang commented on HIVE-6185:
---

Patch #3 incorporated the review feedback.

 DDLTask is inconsistent in creating a table and adding a partition when 
 dealing with location
 -

 Key: HIVE-6185
 URL: https://issues.apache.org/jira/browse/HIVE-6185
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.12.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Attachments: HIVE-6185.1.patch, HIVE-6185.2.patch, HIVE-6185.3.patch, 
 HIVE-6185.patch, HIVE-6185.patch


 When creating a table, Hive uses URI to represent location:
 {code}
 if (crtTbl.getLocation() != null) {
   tbl.setDataLocation(new Path(crtTbl.getLocation()).toUri());
 }
 {code}
 When adding a partition, Hive uses Path to represent location:
 {code}
   // set partition path relative to table
   db.createPartition(tbl, addPartitionDesc.getPartSpec(), new Path(tbl
 .getPath(), addPartitionDesc.getLocation()), 
 addPartitionDesc.getPartParams(),
 addPartitionDesc.getInputFormat(),
 addPartitionDesc.getOutputFormat(),
 addPartitionDesc.getNumBuckets(),
 addPartitionDesc.getCols(),
 addPartitionDesc.getSerializationLib(),
 addPartitionDesc.getSerdeParams(),
 addPartitionDesc.getBucketCols(),
 addPartitionDesc.getSortCols());
 {code}
 This disparity means the values stored in the metastore are encoded 
 differently, causing problems with special characters as demonstrated in 
 HIVE-5446. As a result, the code dealing with a table's location differs from 
 the code for a partition's, creating a maintenance burden.
 We need to standardize on Path to be in line with the other Path-related 
 cleanup effort.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6185) DDLTask is inconsistent in creating a table and adding a partition when dealing with location

2014-01-13 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-6185:
--

Attachment: HIVE-6185.3.patch

 DDLTask is inconsistent in creating a table and adding a partition when 
 dealing with location
 -

 Key: HIVE-6185
 URL: https://issues.apache.org/jira/browse/HIVE-6185
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.12.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Attachments: HIVE-6185.1.patch, HIVE-6185.2.patch, HIVE-6185.3.patch, 
 HIVE-6185.patch, HIVE-6185.patch


 When creating a table, Hive uses URI to represent location:
 {code}
 if (crtTbl.getLocation() != null) {
   tbl.setDataLocation(new Path(crtTbl.getLocation()).toUri());
 }
 {code}
 When adding a partition, Hive uses Path to represent location:
 {code}
   // set partition path relative to table
   db.createPartition(tbl, addPartitionDesc.getPartSpec(), new Path(tbl
 .getPath(), addPartitionDesc.getLocation()), 
 addPartitionDesc.getPartParams(),
 addPartitionDesc.getInputFormat(),
 addPartitionDesc.getOutputFormat(),
 addPartitionDesc.getNumBuckets(),
 addPartitionDesc.getCols(),
 addPartitionDesc.getSerializationLib(),
 addPartitionDesc.getSerdeParams(),
 addPartitionDesc.getBucketCols(),
 addPartitionDesc.getSortCols());
 {code}
 This disparity means the values stored in the metastore are encoded 
 differently, causing problems with special characters as demonstrated in 
 HIVE-5446. As a result, the code dealing with a table's location differs from 
 the code for a partition's, creating a maintenance burden.
 We need to standardize on Path to be in line with the other Path-related 
 cleanup effort.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6159) Hive uses deprecated hadoop configuration in Hadoop 2.0

2014-01-13 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13869901#comment-13869901
 ] 

Thejas M Nair commented on HIVE-6159:
-

Shanyu, can you please update the reviewboard link as well?


 Hive uses deprecated hadoop configuration in Hadoop 2.0
 ---

 Key: HIVE-6159
 URL: https://issues.apache.org/jira/browse/HIVE-6159
 Project: Hive
  Issue Type: Bug
  Components: Configuration
Affects Versions: 0.12.0
Reporter: shanyu zhao
Assignee: shanyu zhao
 Fix For: 0.13.0

 Attachments: HIVE-6159-v2.patch, HIVE-6159-v3.patch, HIVE-6159.patch


 Build hive against hadoop 2.0, then run the hive CLI; you'll see deprecated 
 configuration warnings like this:
 13/12/14 01:00:51 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
 13/12/14 01:00:52 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
 13/12/14 01:00:52 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
 13/12/14 01:00:52 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
 13/12/14 01:00:52 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
 13/12/14 01:00:52 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
 13/12/14 01:00:52 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
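
For reference, a minimal sketch (not the attached patch) of setting the Hadoop 2 
replacement keys directly, so the Configuration deprecation shim has nothing to 
warn about; the key pairs are the ones listed in the warnings above, and the 
values are placeholders.
{code}
import org.apache.hadoop.conf.Configuration;

public class NewKeyNames {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // old: mapred.max.split.size
    conf.setLong("mapreduce.input.fileinputformat.split.maxsize", 256L * 1024 * 1024);
    // old: mapred.min.split.size
    conf.setLong("mapreduce.input.fileinputformat.split.minsize", 1L);
    // old: mapred.reduce.tasks
    conf.setInt("mapreduce.job.reduces", 4);
    // old: mapred.reduce.tasks.speculative.execution
    conf.setBoolean("mapreduce.reduce.speculative", true);
    System.out.println(conf.get("mapreduce.job.reduces"));  // 4
  }
}
{code}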



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Re: Review Request 16818: HIVE-6189: Support top level union all statements

2014-01-13 Thread Gunther Hagleitner

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/16818/
---

(Updated Jan. 13, 2014, 8:05 p.m.)


Review request for hive.


Changes
---

fix unit tests


Repository: hive-git


Description
---

https://issues.apache.org/jira/browse/HIVE-6189


Diffs (updated)
-

  ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java efe3286 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ColumnStatsSemanticAnalyzer.java 
5b77e6f 
  ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 713bd54 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ExplainSemanticAnalyzer.java 
327 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ExportSemanticAnalyzer.java 
be0ad62 
  ql/src/java/org/apache/hadoop/hive/ql/parse/FunctionSemanticAnalyzer.java 
da917f7 
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g 5dff3fe 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java 
1ab5a60 
  ql/src/java/org/apache/hadoop/hive/ql/parse/LoadSemanticAnalyzer.java 5663fca 
  ql/src/java/org/apache/hadoop/hive/ql/parse/MacroSemanticAnalyzer.java 
b42a425 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 8e68fcf 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SubQueryUtils.java 8ffbe07 
  ql/src/java/org/apache/hadoop/hive/ql/parse/UnparseTranslator.java 93e3ad3 
  ql/src/test/queries/clientnegative/union.q e3c5c83 
  ql/src/test/queries/clientpositive/union_top_level.q PRE-CREATION 
  ql/src/test/results/clientnegative/union.q.out b66d394 
  ql/src/test/results/clientpositive/union_top_level.q.out PRE-CREATION 
  ql/src/test/results/compiler/parse/sample2.q.out e67c761 
  ql/src/test/results/compiler/parse/sample3.q.out ad5855b 
  ql/src/test/results/compiler/parse/sample4.q.out 790b009 
  ql/src/test/results/compiler/parse/sample5.q.out cb55074 
  ql/src/test/results/compiler/parse/sample6.q.out 3562bb8 
  ql/src/test/results/compiler/parse/sample7.q.out 6bcf840 

Diff: https://reviews.apache.org/r/16818/diff/


Testing
---

union_top_level.q contains tests for select, insert into, insert overwrite, 
ctas and views


Thanks,

Gunther Hagleitner



[jira] [Updated] (HIVE-6189) Support top level union all statements

2014-01-13 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-6189:
-

Attachment: HIVE-6189.2.patch

.2 addresses the failures.

 Support top level union all statements
 --

 Key: HIVE-6189
 URL: https://issues.apache.org/jira/browse/HIVE-6189
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Attachments: HIVE-6189.1.patch, HIVE-6189.2.patch


 I've always wondered why union all has to be in subqueries in hive.
 After looking at it, problems are:
 - Hive Parser:
   - Union happens at the wrong place: (insert ... select ... union all select 
 ...) is parsed as (insert select) union select.
   - There are many rewrite rules in the parser to force any query into the 
 from-insert-select form. No doubt for historical reasons.
 - Plan generation/semantic analysis assumes top level TOK_QUERY and not top 
 level TOK_UNION.
 The rewrite rules don't work when we move the UNION ALL recursion into the 
 select statements. However, it's not hard to do that in code.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6189) Support top level union all statements

2014-01-13 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-6189:
-

Status: Patch Available  (was: Open)

 Support top level union all statements
 --

 Key: HIVE-6189
 URL: https://issues.apache.org/jira/browse/HIVE-6189
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Attachments: HIVE-6189.1.patch, HIVE-6189.2.patch


 I've always wondered why union all has to be in subqueries in hive.
 After looking at it, problems are:
 - Hive Parser:
   - Union happens at the wrong place: (insert ... select ... union all select 
 ...) is parsed as (insert select) union select.
   - There are many rewrite rules in the parser to force any query into the 
 from-insert-select form. No doubt for historical reasons.
 - Plan generation/semantic analysis assumes top level TOK_QUERY and not top 
 level TOK_UNION.
 The rewrite rules don't work when we move the UNION ALL recursion into the 
 select statements. However, it's not hard to do that in code.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6189) Support top level union all statements

2014-01-13 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13869909#comment-13869909
 ] 

Gunther Hagleitner commented on HIVE-6189:
--

RB: https://reviews.apache.org/r/16818/

 Support top level union all statements
 --

 Key: HIVE-6189
 URL: https://issues.apache.org/jira/browse/HIVE-6189
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Attachments: HIVE-6189.1.patch, HIVE-6189.2.patch


 I've always wondered why union all has to be in subqueries in hive.
 After looking at it, problems are:
 - Hive Parser:
   - Union happens at the wrong place: (insert ... select ... union all select 
 ...) is parsed as (insert select) union select.
   - There are many rewrite rules in the parser to force any query into the 
 from-insert-select form. No doubt for historical reasons.
 - Plan generation/semantic analysis assumes top level TOK_QUERY and not top 
 level TOK_UNION.
 The rewrite rules don't work when we move the UNION ALL recursion into the 
 select statements. However, it's not hard to do that in code.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6189) Support top level union all statements

2014-01-13 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-6189:
-

Status: Open  (was: Patch Available)

 Support top level union all statements
 --

 Key: HIVE-6189
 URL: https://issues.apache.org/jira/browse/HIVE-6189
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Attachments: HIVE-6189.1.patch, HIVE-6189.2.patch


 I've always wondered why union all has to be in subqueries in hive.
 After looking at it, problems are:
 - Hive Parser:
   - Union happens at the wrong place: (insert ... select ... union all select 
 ...) is parsed as (insert select) union select.
   - There are many rewrite rules in the parser to force any query into the 
 from-insert-select form. No doubt for historical reasons.
 - Plan generation/semantic analysis assumes top level TOK_QUERY and not top 
 level TOK_UNION.
 The rewrite rules don't work when we move the UNION ALL recursion into the 
 select statements. However, it's not hard to do that in code.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-5595) Implement vectorized SMB JOIN

2014-01-13 Thread Eric Hanson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13869911#comment-13869911
 ] 

Eric Hanson commented on HIVE-5595:
---

+1

 Implement vectorized SMB JOIN
 -

 Key: HIVE-5595
 URL: https://issues.apache.org/jira/browse/HIVE-5595
 Project: Hive
  Issue Type: Sub-task
Reporter: Remus Rusanu
Assignee: Remus Rusanu
Priority: Critical
 Attachments: HIVE-5595.1.patch, HIVE-5595.2.patch, HIVE-5595.3.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 Vectorized implementation of SMB Map Join.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6109) Support customized location for EXTERNAL tables created by Dynamic Partitioning

2014-01-13 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13869933#comment-13869933
 ] 

Hive QA commented on HIVE-6109:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12622673/HIVE-6109.2.patch.txt

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 4919 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testNegativeCliDriver_mapreduce_stack_trace_hadoop20
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/886/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/886/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12622673

 Support customized location for EXTERNAL tables created by Dynamic 
 Partitioning
 ---

 Key: HIVE-6109
 URL: https://issues.apache.org/jira/browse/HIVE-6109
 Project: Hive
  Issue Type: Improvement
  Components: HCatalog
Reporter: Satish Mittal
 Attachments: HIVE-6109.1.patch.txt, HIVE-6109.2.patch.txt, 
 HIVE-6109.pdf


 Currently, when dynamic partitions are created by HCatalog, the underlying 
 directories for the partitions are created in a fixed 'Hive-style' format, 
 i.e. root_dir/key1=value1/key2=value2/ and so on. However, in the case of an 
 external table, the user should be able to control the format of the 
 directories created for dynamic partitions.
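
A minimal sketch of the two layouts, for illustration only: hiveStyle() is the 
fixed key=value format produced today, while custom() stands in for a 
user-controlled pattern. The ${key} syntax and both helper names are hypothetical, 
not HCatalog's API.
{code}
import java.util.LinkedHashMap;
import java.util.Map;

public class PartitionLayoutSketch {
  // Fixed "Hive-style" layout produced today: key1=value1/key2=value2
  static String hiveStyle(Map<String, String> spec) {
    StringBuilder sb = new StringBuilder();
    for (Map.Entry<String, String> e : spec.entrySet()) {
      if (sb.length() > 0) sb.append('/');
      sb.append(e.getKey()).append('=').append(e.getValue());
    }
    return sb.toString();
  }

  // Hypothetical user-supplied pattern, e.g. "${year}/${month}" -- illustration only.
  static String custom(String pattern, Map<String, String> spec) {
    String out = pattern;
    for (Map.Entry<String, String> e : spec.entrySet()) {
      out = out.replace("${" + e.getKey() + "}", e.getValue());
    }
    return out;
  }

  public static void main(String[] args) {
    Map<String, String> spec = new LinkedHashMap<>();
    spec.put("year", "2014");
    spec.put("month", "01");
    System.out.println(hiveStyle(spec));                   // year=2014/month=01
    System.out.println(custom("${year}/${month}", spec));  // 2014/01
  }
}
{code}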



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Hive-trunk-hadoop2 - Build # 670 - Still Failing

2014-01-13 Thread Apache Jenkins Server
Changes for Build #640

Changes for Build #641
[navis] HIVE-5414 : The result of show grant is not visible via JDBC (Navis 
reviewed by Thejas M Nair)

[navis] HIVE-4257 : java.sql.SQLNonTransientConnectionException on 
JDBCStatsAggregator (Teddy Choi via Navis, reviewed by Ashutosh)


Changes for Build #642

Changes for Build #643
[ehans] HIVE-6017: Contribute Decimal128 high-performance decimal(p, s) package 
from Microsoft to Hive (Hideaki Kumura via Eric Hanson)


Changes for Build #644
[cws] HIVE-5911: Recent change to schema upgrade scripts breaks file naming 
conventions (Sergey Shelukhin via cws)

[cws] HIVE-3746: Fix HS2 ResultSet Serialization Performance Regression II 
(Navis via cws)

[cws] HIVE-3746: Fix HS2 ResultSet Serialization Performance Regression (Navis 
via cws)

[jitendra] HIVE-6010: TestCompareCliDriver enables tests that would ensure 
vectorization produces same results as non-vectorized execution (Sergey 
Shelukhin via Jitendra Pandey)


Changes for Build #645

Changes for Build #646
[ehans] HIVE-5757: Implement vectorized support for CASE (Eric Hanson)


Changes for Build #647
[thejas] HIVE-5795 : Hive should be able to skip header and footer rows when 
reading data file for a table (Shuaishuai Nie via Thejas Nair)


Changes for Build #648
[thejas] HIVE-5923 : SQL std auth - parser changes (Thejas Nair, reviewed by 
Brock Noland)


Changes for Build #649

Changes for Build #650

Changes for Build #651
[brock] HIVE-3936 - Remote debug failed with hadoop 0.23X, hadoop 2.X (Swarnim 
Kulkarni via Brock)


Changes for Build #652

Changes for Build #653
[gunther] HIVE-6125: Tez: Refactoring changes (Gunther Hagleitner, reviewed by 
Thejas M Nair)


Changes for Build #654
[cws] HIVE-5829: Rewrite Trim and Pad UDFs based on GenericUDF (Mohammad Islam 
via cws)


Changes for Build #655
[brock] HIVE-2599 - Support Composit/Compound Keys with HBaseStorageHandler 
(Swarnim Kulkarni via Brock Noland)

[brock] HIVE-5946 - DDL authorization task factory should be better tested 
(Brock reviewed by Thejas)


Changes for Build #656

Changes for Build #657
[gunther] HIVE-6105: LongWritable.compareTo needs shimming (Navis vis Gunther 
Hagleitner)


Changes for Build #658

Changes for Build #659
[ehans] HIVE-6051: Create DecimalColumnVector and a representative 
VectorExpression for decimal (Eric Hanson)


Changes for Build #660
[thejas] HIVE-5224 : When creating table with AVRO serde, the avro.schema.url 
should be about to load serde schema from file system beside HDFS (Shuaishuai 
Nie via Thejas Nair)

[thejas] HIVE-6154 : HiveServer2 returns a detailed error message to the client 
only when the underlying exception is a HiveSQLException (Vaibhav Gumashta via 
Thejas Nair)


Changes for Build #661

Changes for Build #662
[gunther] HIVE-6098: Merge Tez branch into trunk (Gunther Hagleitner et al, 
reviewed by Thejas Nair, Vikram Dixit K, Ashutosh Chauhan)


Changes for Build #663
[hashutosh] HIVE-6171 : Use Paths consistently - V (Ashutosh Chauhan via Thejas 
Nair)


Changes for Build #664
[xuefu] HIVE-5446: Hive can CREATE an external table but not SELECT from it 
when file path have spaces


Changes for Build #665

Changes for Build #666

Changes for Build #667
[brock] HIVE-6115 - Remove redundant code in HiveHBaseStorageHandler (Brock 
reviewed by Xuefu and Sushanth)


Changes for Build #668
[hashutosh] HIVE-6166 : JsonSerDe is too strict about table schema (Sushanth 
Sowmyan via Ashutosh Chauhan)

[hashutosh] HIVE-5679 : add date support to metastore JDO/SQL (Sergey Shelukhin 
via Ashutosh Chauhan)


Changes for Build #669

Changes for Build #670
[ehans] HIVE-6067: Implement vectorized decimal comparison filters (Eric Hanson)




No tests ran.

The Apache Jenkins build system has built Hive-trunk-hadoop2 (build #670)

Status: Still Failing

Check console output at https://builds.apache.org/job/Hive-trunk-hadoop2/670/ 
to view the results.

[jira] [Commented] (HIVE-5032) Enable hive creating external table at the root directory of DFS

2014-01-13 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13869958#comment-13869958
 ] 

Ashutosh Chauhan commented on HIVE-5032:


+1

 Enable hive creating external table at the root directory of DFS
 

 Key: HIVE-5032
 URL: https://issues.apache.org/jira/browse/HIVE-5032
 Project: Hive
  Issue Type: Bug
Reporter: Shuaishuai Nie
Assignee: Shuaishuai Nie
 Attachments: HIVE-5032.1.patch, HIVE-5032.2.patch, HIVE-5032.3.patch, 
 HIVE-5032.4.patch, HIVE-5032.5.patch


 Creating an external table using Hive with a location pointing to the root 
 directory of DFS will fail, because 
 HiveFileFormatUtils#doGetPartitionDescFromPath treats the authority of the path 
 as if it were a folder and therefore cannot find a match in the 
 pathToPartitionInfo table when doing the prefix match.
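
A minimal sketch (helper names are assumptions, not the actual HiveFileFormatUtils 
change) of the kind of normalization that lets the prefix match ignore scheme and 
authority, so a table located at the root of DFS still matches.
{code}
import org.apache.hadoop.fs.Path;

public class PrefixMatchSketch {
  // Strip scheme and authority so "hdfs://nn:8020/" and "/" compare as the same prefix.
  static String pathOnly(String location) {
    return new Path(location).toUri().getPath();
  }

  static boolean isPrefix(String tableLocation, String fileDir) {
    String prefix = pathOnly(tableLocation);
    String dir = pathOnly(fileDir);
    return dir.equals(prefix)
        || dir.startsWith(prefix.endsWith("/") ? prefix : prefix + "/");
  }

  public static void main(String[] args) {
    // External table located at the root directory of DFS.
    System.out.println(isPrefix("hdfs://nn:8020/", "hdfs://nn:8020/data/file1"));        // true
    System.out.println(isPrefix("hdfs://nn:8020/warehouse/t1", "hdfs://nn:8020/other")); // false
  }
}
{code}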



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HIVE-6191) remove explicit Joda dependency from itests/hcatalog-unit/pom.xml

2014-01-13 Thread Eugene Koifman (JIRA)
Eugene Koifman created HIVE-6191:


 Summary: remove explicit Joda dependency from 
itests/hcatalog-unit/pom.xml
 Key: HIVE-6191
 URL: https://issues.apache.org/jira/browse/HIVE-6191
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog, Tests
Affects Versions: 0.13.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman


The Joda library is used by Pig and should be pulled down automatically by Maven. 
Unfortunately, Pig 12 is missing the relevant attribute from its build file 
(PIG-3516), so I added Joda explicitly to itests/hcatalog-unit/pom.xml. This 
should be removed once Pig 13 is released and the HCat dependency is upgraded.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6152) insert query fails on hdfs federation + viewfs

2014-01-13 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13869983#comment-13869983
 ] 

Thejas M Nair commented on HIVE-6152:
-

I think it will be better to use the same code path with and without viewfs. That 
change should work for the non-viewfs case as well. That way we don't have another 
untested combination.


 insert query fails on hdfs federation + viewfs
 --

 Key: HIVE-6152
 URL: https://issues.apache.org/jira/browse/HIVE-6152
 Project: Hive
  Issue Type: Bug
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-6152.1.patch, HIVE-6152.2.patch, HIVE-6152.3.patch, 
 HIVE-6152.4.patch


 This is because Hive first writes data to /tmp/ and then moves it from /tmp to 
 the final destination. In federated HDFS the recommendation is to mount /tmp on 
 a separate nameservice, which is usually different from /user. Since renames 
 across different mount points are not supported, this fails.
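
For illustration, a minimal sketch of the general workaround, which is not 
necessarily what the attached patches do: try the cheap rename first and fall 
back to copy-plus-delete when the move crosses filesystems or mount points.
{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class MoveAcrossMounts {
  static void move(Path src, Path dst, Configuration conf) throws IOException {
    FileSystem srcFs = src.getFileSystem(conf);
    FileSystem dstFs = dst.getFileSystem(conf);
    boolean renamed = false;
    if (srcFs.getUri().equals(dstFs.getUri())) {
      try {
        renamed = srcFs.rename(src, dst);  // cheap, metadata-only move
      } catch (IOException acrossMounts) {
        renamed = false;                   // e.g. viewfs rename across mount points
      }
    }
    if (!renamed) {
      // Different filesystems or mount points: copy the data, then delete the source.
      FileUtil.copy(srcFs, src, dstFs, dst, true /* deleteSource */, conf);
    }
  }
}
{code}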



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Review Request 16829: optimize sum(1) query so that it could be answered from metadata using stats

2014-01-13 Thread Ashutosh Chauhan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/16829/
---

Review request for hive.


Bugs: HIVE-6192
https://issues.apache.org/jira/browse/HIVE-6192


Repository: hive-git


Description
---

optimize sum(1) query so that it could be answered from metadata using stats


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/optimizer/StatsOptimizer.java 75390e7 
  ql/src/test/queries/clientpositive/metadata_only_queries.q 7cbd148 
  ql/src/test/results/clientpositive/metadata_only_queries.q.out b6d149a 

Diff: https://reviews.apache.org/r/16829/diff/


Testing
---

Added test in metadata_only_queries.q which contains other similar tests.


Thanks,

Ashutosh Chauhan



[jira] [Updated] (HIVE-6192) Optimize sum(1) to answer query using metadata

2014-01-13 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6192:
---

Status: Patch Available  (was: Open)

 Optimize sum(1) to answer query using metadata
 --

 Key: HIVE-6192
 URL: https://issues.apache.org/jira/browse/HIVE-6192
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor, Statistics
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-6192.patch


 sum(1) has the same semantics as count(1), so it can also be optimized in a 
 similar fashion by answering the query from stats stored in the metastore.
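
Illustration only: since every row contributes exactly 1, sum(1) over N rows is 
simply N, so it can be answered from the row count the metastore already keeps, 
just like count(1). The TableStats interface below is a hypothetical stand-in, 
not the real StatsOptimizer API.
{code}
public class SumOneFromStats {
  interface TableStats { long rowCount(); }

  static long sumOfOne(TableStats stats) {
    return stats.rowCount();              // sum(1) == count(1) == row count
  }

  public static void main(String[] args) {
    TableStats stats = () -> 42L;         // pretend the metastore reports 42 rows
    System.out.println(sumOfOne(stats));  // 42
  }
}
{code}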



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HIVE-6192) Optimize sum(1) to answer query using metadata

2014-01-13 Thread Ashutosh Chauhan (JIRA)
Ashutosh Chauhan created HIVE-6192:
--

 Summary: Optimize sum(1) to answer query using metadata
 Key: HIVE-6192
 URL: https://issues.apache.org/jira/browse/HIVE-6192
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor, Statistics
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan


sum(1) has the same semantics as count(1), so it can also be optimized in a 
similar fashion by answering the query from stats stored in the metastore.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Patch available for HIVE-6104

2014-01-13 Thread Steven Wong
Hi devs,

Would someone please review the patch for HIVE-6104 
(https://issues.apache.org/jira/browse/HIVE-6104)?
It is a very small patch that should be easy to review and commit.

Thanks in advance.

Steven


[jira] [Updated] (HIVE-6192) Optimize sum(1) to answer query using metadata

2014-01-13 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6192:
---

Attachment: HIVE-6192.patch

 Optimize sum(1) to answer query using metadata
 --

 Key: HIVE-6192
 URL: https://issues.apache.org/jira/browse/HIVE-6192
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor, Statistics
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-6192.patch


 sum(1) has the same semantics as count(1), so it can also be optimized in a 
 similar fashion by answering the query from stats stored in the metastore.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6192) Optimize sum(1) to answer query using metadata

2014-01-13 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13870020#comment-13870020
 ] 

Ashutosh Chauhan commented on HIVE-6192:


RB request : https://reviews.apache.org/r/16829/

 Optimize sum(1) to answer query using metadata
 --

 Key: HIVE-6192
 URL: https://issues.apache.org/jira/browse/HIVE-6192
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor, Statistics
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-6192.patch


 sum(1) has the same semantics as count(1), so it can also be optimized in a 
 similar fashion by answering the query from stats stored in the metastore.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6164) Hive build on Windows failed with datanucleus enhancer error command line is too long

2014-01-13 Thread shanyu zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shanyu zhao updated HIVE-6164:
--

Description: 
Build hive 0.13 against hadoop 2.0 on Windows always fails:
mvn install -Phadoop-2
...
[ERROR] 
[ERROR]  Standard error from the DataNucleus tool + org.datanucleus.enhancer.DataNucleusEnhancer :
[ERROR] 
[ERROR] The command line is too long.

  was:
Build hive 0.13 on Windows always fail with error:

[ERROR] 
[ERROR]  Standard error from the DataNucleus tool + org.datanucleus.enhancer.DataNucleusEnhancer :
[ERROR] 
[ERROR] The command line is too long.


 Hive build on Windows failed with datanucleus enhancer error command line is 
 too long
 ---

 Key: HIVE-6164
 URL: https://issues.apache.org/jira/browse/HIVE-6164
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.13.0
Reporter: shanyu zhao
Assignee: shanyu zhao
 Fix For: 0.13.0

 Attachments: HIVE-6164.patch


 Build hive 0.13 against hadoop 2.0 on Windows always fails:
 mvn install -Phadoop-2
 ...
 [ERROR] 
 [ERROR]  Standard error from the DataNucleus tool + org.datanucleus.enhancer.DataNucleusEnhancer :
 [ERROR] 
 [ERROR] The command line is too long.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6159) Hive uses deprecated hadoop configuration in Hadoop 2.0

2014-01-13 Thread shanyu zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13870029#comment-13870029
 ] 

shanyu zhao commented on HIVE-6159:
---

Just updated review board.

 Hive uses deprecated hadoop configuration in Hadoop 2.0
 ---

 Key: HIVE-6159
 URL: https://issues.apache.org/jira/browse/HIVE-6159
 Project: Hive
  Issue Type: Bug
  Components: Configuration
Affects Versions: 0.12.0
Reporter: shanyu zhao
Assignee: shanyu zhao
 Fix For: 0.13.0

 Attachments: HIVE-6159-v2.patch, HIVE-6159-v3.patch, HIVE-6159.patch


 Build hive against hadoop 2.0, then run the hive CLI; you'll see deprecated 
 configuration warnings like this:
 13/12/14 01:00:51 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
 13/12/14 01:00:52 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
 13/12/14 01:00:52 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
 13/12/14 01:00:52 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
 13/12/14 01:00:52 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
 13/12/14 01:00:52 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
 13/12/14 01:00:52 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Re: Review Request 16747: Add file pruning into Hive

2014-01-13 Thread Sergey Shelukhin


 On Jan. 10, 2014, 6:02 p.m., Sergey Shelukhin wrote:
  ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java, line 2868
  https://reviews.apache.org/r/16747/diff/1/?file=419383#file419383line2868
 
  why make it a hashset now? or should it have always been one
 
 Navis Ryu wrote:
 I'm a little confused about this. Would it not be possible to have multiple 
 paths for an alias? I think that kind of scenario is not supported by current 
 hive. Reverting to list.

just checking... if it makes sense it's ok


 On Jan. 10, 2014, 6:02 p.m., Sergey Shelukhin wrote:
  ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java, line 2921
  https://reviews.apache.org/r/16747/diff/1/?file=419383#file419383line2921
 
  nit: could return Collection from the method if it's not hard to change
 
 Navis Ryu wrote:
 It's used by other code parts including TEZ. Would it be better to leave 
 it as-is?

probably better to keep as is then... thanks


 On Jan. 10, 2014, 6:02 p.m., Sergey Shelukhin wrote:
  ql/src/java/org/apache/hadoop/hive/ql/plan/MapWork.java, line 559
  https://reviews.apache.org/r/16747/diff/1/?file=419395#file419395line559
 
  why is it recreating the list? maybe use addAll if it is needed?
 
 Navis Ryu wrote:
 to convert String to Path?

ah, ic. Thanks


- Sergey


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/16747/#review31518
---


On Jan. 13, 2014, 4:33 a.m., Navis Ryu wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/16747/
 ---
 
 (Updated Jan. 13, 2014, 4:33 a.m.)
 
 
 Review request for hive.
 
 
 Bugs: HIVE-1662
 https://issues.apache.org/jira/browse/HIVE-1662
 
 
 Repository: hive-git
 
 
 Description
 ---
 
  Now hive supports the filename virtual column. 
  If a file name filter is present in a query, hive should be able to add only 
  the files that pass the filter to the input paths.
 
 
 Diffs
 -
 
   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 16d54c6 
   ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java 96a78fc 
   ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java fccea89 
   ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecDriver.java 5511bca 
   ql/src/java/org/apache/hadoop/hive/ql/exec/mr/MapRedTask.java a7e2253 
   ql/src/java/org/apache/hadoop/hive/ql/index/IndexPredicateAnalyzer.java 
 e66c22c 
   ql/src/java/org/apache/hadoop/hive/ql/io/HiveFileFormatUtils.java 4be56f3 
   ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java 99172d4 
   
 ql/src/java/org/apache/hadoop/hive/ql/metadata/FilePrunningPredicateHandler.java
  PRE-CREATION 
   
 ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveStoragePredicateHandler.java
  9f35575 
   
 ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/AbstractJoinTaskDispatcher.java
  33ef581 
   
 ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/index/IndexWhereProcessor.java
  5c6751c 
   ql/src/java/org/apache/hadoop/hive/ql/parse/MapReduceCompiler.java 76f5a31 
   ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeDescUtils.java 96c8d89 
   ql/src/java/org/apache/hadoop/hive/ql/plan/MapWork.java 9929275 
   ql/src/java/org/apache/hadoop/hive/ql/plan/MapredWork.java f3203bf 
   ql/src/java/org/apache/hadoop/hive/ql/plan/PlanUtils.java 6ee6bee 
   ql/src/java/org/apache/hadoop/hive/ql/plan/TableScanDesc.java 9c35890 
   ql/src/java/org/apache/hadoop/hive/ql/ppd/OpProcFactory.java 40298e1 
   ql/src/test/queries/clientpositive/file_pruning.q PRE-CREATION 
   ql/src/test/results/clientpositive/file_pruning.q.out PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/16747/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Navis Ryu
 




hive unit test report question

2014-01-13 Thread Shanyu Zhao
Hi,

I was trying to build hive trunk, run all unit tests and generate reports, but 
I'm not sure what's the correct command line. I was using:
mvn clean install -Phadoop-2 -DskipTests
mvn test surefire-report:report -Phadoop-2
But the reports in the root folder and several other projects (such as 
metastore) are empty with no test results. And I couldn't find a summary page 
for all unit tests.

I was trying to avoid mvn site because it seems to take forever to finish. Am 
I using the correct commands? How can I get a report like the one in the 
precommit report:
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/827/testReport/?

I really appreciate your help!

Shanyu


[jira] [Commented] (HIVE-5032) Enable hive creating external table at the root directory of DFS

2014-01-13 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13870045#comment-13870045
 ] 

Hive QA commented on HIVE-5032:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12622674/HIVE-5032.5.patch

{color:green}SUCCESS:{color} +1 4924 tests passed

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/887/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/887/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12622674

 Enable hive creating external table at the root directory of DFS
 

 Key: HIVE-5032
 URL: https://issues.apache.org/jira/browse/HIVE-5032
 Project: Hive
  Issue Type: Bug
Reporter: Shuaishuai Nie
Assignee: Shuaishuai Nie
 Attachments: HIVE-5032.1.patch, HIVE-5032.2.patch, HIVE-5032.3.patch, 
 HIVE-5032.4.patch, HIVE-5032.5.patch


 Creating an external table using Hive with a location pointing to the root 
 directory of DFS will fail, because 
 HiveFileFormatUtils#doGetPartitionDescFromPath treats the authority of the path 
 as if it were a folder and therefore cannot find a match in the 
 pathToPartitionInfo table when doing the prefix match.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-1662) Add file pruning into Hive.

2014-01-13 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13870067#comment-13870067
 ] 

Sergey Shelukhin commented on HIVE-1662:


Patch looks reasonable to me but tests fail

 Add file pruning into Hive.
 ---

 Key: HIVE-1662
 URL: https://issues.apache.org/jira/browse/HIVE-1662
 Project: Hive
  Issue Type: New Feature
Reporter: He Yongqiang
Assignee: Navis
 Attachments: HIVE-1662.10.patch.txt, HIVE-1662.8.patch.txt, 
 HIVE-1662.9.patch.txt, HIVE-1662.D8391.1.patch, HIVE-1662.D8391.2.patch, 
 HIVE-1662.D8391.3.patch, HIVE-1662.D8391.4.patch, HIVE-1662.D8391.5.patch, 
 HIVE-1662.D8391.6.patch, HIVE-1662.D8391.7.patch


 Now hive supports the filename virtual column. 
 If a file name filter is present in a query, hive should be able to add only 
 the files that pass the filter to the input paths.
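
A minimal sketch of the basic idea (helper names are illustrative, not the 
patch): evaluate the pushed-down filename filter while building the input paths, 
so files that cannot match are never added.
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;
import org.apache.hadoop.fs.Path;

public class FilePruningSketch {
  // Keep only the files whose names pass the pushed-down INPUT__FILE__NAME filter.
  static List<Path> pruneInputPaths(List<Path> candidates, Predicate<String> fileNameFilter) {
    List<Path> kept = new ArrayList<>();
    for (Path p : candidates) {
      if (fileNameFilter.test(p.getName())) {
        kept.add(p);
      }
    }
    return kept;
  }
}
{code}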



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Re: hive unit test report question

2014-01-13 Thread Xuefu Zhang
You probably need to cd into itests between your two steps.

--Xuefu


On Mon, Jan 13, 2014 at 1:59 PM, Shanyu Zhao shz...@microsoft.com wrote:

 Hi,

 I was trying to build hive trunk, run all unit tests and generate reports,
 but I'm not sure what's the correct command line. I was using:
 mvn clean install -Phadoop-2 -DskipTests
 mvn test surefire-report:report -Phadoop-2
 But the reports in the root folder and several other projects (such as
 metastore) are empty with no test results. And I couldn't find a summary
 page for all unit tests.

 I was trying to avoid mvn site because it seems to take forever to
 finish. Am I using the correct commands? How can I get a report like the
 one in the precommit report:
 http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/827/testReport/
 ?

 I really appreciate your help!

 Shanyu



[jira] [Updated] (HIVE-6124) Support basic Decimal arithmetic in vector mode (+, -, *)

2014-01-13 Thread Eric Hanson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Hanson updated HIVE-6124:
--

Attachment: HIVE-6124.03.patch

Re-based patch on trunk. Fixed minor conflicts. Verified that 
TestVectorArithmeticExpressions.java tests pass.

 Support basic Decimal arithmetic in vector mode (+, -, *)
 -

 Key: HIVE-6124
 URL: https://issues.apache.org/jira/browse/HIVE-6124
 Project: Hive
  Issue Type: Sub-task
Affects Versions: 0.13.0
Reporter: Eric Hanson
Assignee: Eric Hanson
 Attachments: HIVE-6124.01.patch, HIVE-6124.02.patch, 
 HIVE-6124.03.patch


 Create support for basic decimal arithmetic (+, -, * but not /, %) based on 
 templates for column-scalar, scalar-column, and column-column operations.
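
A generic sketch of what a column-column vectorized add looks like, processing a 
whole batch at a time; BigDecimal is used here as a stand-in for Hive's decimal 
representation, and none of this is the templated code from the patch.
{code}
import java.math.BigDecimal;

public class DecimalColAddColSketch {
  // Process a whole column vector per call instead of one row at a time.
  static void addColCol(BigDecimal[] left, BigDecimal[] right, BigDecimal[] out, int batchSize) {
    for (int i = 0; i < batchSize; i++) {
      out[i] = left[i].add(right[i]);
    }
  }

  public static void main(String[] args) {
    BigDecimal[] a = { new BigDecimal("1.10"), new BigDecimal("2.25") };
    BigDecimal[] b = { new BigDecimal("0.90"), new BigDecimal("0.75") };
    BigDecimal[] out = new BigDecimal[2];
    addColCol(a, b, out, 2);
    System.out.println(out[0] + " " + out[1]);  // 2.00 3.00
  }
}
{code}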



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6124) Support basic Decimal arithmetic in vector mode (+, -, *)

2014-01-13 Thread Eric Hanson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13870081#comment-13870081
 ] 

Eric Hanson commented on HIVE-6124:
---

Code review at https://reviews.apache.org/r/16832/

 Support basic Decimal arithmetic in vector mode (+, -, *)
 -

 Key: HIVE-6124
 URL: https://issues.apache.org/jira/browse/HIVE-6124
 Project: Hive
  Issue Type: Sub-task
Affects Versions: 0.13.0
Reporter: Eric Hanson
Assignee: Eric Hanson
 Attachments: HIVE-6124.01.patch, HIVE-6124.02.patch, 
 HIVE-6124.03.patch


 Create support for basic decimal arithmetic (+, -, * but not /, %) based on 
 templates for column-scalar, scalar-column, and column-column operations.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6124) Support basic Decimal arithmetic in vector mode (+, -, *)

2014-01-13 Thread Eric Hanson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Hanson updated HIVE-6124:
--

Status: Patch Available  (was: In Progress)

 Support basic Decimal arithmetic in vector mode (+, -, *)
 -

 Key: HIVE-6124
 URL: https://issues.apache.org/jira/browse/HIVE-6124
 Project: Hive
  Issue Type: Sub-task
Affects Versions: 0.13.0
Reporter: Eric Hanson
Assignee: Eric Hanson
 Attachments: HIVE-6124.01.patch, HIVE-6124.02.patch, 
 HIVE-6124.03.patch


 Create support for basic decimal arithmetic (+, -, * but not /, %) based on 
 templates for column-scalar, scalar-column, and column-column operations.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HIVE-6193) change partition pruning request to metastore to use list instead of set

2014-01-13 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HIVE-6193:
--

 Summary: change partition pruning request to metastore to use list 
instead of set
 Key: HIVE-6193
 URL: https://issues.apache.org/jira/browse/HIVE-6193
 Project: Hive
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
Priority: Trivial


Change partition pruning request to metastore to use list instead of set.
Set is unwieldy w.r.t. compat, better get rid of it before API in this form was 
ever shipped.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6182) LDAP Authentication errors need to be more informative

2014-01-13 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13870089#comment-13870089
 ] 

Xuefu Zhang commented on HIVE-6182:
---

+1

 LDAP Authentication errors need to be more informative
 --

 Key: HIVE-6182
 URL: https://issues.apache.org/jira/browse/HIVE-6182
 Project: Hive
  Issue Type: Improvement
  Components: Authentication
Affects Versions: 0.13.0
Reporter: Szehon Ho
Assignee: Szehon Ho
 Attachments: HIVE-6182.patch


 There are a host of errors that can happen when logging into an LDAP-enabled 
 Hive-server2 from beeline.  But for any error there is only a generic log 
 message:
 {code}
 SASL negotiation failure
 javax.security.sasl.SaslException: PLAIN auth failed: Error validating LDAP 
 user
   at 
 org.apache.hadoop.security.SaslPlainServer.evaluateResponse(SaslPlainServer.java:108)
   at 
 org.apache.thrift.transport.TSaslTransport$SaslParticipant.evaluateChallengeOrRespons
 {code}
 And on Beeline side there is only an even more unhelpful message:
 {code}
 Error: Invalid URL: jdbc:hive2://localhost:1/default (state=08S01,code=0)
 {code}
 It would be good to print out the underlying error message at least in the 
 log, if not in Beeline, but today it is swallowed. This is bad because the 
 underlying message is the most important part, carrying the error codes shown 
 here: [LDAP error code|https://wiki.servicenow.com/index.php?title=LDAP_Error_Codes]. 
 Beeline seems to throw that exception for any error during connection, 
 authentication or otherwise.
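
A minimal sketch of the fix direction (not the attached patch): keep the 
directory server's message, which carries the LDAP error code, as the cause 
instead of swallowing it. bindToLdap() is a stub standing in for the real bind.
{code}
import javax.naming.NamingException;
import javax.security.sasl.SaslException;

public class LdapErrorSurfacing {
  static void validate(String user, String password) throws SaslException {
    try {
      bindToLdap(user, password);  // placeholder for the real LDAP bind
    } catch (NamingException e) {
      // Propagate the directory server's message (with its LDAP error code)
      // instead of replacing it with a generic "Error validating LDAP user".
      throw new SaslException("Error validating LDAP user: " + e.getMessage(), e);
    }
  }

  private static void bindToLdap(String user, String password) throws NamingException {
    throw new NamingException("[LDAP: error code 49 - Invalid Credentials]");  // stub
  }
}
{code}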



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-5951) improve performance of adding partitions from client

2014-01-13 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-5951:
---

Attachment: HIVE-5951.07.patch

Updated some tests; renamed the thrift method to something better (in another JIRA 
I am adding more methods with the request-response pattern, so it's probably a good 
idea to call them all ..._req rather than ...2).

 improve performance of adding partitions from client
 

 Key: HIVE-5951
 URL: https://issues.apache.org/jira/browse/HIVE-5951
 Project: Hive
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-5951.01.patch, HIVE-5951.02.patch, 
 HIVE-5951.03.patch, HIVE-5951.04.patch, HIVE-5951.05.patch, 
 HIVE-5951.07.patch, HIVE-5951.nogen.patch, HIVE-5951.nogen.patch, 
 HIVE-5951.nogen.patch, HIVE-5951.nogen.patch, HIVE-5951.patch


 Adding partitions to the metastore is currently very inefficient. There are small 
 things like, for the !ifNotExists case, DDLSemanticAnalyzer fetching the full 
 partition object for every spec (which is a network call to the metastore) and 
 then discarding it instantly; there's also the general problem that too much 
 processing is done on the client side. DDLSA should analyze the query and make 
 one call to the metastore (or maybe a set of batched calls if there are too many 
 partitions in the command); the metastore should then figure things out and 
 insert in batch.
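
A minimal sketch of the request-shaping idea, with a hypothetical MetastoreClient 
interface standing in for the real thrift API: ship the partition specs to the 
metastore in a few batched calls instead of one round trip per spec.
{code}
import java.util.ArrayList;
import java.util.List;

public class BatchedAddPartitions {
  // Hypothetical client: one RPC that adds many partitions at once.
  interface MetastoreClient {
    void addPartitions(String table, List<String> partSpecs, boolean ifNotExists);
  }

  // Instead of fetching and validating each partition on the client, send the specs
  // in batches and let the metastore resolve existence and insert in bulk.
  static void addAll(MetastoreClient client, String table, List<String> specs, int batchSize) {
    for (int i = 0; i < specs.size(); i += batchSize) {
      List<String> batch = new ArrayList<>(specs.subList(i, Math.min(i + batchSize, specs.size())));
      client.addPartitions(table, batch, true /* ifNotExists */);
    }
  }
}
{code}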



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Re: hive unit test report question

2014-01-13 Thread Szehon Ho
Hi Shanyu,

Are you running in /itests?  The unit tests are in there, and are not run
if you are running from the root.

Thanks
Szehon


On Mon, Jan 13, 2014 at 1:59 PM, Shanyu Zhao shz...@microsoft.com wrote:

 Hi,

 I was trying to build hive trunk, run all unit tests and generate reports,
 but I'm not sure what's the correct command line. I was using:
 mvn clean install -Phadoop-2 -DskipTests
 mvn test surefire-report:report -Phadoop-2
 But the reports in the root folder and several other projects (such as
 metastore) are empty with no test results. And I couldn't find a summary
 page for all unit tests.

 I was trying to avoid mvn site because it seems to take forever to
 finish. Am I using the correct commands? How can I get a report like the
 one in the precommit report:
 http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/827/testReport/
 ?

 I really appreciate your help!

 Shanyu



[jira] [Updated] (HIVE-6193) change partition pruning request to metastore to use list instead of set

2014-01-13 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-6193:
---

Fix Version/s: 0.13.0

 change partition pruning request to metastore to use list instead of set
 

 Key: HIVE-6193
 URL: https://issues.apache.org/jira/browse/HIVE-6193
 Project: Hive
  Issue Type: Improvement
Affects Versions: 0.13.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
Priority: Trivial
 Fix For: 0.13.0


 Change partition pruning request to metastore to use list instead of set.
 Set is unwieldy w.r.t. compat, better get rid of it before API in this form 
 was ever shipped.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6193) change partition pruning request to metastore to use list instead of set

2014-01-13 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-6193:
---

Affects Version/s: 0.13.0

 change partition pruning request to metastore to use list instead of set
 

 Key: HIVE-6193
 URL: https://issues.apache.org/jira/browse/HIVE-6193
 Project: Hive
  Issue Type: Improvement
Affects Versions: 0.13.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
Priority: Trivial
 Fix For: 0.13.0


 Change partition pruning request to metastore to use list instead of set.
 Set is unwieldy w.r.t. compat, better get rid of it before API in this form 
 was ever shipped.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6104) Join-key logging in join operator

2014-01-13 Thread Eric Hanson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13870098#comment-13870098
 ] 

Eric Hanson commented on HIVE-6104:
---

Even though it is short, if you put the patch on Review Board that'll make it 
easier to review.

 Join-key logging in join operator
 -

 Key: HIVE-6104
 URL: https://issues.apache.org/jira/browse/HIVE-6104
 Project: Hive
  Issue Type: Bug
  Components: Diagnosability
Affects Versions: 0.11.0, 0.12.0, 0.13.0
Reporter: Steven Wong
Assignee: Steven Wong
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-6104.patch


 JoinOperator.processOp logs lines like "table 0 has x rows for join key 
 \[foo\]". It is supposed to log after x rows for x = i, 2i, 4i, ... However, 
 it has never worked completely:
 * In 0.11.0 and before, it logs after i rows and not after i rows, because 
 nextSz is not properly updated.
 * In 0.12.0, HIVE-4960 partially fixed that but x fails to be reset when the 
 alias (tag) changes.
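
A minimal sketch of the intended behavior (not the patch itself): log at i, 2i, 
4i, ... rows and reset both the row counter and the threshold whenever the alias 
(tag) changes.
{code}
public class JoinRowLogSketch {
  private static final long INTERVAL = 1000;  // "i" in the description above
  private long nextSz = INTERVAL;
  private long rowCount = 0;
  private byte currentTag = -1;

  void onRow(byte tag, Object key) {
    if (tag != currentTag) {    // alias (tag) changed: reset both counters
      currentTag = tag;
      rowCount = 0;
      nextSz = INTERVAL;
    }
    rowCount++;
    if (rowCount == nextSz) {   // fires at i, 2i, 4i, ... rows
      System.out.println("table " + tag + " has " + rowCount + " rows for join key [" + key + "]");
      nextSz *= 2;
    }
  }
}
{code}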



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6193) change partition pruning request to metastore to use list instead of set

2014-01-13 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-6193:
---

Attachment: HIVE-6193.patch

Really trivial patch; the vast majority of the changes are generated code.

 change partition pruning request to metastore to use list instead of set
 

 Key: HIVE-6193
 URL: https://issues.apache.org/jira/browse/HIVE-6193
 Project: Hive
  Issue Type: Improvement
Affects Versions: 0.13.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
Priority: Trivial
 Fix For: 0.13.0

 Attachments: HIVE-6193.patch


 Change partition pruning request to metastore to use list instead of set.
 Set is unwieldy w.r.t. compat, better get rid of it before API in this form 
 was ever shipped.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6193) change partition pruning request to metastore to use list instead of set

2014-01-13 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-6193:
---

Status: Patch Available  (was: Open)

 change partition pruning request to metastore to use list instead of set
 

 Key: HIVE-6193
 URL: https://issues.apache.org/jira/browse/HIVE-6193
 Project: Hive
  Issue Type: Improvement
Affects Versions: 0.13.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
Priority: Trivial
 Fix For: 0.13.0

 Attachments: HIVE-6193.patch


 Change partition pruning request to metastore to use list instead of set.
 Set is unwieldy w.r.t. compat, better get rid of it before API in this form 
 was ever shipped.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6193) change partition pruning request to metastore to use list instead of set

2014-01-13 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13870106#comment-13870106
 ] 

Sergey Shelukhin commented on HIVE-6193:


[~ashutoshc] can you take a quick look? I'd like to change this in Hive 13 
(before we ship this API, so that backward compat is not a concern)

 change partition pruning request to metastore to use list instead of set
 

 Key: HIVE-6193
 URL: https://issues.apache.org/jira/browse/HIVE-6193
 Project: Hive
  Issue Type: Improvement
Affects Versions: 0.13.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
Priority: Trivial
 Fix For: 0.13.0

 Attachments: HIVE-6193.patch


 Change partition pruning request to metastore to use list instead of set.
 Set is unwieldy w.r.t. compat, better get rid of it before API in this form 
 was ever shipped.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6185) DDLTask is inconsistent in creating a table and adding a partition when dealing with location

2014-01-13 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13870120#comment-13870120
 ] 

Hive QA commented on HIVE-6185:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12622690/HIVE-6185.3.patch

{color:green}SUCCESS:{color} +1 4924 tests passed

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/889/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/889/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12622690

 DDLTask is inconsistent in creating a table and adding a partition when 
 dealing with location
 -

 Key: HIVE-6185
 URL: https://issues.apache.org/jira/browse/HIVE-6185
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.12.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Attachments: HIVE-6185.1.patch, HIVE-6185.2.patch, HIVE-6185.3.patch, 
 HIVE-6185.patch, HIVE-6185.patch


 When creating a table, Hive uses URI to represent location:
 {code}
 if (crtTbl.getLocation() != null) {
   tbl.setDataLocation(new Path(crtTbl.getLocation()).toUri());
 }
 {code}
 When adding a partition, Hive uses Path to represent location:
 {code}
   // set partition path relative to table
   db.createPartition(tbl, addPartitionDesc.getPartSpec(), new Path(tbl
 .getPath(), addPartitionDesc.getLocation()), 
 addPartitionDesc.getPartParams(),
 addPartitionDesc.getInputFormat(),
 addPartitionDesc.getOutputFormat(),
 addPartitionDesc.getNumBuckets(),
 addPartitionDesc.getCols(),
 addPartitionDesc.getSerializationLib(),
 addPartitionDesc.getSerdeParams(),
 addPartitionDesc.getBucketCols(),
 addPartitionDesc.getSortCols());
 {code}
 This disparity means the values stored in the metastore are encoded 
 differently, causing problems with special characters, as demonstrated in 
 HIVE-5446. As a result, the code dealing with table locations differs from the 
 code dealing with partition locations, creating a maintenance burden.
 We need to standardize on Path, in line with the other Path-related cleanup 
 efforts.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6173) Beeline doesn't accept --hiveconf option as Hive CLI does

2014-01-13 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-6173:
--

Status: Patch Available  (was: Open)

 Beeline doesn't accept --hiveconf option as Hive CLI does
 -

 Key: HIVE-6173
 URL: https://issues.apache.org/jira/browse/HIVE-6173
 Project: Hive
  Issue Type: Improvement
  Components: CLI
Affects Versions: 0.12.0, 0.11.0, 0.10.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Attachments: HIVE-6173.patch


 {code}
  beeline -u jdbc:hive2:// --hiveconf a=b
 Usage: java org.apache.hive.cli.beeline.BeeLine 
 {code}
 Since Beeline is replacing Hive CLI, it should support this command line 
 option as well.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6173) Beeline doesn't accept --hiveconf option as Hive CLI does

2014-01-13 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-6173:
--

Attachment: HIVE-6173.patch

 Beeline doesn't accept --hiveconf option as Hive CLI does
 -

 Key: HIVE-6173
 URL: https://issues.apache.org/jira/browse/HIVE-6173
 Project: Hive
  Issue Type: Improvement
  Components: CLI
Affects Versions: 0.10.0, 0.11.0, 0.12.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Attachments: HIVE-6173.patch


 {code}
  beeline -u jdbc:hive2:// --hiveconf a=b
 Usage: java org.apache.hive.cli.beeline.BeeLine 
 {code}
 Since Beeline is replacing Hive CLI, it should support this command line 
 option as well.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Review Request 16836: HIVE-6104 - Join-key logging in join operator

2014-01-13 Thread Steven Wong

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/16836/
---

Review request for hive.


Bugs: HIVE-6104
https://issues.apache.org/jira/browse/HIVE-6104


Repository: hive-git


Description
---

See https://issues.apache.org/jira/browse/HIVE-6104


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/exec/CommonJoinOperator.java 5ee16f7 
  ql/src/java/org/apache/hadoop/hive/ql/exec/JoinOperator.java 3e17ae7 

Diff: https://reviews.apache.org/r/16836/diff/


Testing
---

Manually ran a join and checked the log file.


Thanks,

Steven Wong



[jira] [Commented] (HIVE-6104) Join-key logging in join operator

2014-01-13 Thread Steven Wong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13870145#comment-13870145
 ] 

Steven Wong commented on HIVE-6104:
---

https://reviews.apache.org/r/16836/ (my bad for not having this sooner)

 Join-key logging in join operator
 -

 Key: HIVE-6104
 URL: https://issues.apache.org/jira/browse/HIVE-6104
 Project: Hive
  Issue Type: Bug
  Components: Diagnosability
Affects Versions: 0.11.0, 0.12.0, 0.13.0
Reporter: Steven Wong
Assignee: Steven Wong
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-6104.patch


 JoinOperator.processOp logs lines like "table 0 has x rows for join key 
 \[foo\]". It is supposed to log after x rows for x = i, 2i, 4i, ... However, 
 it has never worked completely:
 * In 0.11.0 and before, it logs after i rows and not after i rows, because 
 nextSz is not properly updated.
 * In 0.12.0, HIVE-4960 partially fixed that but x fails to be reset when the 
 alias (tag) changes.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

