[jira] [Updated] (HIVE-4392) Illogical InvalidObjectException throwed when use mulit aggregate functions with star columns

2013-05-07 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-4392:
---

   Resolution: Fixed
Fix Version/s: 0.12.0
   Status: Resolved  (was: Patch Available)

> Illogical InvalidObjectException throwed when use mulit aggregate functions 
> with star columns 
> --
>
> Key: HIVE-4392
> URL: https://issues.apache.org/jira/browse/HIVE-4392
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
> Environment: Apache Hadoop 0.20.1
> Apache Hive Trunk
>Reporter: caofangkun
>Assignee: Navis
>Priority: Minor
> Fix For: 0.12.0
>
> Attachments: HIVE-4392.D10431.1.patch, HIVE-4392.D10431.2.patch, 
> HIVE-4392.D10431.3.patch, HIVE-4392.D10431.4.patch, HIVE-4392.D10431.5.patch
>
>
> For Example:
> hive (default)> create table liza_1 as 
>   > select *, sum(key), sum(value) 
>   > from new_src;
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=<number>
> Starting Job = job_201304191025_0003, Tracking URL = 
> http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0003
> Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job  -kill 
> job_201304191025_0003
> Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 1
> 2013-04-22 11:09:28,017 Stage-1 map = 0%,  reduce = 0%
> 2013-04-22 11:09:34,054 Stage-1 map = 0%,  reduce = 100%
> 2013-04-22 11:09:37,074 Stage-1 map = 100%,  reduce = 100%
> Ended Job = job_201304191025_0003
> Moving data to: hdfs://hd17-vm5:9101/user/zongren/hive/liza_1
> FAILED: Error in metadata: InvalidObjectException(message:liza_1 is not a 
> valid object name)
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask
> MapReduce Jobs Launched: 
> Job 0: Reduce: 1   HDFS Read: 0 HDFS Write: 12 SUCCESS
> Total MapReduce CPU Time Spent: 0 msec
> hive (default)> create table liza_1 as 
>   > select *, sum(key), sum(value) 
>   > from new_src   
>   > group by key, value;
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks not specified. Estimated from input data size: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=<number>
> Starting Job = job_201304191025_0004, Tracking URL = 
> http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0004
> Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job  -kill 
> job_201304191025_0004
> Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 1
> 2013-04-22 11:11:58,945 Stage-1 map = 0%,  reduce = 0%
> 2013-04-22 11:12:01,964 Stage-1 map = 0%,  reduce = 100%
> 2013-04-22 11:12:04,982 Stage-1 map = 100%,  reduce = 100%
> Ended Job = job_201304191025_0004
> Moving data to: hdfs://hd17-vm5:9101/user/zongren/hive/liza_1
> FAILED: Error in metadata: InvalidObjectException(message:liza_1 is not a 
> valid object name)
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask
> MapReduce Jobs Launched: 
> Job 0: Reduce: 1   HDFS Read: 0 HDFS Write: 0 SUCCESS
> Total MapReduce CPU Time Spent: 0 msec
> But the following two queries work:
> hive (default)> create table liza_1 as select * from new_src;
> Total MapReduce jobs = 3
> Launching Job 1 out of 3
> Number of reduce tasks is set to 0 since there's no reduce operator
> Starting Job = job_201304191025_0006, Tracking URL = 
> http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0006
> Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job  -kill 
> job_201304191025_0006
> Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
> 2013-04-22 11:15:00,681 Stage-1 map = 0%,  reduce = 0%
> 2013-04-22 11:15:03,697 Stage-1 map = 100%,  reduce = 100%
> Ended Job = job_201304191025_0006
> Stage-4 is selected by condition resolver.
> Stage-3 is filtered out by condition resolver.
> Stage-5 is filtered out by condition resolver.
> Moving data to: 
> hdfs://hd17-vm5:9101/user/zongren/hive-scratchdir/hive_2013-04-22_11-14-54_632_6709035018023861094/-ext-10001
> Moving data to: hdfs://hd17-vm5:9101/user/zongren/hive/liza_1
> Table default.liza_1 stats: [num_partitions: 0, num_files: 0, num_rows: 0, 
> total_size: 0, raw_data_size: 0]

[jira] [Updated] (HIVE-4392) Illogical InvalidObjectException throwed when use mulit aggregate functions with star columns

2013-05-07 Thread caofangkun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

caofangkun updated HIVE-4392:
-

Description: 
For Example:

hive (default)> create table liza_1 as 
  > select *, sum(key), sum(value) 
  > from new_src;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201304191025_0003, Tracking URL = 
http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0003
Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job  -kill 
job_201304191025_0003
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 1
2013-04-22 11:09:28,017 Stage-1 map = 0%,  reduce = 0%
2013-04-22 11:09:34,054 Stage-1 map = 0%,  reduce = 100%
2013-04-22 11:09:37,074 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201304191025_0003
Moving data to: hdfs://hd17-vm5:9101/user/zongren/hive/liza_1
FAILED: Error in metadata: InvalidObjectException(message:liza_1 is not a valid 
object name)
FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.DDLTask
MapReduce Jobs Launched: 
Job 0: Reduce: 1   HDFS Read: 0 HDFS Write: 12 SUCCESS
Total MapReduce CPU Time Spent: 0 msec

hive (default)> create table liza_1 as 
  > select *, sum(key), sum(value) 
  > from new_src   
  > group by key, value;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201304191025_0004, Tracking URL = 
http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0004
Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job  -kill 
job_201304191025_0004
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 1
2013-04-22 11:11:58,945 Stage-1 map = 0%,  reduce = 0%
2013-04-22 11:12:01,964 Stage-1 map = 0%,  reduce = 100%
2013-04-22 11:12:04,982 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201304191025_0004
Moving data to: hdfs://hd17-vm5:9101/user/zongren/hive/liza_1
FAILED: Error in metadata: InvalidObjectException(message:liza_1 is not a valid 
object name)
FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.DDLTask
MapReduce Jobs Launched: 
Job 0: Reduce: 1   HDFS Read: 0 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 0 msec

But the following two queries work:
hive (default)> create table liza_1 as select * from new_src;
Total MapReduce jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201304191025_0006, Tracking URL = 
http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0006
Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job  -kill 
job_201304191025_0006
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2013-04-22 11:15:00,681 Stage-1 map = 0%,  reduce = 0%
2013-04-22 11:15:03,697 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201304191025_0006
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to: 
hdfs://hd17-vm5:9101/user/zongren/hive-scratchdir/hive_2013-04-22_11-14-54_632_6709035018023861094/-ext-10001
Moving data to: hdfs://hd17-vm5:9101/user/zongren/hive/liza_1
Table default.liza_1 stats: [num_partitions: 0, num_files: 0, num_rows: 0, 
total_size: 0, raw_data_size: 0]
MapReduce Jobs Launched: 
Job 0:  HDFS Read: 0 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
Time taken: 9.576 seconds

hive (default)> create table liza_1 as
  > select sum (key), sum(value) 
  > from new_test;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201304191025_0008, Tracking URL = 
http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0008
Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job  -kill 
job_201304191025_0008
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 1
2013-04
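
The failing pattern in the examples above is CTAS combining `*` with aggregate expressions, which leaves the aggregates with generated column names. A plausible workaround (a sketch only; the `key`/`value` columns exist in the examples, but the aliases are assumptions, not taken from the issue) is to alias each aggregate explicitly so the target table gets ordinary column names:

```sql
-- Sketch of a workaround, not from the issue itself: give each
-- aggregate an explicit alias so the CTAS target table receives
-- user-supplied column names instead of generated ones.
CREATE TABLE liza_1 AS
SELECT key,
       value,
       SUM(key)   AS key_sum,
       SUM(value) AS value_sum
FROM new_src
GROUP BY key, value;
```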

[jira] [Updated] (HIVE-4392) Illogical InvalidObjectException throwed when use mulit aggregate functions with star columns

2013-05-02 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-4392:
--

Attachment: HIVE-4392.D10431.5.patch

navis updated the revision "HIVE-4392 [jira] Illogical InvalidObjectException 
throwed when use mulit aggregate functions with star columns".

  Added tests

Reviewers: ashutoshc, JIRA

REVISION DETAIL
  https://reviews.facebook.net/D10431

CHANGE SINCE LAST DIFF
  https://reviews.facebook.net/D10431?vs=33177&id=33285#toc

AFFECTED FILES
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
  metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java
  ql/src/java/org/apache/hadoop/hive/ql/parse/PTFTranslator.java
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
  ql/src/test/queries/clientpositive/ctas_colname.q
  ql/src/test/results/clientpositive/ctas_colname.q.out

To: JIRA, ashutoshc, navis
Cc: hbutani



[jira] [Updated] (HIVE-4392) Illogical InvalidObjectException throwed when use mulit aggregate functions with star columns

2013-04-29 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-4392:
--

Attachment: HIVE-4392.D10431.4.patch

navis updated the revision "HIVE-4392 [jira] Illogical InvalidObjectException 
throwed when use mulit aggregate functions with star columns".

  Addressed comments

Reviewers: ashutoshc, JIRA

REVISION DETAIL
  https://reviews.facebook.net/D10431

CHANGE SINCE LAST DIFF
  https://reviews.facebook.net/D10431?vs=32847&id=33177#toc

AFFECTED FILES
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
  metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java
  ql/src/java/org/apache/hadoop/hive/ql/parse/PTFTranslator.java
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
  ql/src/test/queries/clientpositive/ctas_colname.q
  ql/src/test/results/clientpositive/ctas_colname.q.out

To: JIRA, ashutoshc, navis
Cc: hbutani



[jira] [Updated] (HIVE-4392) Illogical InvalidObjectException throwed when use mulit aggregate functions with star columns

2013-04-24 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-4392:


Status: Patch Available  (was: Open)


[jira] [Updated] (HIVE-4392) Illogical InvalidObjectException throwed when use mulit aggregate functions with star columns

2013-04-24 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-4392:
--

Attachment: HIVE-4392.D10431.3.patch

navis updated the revision "HIVE-4392 [jira] Illogical InvalidObjectException 
throwed when use mulit aggregate functions with star columns".

  Changed window/ptf columns to hidden virtual column

Reviewers: ashutoshc, JIRA

REVISION DETAIL
  https://reviews.facebook.net/D10431

CHANGE SINCE LAST DIFF
  https://reviews.facebook.net/D10431?vs=32715&id=32847#toc

AFFECTED FILES
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
  metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java
  ql/src/java/org/apache/hadoop/hive/ql/parse/PTFTranslator.java
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
  ql/src/test/queries/clientpositive/ctas_colname.q
  ql/src/test/results/clientpositive/ctas_colname.q.out

To: JIRA, ashutoshc, navis
Cc: hbutani
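
This revision notes that window/PTF output columns were changed to hidden virtual columns, and it adds the ctas_colname.q test. A hypothetical query of the shape that test presumably exercises (table and column names are assumptions for illustration): CTAS over an unaliased windowing expression, whose output column has no user-supplied name and could previously fail metastore column-name validation.

```sql
-- Hypothetical illustration of the query shape the fix targets:
-- the windowing expression below carries no alias, so its column
-- name in the CTAS target must be generated by Hive.
CREATE TABLE summary AS
SELECT key,
       value,
       ROW_NUMBER() OVER (PARTITION BY key ORDER BY value)
FROM src;
```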



[jira] [Updated] (HIVE-4392) Illogical InvalidObjectException throwed when use mulit aggregate functions with star columns

2013-04-23 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-4392:
---

Status: Open  (was: Patch Available)

comments.

> Illogical InvalidObjectException throwed when use mulit aggregate functions 
> with star columns 
> --
>
> Key: HIVE-4392
> URL: https://issues.apache.org/jira/browse/HIVE-4392
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
> Environment: Apache Hadoop 0.20.1
> Apache Hive Trunk
>Reporter: caofangkun
>Assignee: Navis
>Priority: Minor
> Attachments: HIVE-4392.D10431.1.patch, HIVE-4392.D10431.2.patch
>
>
> For Example:
> hive (default)> create table liza_1 as 
>   > select *, sum(key), sum(value) 
>   > from new_src;
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=
> Starting Job = job_201304191025_0003, Tracking URL = 
> http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0003
> Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job  -kill 
> job_201304191025_0003
> Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 
> 1
> 2013-04-22 11:09:28,017 Stage-1 map = 0%,  reduce = 0%
> 2013-04-22 11:09:34,054 Stage-1 map = 0%,  reduce = 100%
> 2013-04-22 11:09:37,074 Stage-1 map = 100%,  reduce = 100%
> Ended Job = job_201304191025_0003
> Moving data to: hdfs://hd17-vm5:9101/user/zongren/hive/liza_1
> FAILED: Error in metadata: InvalidObjectException(message:liza_1 is not a 
> valid object name)
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask
> MapReduce Jobs Launched: 
> Job 0: Reduce: 1   HDFS Read: 0 HDFS Write: 12 SUCCESS
> Total MapReduce CPU Time Spent: 0 msec
> hive (default)> create table liza_1 as 
>   > select *, sum(key), sum(value) 
>   > from new_src   
>   > group by key, value;
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks not specified. Estimated from input data size: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=
> Starting Job = job_201304191025_0004, Tracking URL = 
> http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0004
> Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job  -kill 
> job_201304191025_0004
> Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 
> 1
> 2013-04-22 11:11:58,945 Stage-1 map = 0%,  reduce = 0%
> 2013-04-22 11:12:01,964 Stage-1 map = 0%,  reduce = 100%
> 2013-04-22 11:12:04,982 Stage-1 map = 100%,  reduce = 100%
> Ended Job = job_201304191025_0004
> Moving data to: hdfs://hd17-vm5:9101/user/zongren/hive/liza_1
> FAILED: Error in metadata: InvalidObjectException(message:liza_1 is not a 
> valid object name)
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask
> MapReduce Jobs Launched: 
> Job 0: Reduce: 1   HDFS Read: 0 HDFS Write: 0 SUCCESS
> Total MapReduce CPU Time Spent: 0 msec
> But the following two queries work:
> hive (default)> create table liza_1 as select * from new_src;
> Total MapReduce jobs = 3
> Launching Job 1 out of 3
> Number of reduce tasks is set to 0 since there's no reduce operator
> Starting Job = job_201304191025_0006, Tracking URL = 
> http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0006
> Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job  -kill 
> job_201304191025_0006
> Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
> 2013-04-22 11:15:00,681 Stage-1 map = 0%,  reduce = 0%
> 2013-04-22 11:15:03,697 Stage-1 map = 100%,  reduce = 100%
> Ended Job = job_201304191025_0006
> Stage-4 is selected by condition resolver.
> Stage-3 is filtered out by condition resolver.
> Stage-5 is filtered out by condition resolver.
> Moving data to: 
> hdfs://hd17-vm5:9101/user/zongren/hive-scratchdir/hive_2013-04-22_11-14-54_632_6709035018023861094/-ext-10001
> Moving data to: hdfs://hd17-vm5:9101/user/zongren/hive/liza_1
> Table default.liza_1 stats: [num_partitions: 0, num_files: 0, num_rows: 0, 
> total_size: 0, raw_data_size: 0]
> MapReduce Jobs Launched: 
> Job 0:  HDFS Read: 0 HDFS Write: 0 SUCCESS
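The failing and working queries above point at CTAS column naming: the statement fails only when unaliased aggregate expressions appear alongside star columns. A commonly used workaround, sketched here with hypothetical alias names (`key_sum` and `value_sum` are not from the report), is to alias each aggregate explicitly so the metastore never has to accept a derived column name:

```sql
-- Workaround sketch (assumption: explicit aliases avoid the invalid
-- derived column names that make the metastore reject the new table).
CREATE TABLE liza_1 AS
SELECT key, value,
       sum(key)   AS key_sum,
       sum(value) AS value_sum
FROM new_src
GROUP BY key, value;
```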

[jira] [Updated] (HIVE-4392) Illogical InvalidObjectException throwed when use mulit aggregate functions with star columns

2013-04-22 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-4392:
--

Attachment: HIVE-4392.D10431.2.patch

navis updated the revision "HIVE-4392 [jira] Illogical InvalidObjectException 
throwed when use mulit aggregate functions with star columns".

  Added test cases

Reviewers: JIRA

REVISION DETAIL
  https://reviews.facebook.net/D10431

CHANGE SINCE LAST DIFF
  https://reviews.facebook.net/D10431?vs=32607&id=32715#toc

AFFECTED FILES
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
  metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
  ql/src/test/queries/clientpositive/ctas_colname.q
  ql/src/test/results/clientpositive/ctas_colname.q.out
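Given the affected files, the fix presumably touches column-name generation in SemanticAnalyzer.java and name validation in the metastore classes, with ctas_colname.q exercising CTAS column naming. Assuming the patch makes Hive derive metastore-safe names for unaliased expressions (the exact generated names are an assumption), the originally failing query shape should succeed:

```sql
-- Sketch of the query shape the fix targets (table name is hypothetical):
-- CTAS with star columns plus unaliased aggregates, which previously
-- failed with InvalidObjectException during table creation.
CREATE TABLE liza_2 AS
SELECT *, sum(key), sum(value)
FROM new_src
GROUP BY key, value;
```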

To: JIRA, navis



[jira] [Updated] (HIVE-4392) Illogical InvalidObjectException throwed when use mulit aggregate functions with star columns

2013-04-22 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-4392:


Status: Patch Available  (was: Open)


[jira] [Updated] (HIVE-4392) Illogical InvalidObjectException throwed when use mulit aggregate functions with star columns

2013-04-22 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-4392:
--

Attachment: HIVE-4392.D10431.1.patch

navis requested code review of "HIVE-4392 [jira] Illogical 
InvalidObjectException throwed when use mulit aggregate functions with star 
columns".

Reviewers: JIRA

HIVE-4392 Illogical InvalidObjectException throwed when use multi aggregate 
functions with star columns


[jira] [Updated] (HIVE-4392) Illogical InvalidObjectException throwed when use mulit aggregate functions with star columns

2013-04-21 Thread caofangkun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

caofangkun updated HIVE-4392:
-

Description: 
For Example:

hive (default)> create table liza_1 as 
  > select *, sum(key), sum(value) 
  > from new_src;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=
In order to set a constant number of reducers:
  set mapred.reduce.tasks=
Starting Job = job_201304191025_0003, Tracking URL = 
http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0003
Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job  -kill 
job_201304191025_0003
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 1
2013-04-22 11:09:28,017 Stage-1 map = 0%,  reduce = 0%
2013-04-22 11:09:34,054 Stage-1 map = 0%,  reduce = 100%
2013-04-22 11:09:37,074 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201304191025_0003
Moving data to: hdfs://hd17-vm5:9101/user/zongren/hive/liza_1
FAILED: Error in metadata: InvalidObjectException(message:liza_1 is not a valid 
object name)
FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.DDLTask
MapReduce Jobs Launched: 
Job 0: Reduce: 1   HDFS Read: 0 HDFS Write: 12 SUCCESS
Total MapReduce CPU Time Spent: 0 msec

hive (default)> create table liza_1 as 
  > select *, sum(key), sum(value) 
  > from new_src   
  > group by key, value;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=
In order to set a constant number of reducers:
  set mapred.reduce.tasks=
Starting Job = job_201304191025_0004, Tracking URL = 
http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0004
Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job  -kill 
job_201304191025_0004
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 1
2013-04-22 11:11:58,945 Stage-1 map = 0%,  reduce = 0%
2013-04-22 11:12:01,964 Stage-1 map = 0%,  reduce = 100%
2013-04-22 11:12:04,982 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201304191025_0004
Moving data to: hdfs://hd17-vm5:9101/user/zongren/hive/liza_1
FAILED: Error in metadata: InvalidObjectException(message:liza_1 is not a valid 
object name)
FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.DDLTask
MapReduce Jobs Launched: 
Job 0: Reduce: 1   HDFS Read: 0 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 0 msec

But the following two queries work:
hive (default)> create table liza_1 as select * from new_src;
Total MapReduce jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201304191025_0006, Tracking URL = 
http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0006
Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job  -kill 
job_201304191025_0006
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2013-04-22 11:15:00,681 Stage-1 map = 0%,  reduce = 0%
2013-04-22 11:15:03,697 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201304191025_0006
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to: 
hdfs://hd17-vm5:9101/user/zongren/hive-scratchdir/hive_2013-04-22_11-14-54_632_6709035018023861094/-ext-10001
Moving data to: hdfs://hd17-vm5:9101/user/zongren/hive/liza_1
Table default.liza_1 stats: [num_partitions: 0, num_files: 0, num_rows: 0, 
total_size: 0, raw_data_size: 0]
MapReduce Jobs Launched: 
Job 0:  HDFS Read: 0 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
Time taken: 9.576 seconds

hive (default)> create table liza_1 as
  > select sum (key), sum(value) 
  > from new_test;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=
In order to set a constant number of reducers:
  set mapred.reduce.tasks=
Starting Job = job_201304191025_0008, Tracking URL = 
http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0008
Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job  -kill 
job_201304191025_0008
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 1
2013-04