connector ODBC hive (cloudera) and IBM cognos

2012-07-29 Thread Matouk Iftissen


From: Matouk Iftissen
Sent: Friday, July 27, 2012 16:32
To: 'dev@hive.apache.org'
Subject: connector ODBC hive (cloudera) and IBM cognos

Hello everyone,
Is there an ODBC driver developed by Cloudera or others (other than the
ODBC driver developed for Tableau, which does not work on Windows Server
2008 R2 64-bit) that can be used with the IBM Cognos BI server (8.4) on
Windows Server 2008 R2 64-bit?
Thanks.
Best regards.




Re: non map-reduce for simple queries

2012-07-29 Thread Namit Jain
I like Navis's idea. The timeout can be configurable.


On 7/29/12 6:47 AM, Navis 류승우 navis@nexr.com wrote:

I was thinking of a timeout for fetching, 2000 msec for example. How about
that?

On Sunday, July 29, 2012, Edward Capriolo edlinuxg...@gmail.com wrote:
 If the where condition is too complex, selecting specific columns still
 seems simple enough and useful.

 On Saturday, July 28, 2012, Namit Jain nj...@fb.com wrote:
 Currently, hive does not launch map-reduce jobs for the following
 queries:

 select * from T where condition on partition columns (limit n)?

 This behavior is not configurable, and cannot be altered.

 HIVE-2925 wants to extend this behavior. The goal is not to spawn
 map-reduce jobs for the following queries:

 Select expr from T where any condition (limit n)?

 It is currently controlled by one parameter,
 hive.aggressive.fetch.task.conversion, based on which it is decided
 whether to spawn map-reduce jobs for queries of the above type. Note
 that this can be beneficial for certain types of queries, since it
 avoids the expensive step of spawning map-reduce. However, it can be
 pretty expensive for other types of queries: selecting a very large
 number of rows, or a query with a very selective filter (which is
 satisfied by a very small number of rows, and therefore involves
 scanning a very large table), etc. The user does not have any control
 over this. Note that it cannot be done by hooks: the pre-semantic hooks
 do not have enough information (type of the query, inputs, etc.), and
 it is too late to do anything in the post-semantic hook (the query plan
 has already been altered).

 I would like to propose the following configuration parameters to
 control this behavior.

 hive.fetch.task.conversion: true, false, auto

 If the value is true, then all queries with only selects and filters
 will be converted.
 If the value is false, then no query will be converted.
 If the value is auto (which should be the default behavior), two
 additional parameters control the semantics:

 hive.fetch.task.auto.limit.threshold      --- integer value X1
 hive.fetch.task.auto.inputsize.threshold  --- integer value X2

 If either the query has a limit lower than X1, or the input size is
 smaller than X2, queries containing only filters and selects will be
 converted to not use map-reduce jobs.


 Comments…

 -namit
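
For illustration only, a minimal self-contained sketch of how the proposed
auto mode could decide. Every name below is invented for the example, and
java.util.Properties stands in for HiveConf; none of this is actual Hive
code.

import java.util.Properties;

class FetchTaskDecisionSketch {
    // Stand-in for what the planner would know about a query.
    static class QueryInfo {
        final boolean selectFilterOnly; // only select exprs and filters
        final long limit;               // LIMIT n, or -1 if absent
        final long inputSizeBytes;      // estimated input size of the scan

        QueryInfo(boolean selectFilterOnly, long limit, long inputSizeBytes) {
            this.selectFilterOnly = selectFilterOnly;
            this.limit = limit;
            this.inputSizeBytes = inputSizeBytes;
        }
    }

    static boolean shouldConvert(Properties conf, QueryInfo q) {
        String mode = conf.getProperty("hive.fetch.task.conversion", "auto");
        if (mode.equals("false") || !q.selectFilterOnly) {
            return false;               // never convert, or query too complex
        }
        if (mode.equals("true")) {
            return true;                // convert every qualifying query
        }
        // auto: convert only if limit < X1 or input size < X2
        long x1 = Long.parseLong(
            conf.getProperty("hive.fetch.task.auto.limit.threshold", "100"));
        long x2 = Long.parseLong(
            conf.getProperty("hive.fetch.task.auto.inputsize.threshold", "1048576"));
        return (q.limit >= 0 && q.limit < x1) || q.inputSizeBytes < x2;
    }

    public static void main(String[] args) {
        Properties conf = new Properties();                  // mode = auto
        QueryInfo smallLimit = new QueryInfo(true, 10, 1L << 30);
        QueryInfo bigScan = new QueryInfo(true, -1, 1L << 30);
        System.out.println(shouldConvert(conf, smallLimit)); // true
        System.out.println(shouldConvert(conf, bigScan));    // false
    }
}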







Re: non map-reduce for simple queries

2012-07-29 Thread Namit Jain
This can be a follow-up to HIVE-2925.
Navis, if you want, I can work on it.


On 7/29/12 7:58 PM, Namit Jain nj...@fb.com wrote:

I like Navis's idea. The timeout can be configurable.


On 7/29/12 6:47 AM, Navis 류승우 navis@nexr.com wrote:

I was thinking of a timeout for fetching, 2000 msec for example. How about
that?
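
For illustration only, a sketch of how such a fetch timeout with a
map-reduce fallback could behave, using plain java.util.concurrent; the
names are invented for the example, and the 2000 msec value is the one
suggested above.

import java.util.concurrent.*;

class TimedFetchSketch {
    // Run the direct fetch; if it exceeds the timeout, cancel it and
    // fall back to running the query as a map-reduce job instead.
    static <T> T fetchWithFallback(Callable<T> directFetch,
                                   Callable<T> mapReduceJob,
                                   long timeoutMillis) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<T> f = pool.submit(directFetch);
        try {
            return f.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            f.cancel(true);             // give up on the direct fetch
            return mapReduceJob.call(); // fall back to map-reduce
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        String result = fetchWithFallback(
            () -> { Thread.sleep(5000); return "direct"; }, // too slow
            () -> "map-reduce",
            2000);                       // 2000 msec timeout, as suggested
        System.out.println(result);      // prints "map-reduce"
    }
}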









[jira] [Commented] (HIVE-2845) Add support for index joins in Hive

2012-07-29 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13424559#comment-13424559
 ] 

Namit Jain commented on HIVE-2845:
--

Say there is an index on table A on 'key'.

For a query of the type:

select .. from A join B on A.key=B.key;

the plan can be as follows:

scan B
for every row of B (or a batch of rows in B), look up the value using the
index in A


The basic infrastructure is needed first. A lot of optimizations can be
added later.
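
For illustration only, the shape of that plan with a plain HashMap standing
in for the index on A.key; all names are invented for the example and none
of this is Hive code:

{code}
import java.util.*;

class IndexJoinSketch {
    static class Row {
        final int key; final String value;
        Row(int key, String value) { this.key = key; this.value = value; }
    }

    // Scan B; for each row of B (batching works the same way), probe the
    // index on A.key and emit the joined rows.
    static List<String> indexJoin(Map<Integer, List<Row>> indexOnA, List<Row> tableB) {
        List<String> out = new ArrayList<String>();
        for (Row b : tableB) {                          // scan B
            List<Row> matches = indexOnA.get(b.key);    // index lookup into A
            if (matches == null) continue;              // inner join: skip misses
            for (Row a : matches) {
                out.add(a.value + "," + b.value);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<Integer, List<Row>> indexOnA = new HashMap<Integer, List<Row>>();
        indexOnA.put(1, Arrays.asList(new Row(1, "a1")));
        indexOnA.put(2, Arrays.asList(new Row(2, "a2")));
        List<Row> tableB = Arrays.asList(new Row(1, "b1"), new Row(3, "b3"));
        System.out.println(indexJoin(indexOnA, tableB)); // [a1,b1]
    }
}
{code}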

 Add support for index joins in Hive
 ---

 Key: HIVE-2845
 URL: https://issues.apache.org/jira/browse/HIVE-2845
 Project: Hive
  Issue Type: New Feature
  Components: Indexing, Query Processor
Reporter: Namit Jain
  Labels: indexing, joins, performance

 Hive supports indexes, which are used for filters currently.
 It would be very useful to add support for index-based joins in Hive.
 If 2 tables A and B are being joined, and an index exists on the join key
 of A, B can be scanned (by the mappers), and for each row in B, a lookup
 for the corresponding row in A can be performed.
 This can be very useful for some use cases.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-3254) Reuse RunningJob

2012-07-29 Thread Lianhui Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13424569#comment-13424569
 ] 

Lianhui Wang commented on HIVE-3254:


Yes, I think that can work, but newRj may be null, so you must check for
null: the JobTracker only caches a fixed number of completed jobs' info,
so if the job you are getting has completed, the JT may already have
removed its information.

 Reuse RunningJob 
 -

 Key: HIVE-3254
 URL: https://issues.apache.org/jira/browse/HIVE-3254
 Project: Hive
  Issue Type: Bug
Reporter: binlijin

   private MapRedStats progress(ExecDriverTaskHandle th) throws IOException {
     while (!rj.isComplete()) {
       try {
         Thread.sleep(pullInterval);
       } catch (InterruptedException e) {
       }
       RunningJob newRj = jc.getJob(rj.getJobID());
     }
   }
 Should we reuse the RunningJob? If not, why?
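
For illustration only, the loop above with the null check Lianhui describes;
this is a fragment reusing the snippet's own rj, jc, and pullInterval, not
actual Hive code:

{code}
// If the JobTracker has already evicted the completed job from its
// fixed-size cache, getJob() returns null; keep the last non-null handle.
RunningJob current = rj;
while (!current.isComplete()) {
    try {
        Thread.sleep(pullInterval);
    } catch (InterruptedException e) {
        // interrupted while sleeping; just poll again
    }
    RunningJob newRj = jc.getJob(current.getJobID());
    if (newRj != null) {
        current = newRj;
    }
}
{code}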





Hive-trunk-h0.21 - Build # 1574 - Still Failing

2012-07-29 Thread Apache Jenkins Server
Changes for Build #1571
[namit] HIVE-2101 mapjoin sometimes gives wrong results if there is a filter in 
the on condition
(Navis via namit)


Changes for Build #1572

Changes for Build #1573

Changes for Build #1574



No tests ran.

The Apache Jenkins build system has built Hive-trunk-h0.21 (build #1574)

Status: Still Failing

Check console output at https://builds.apache.org/job/Hive-trunk-h0.21/1574/ to 
view the results.

Build failed in Jenkins: Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false #90

2012-07-29 Thread Apache Jenkins Server
See 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/90/

--
[...truncated 10118 lines...]
 [echo] Project: odbc
 [copy] Warning: 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/odbc/src/conf
 does not exist.

ivy-resolve-test:
 [echo] Project: odbc

ivy-retrieve-test:
 [echo] Project: odbc

compile-test:
 [echo] Project: odbc

create-dirs:
 [echo] Project: serde
 [copy] Warning: 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/serde/src/test/resources
 does not exist.

init:
 [echo] Project: serde

ivy-init-settings:
 [echo] Project: serde

ivy-resolve:
 [echo] Project: serde
[ivy:resolve] :: loading settings :: file = 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/ivy/ivysettings.xml
[ivy:report] Processing 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/90/artifact/hive/build/ivy/resolution-cache/org.apache.hive-hive-serde-default.xml
 to 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/90/artifact/hive/build/ivy/report/org.apache.hive-hive-serde-default.html

ivy-retrieve:
 [echo] Project: serde

dynamic-serde:

compile:
 [echo] Project: serde

ivy-resolve-test:
 [echo] Project: serde

ivy-retrieve-test:
 [echo] Project: serde

compile-test:
 [echo] Project: serde
[javac] Compiling 26 source files to 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/90/artifact/hive/build/serde/test/classes
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.

create-dirs:
 [echo] Project: service
 [copy] Warning: 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/service/src/test/resources
 does not exist.

init:
 [echo] Project: service

ivy-init-settings:
 [echo] Project: service

ivy-resolve:
 [echo] Project: service
[ivy:resolve] :: loading settings :: file = 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/ivy/ivysettings.xml
[ivy:report] Processing 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/90/artifact/hive/build/ivy/resolution-cache/org.apache.hive-hive-service-default.xml
 to 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/90/artifact/hive/build/ivy/report/org.apache.hive-hive-service-default.html

ivy-retrieve:
 [echo] Project: service

compile:
 [echo] Project: service

ivy-resolve-test:
 [echo] Project: service

ivy-retrieve-test:
 [echo] Project: service

compile-test:
 [echo] Project: service
[javac] Compiling 2 source files to 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/90/artifact/hive/build/service/test/classes

test:
 [echo] Project: hive

test-shims:
 [echo] Project: hive

test-conditions:
 [echo] Project: shims

gen-test:
 [echo] Project: shims

create-dirs:
 [echo] Project: shims
 [copy] Warning: 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/shims/src/test/resources
 does not exist.

init:
 [echo] Project: shims

ivy-init-settings:
 [echo] Project: shims

ivy-resolve:
 [echo] Project: shims
[ivy:resolve] :: loading settings :: file = 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/ivy/ivysettings.xml
[ivy:report] Processing 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/90/artifact/hive/build/ivy/resolution-cache/org.apache.hive-hive-shims-default.xml
 to 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/90/artifact/hive/build/ivy/report/org.apache.hive-hive-shims-default.html

ivy-retrieve:
 [echo] Project: shims

compile:
 [echo] Project: shims
 [echo] Building shims 0.20

build_shims:
 [echo] Project: shims
 [echo] Compiling 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/shims/src/common/java;/home/hudson/hudson-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/hive/shims/src/0.20/java
 against hadoop 0.20.2 
(https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/90/artifact/hive/build/hadoopcore/hadoop-0.20.2)

ivy-init-settings:
 [echo] Project: shims

ivy-resolve-hadoop-shim:
 [echo] Project: shims
[ivy:resolve] :: loading settings :: file = 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/ivy/ivysettings.xml

ivy-retrieve-hadoop-shim:
 [echo] Project: shims
 [echo] Building shims 0.20S

build_shims:
 [echo] Project: shims
 [echo] Compiling 

[jira] [Commented] (HIVE-3314) Extract global limit configuration to optimizer

2012-07-29 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13424689#comment-13424689
 ] 

Namit Jain commented on HIVE-3314:
--

Answered the comments.

 Extract global limit configuration to optimizer
 ---

 Key: HIVE-3314
 URL: https://issues.apache.org/jira/browse/HIVE-3314
 Project: Hive
  Issue Type: Task
  Components: Query Processor
Affects Versions: 0.10.0
Reporter: Navis
Assignee: Navis
Priority: Trivial

 SemanticAnalyzer is growing bigger and bigger. If some code can be
 separated cleanly, it would be better to do that for simplicity.
 This was originally part of HIVE-2925; splitting it into a separate issue
 was suggested at
 https://issues.apache.org/jira/browse/HIVE-2925?focusedCommentId=13423754page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13423754





[jira] [Commented] (HIVE-3218) Stream table of SMBJoin/BucketMapJoin with two or more partitions is not handled properly

2012-07-29 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13424691#comment-13424691
 ] 

Namit Jain commented on HIVE-3218:
--

@Navis, can you update the test file?
Let us try to get this in.

 Stream table of SMBJoin/BucketMapJoin with two or more partitions is not 
 handled properly
 -

 Key: HIVE-3218
 URL: https://issues.apache.org/jira/browse/HIVE-3218
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.10.0
Reporter: Navis
Assignee: Navis
Priority: Critical
 Attachments: HIVE-3218.1.patch.txt


 {noformat}
 drop table hive_test_smb_bucket1;
 drop table hive_test_smb_bucket2;
 create table hive_test_smb_bucket1 (key int, value string) partitioned by (ds 
 string) clustered by (key) sorted by (key) into 2 buckets;
 create table hive_test_smb_bucket2 (key int, value string) partitioned by (ds 
 string) clustered by (key) sorted by (key) into 2 buckets;
 set hive.enforce.bucketing = true;
 set hive.enforce.sorting = true;
 insert overwrite table hive_test_smb_bucket1 partition (ds='2010-10-14') 
 select key, value from src;
 insert overwrite table hive_test_smb_bucket1 partition (ds='2010-10-15') 
 select key, value from src;
 insert overwrite table hive_test_smb_bucket2 partition (ds='2010-10-15') 
 select key, value from src;
 set hive.optimize.bucketmapjoin = true;
 set hive.optimize.bucketmapjoin.sortedmerge = true;
 set hive.input.format = 
 org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;
 SELECT /* + MAPJOIN(b) */ * FROM hive_test_smb_bucket1 a JOIN 
 hive_test_smb_bucket2 b ON a.key = b.key;
 {noformat}
 which makes a bucket join context like:
 {noformat}
 Alias Bucket Output File Name Mapping:
 
 hdfs://localhost:9000/user/hive/warehouse/hive_test_smb_bucket1/ds=2010-10-14/00_0
  0
 
 hdfs://localhost:9000/user/hive/warehouse/hive_test_smb_bucket1/ds=2010-10-14/01_0
  1
 
 hdfs://localhost:9000/user/hive/warehouse/hive_test_smb_bucket1/ds=2010-10-15/00_0
  0
 
 hdfs://localhost:9000/user/hive/warehouse/hive_test_smb_bucket1/ds=2010-10-15/01_0
  1
 {noformat}
 and fails with the exception:
 {noformat}
 java.lang.RuntimeException: Hive Runtime Error while closing operators
   at org.apache.hadoop.hive.ql.exec.ExecMapper.close(ExecMapper.java:226)
   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:391)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
   at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:416)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
   at org.apache.hadoop.mapred.Child.main(Child.java:264)
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to rename 
 output from: 
 hdfs://localhost:9000/tmp/hive-navis/hive_2012-06-29_22-17-49_574_6018646381714861925/_task_tmp.-ext-10001/_tmp.01_0
  to: 
 hdfs://localhost:9000/tmp/hive-navis/hive_2012-06-29_22-17-49_574_6018646381714861925/_tmp.-ext-10001/01_0
   at 
 org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.commit(FileSinkOperator.java:198)
   at 
 org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.access$300(FileSinkOperator.java:100)
   at 
 org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:717)
   at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:557)
   at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
   at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
   at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
   at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
   at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
   at org.apache.hadoop.hive.ql.exec.ExecMapper.close(ExecMapper.java:193)
   ... 8 more
 {noformat}





[jira] [Commented] (HIVE-3218) Stream table of SMBJoin/BucketMapJoin with two or more partitions is not handled properly

2012-07-29 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13424692#comment-13424692
 ] 

Navis commented on HIVE-3218:
-

@Namit Jain, updated the test file just now.





[jira] [Commented] (HIVE-3314) Extract global limit configuration to optimizer

2012-07-29 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13424693#comment-13424693
 ] 

Navis commented on HIVE-3314:
-

Updated the patch.





[jira] [Commented] (HIVE-3227) Implement data loading from user provided string directly for test

2012-07-29 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13424695#comment-13424695
 ] 

Navis commented on HIVE-3227:
-

You're right. I didn't think about that.

 Implement data loading from user provided string directly for test
 --

 Key: HIVE-3227
 URL: https://issues.apache.org/jira/browse/HIVE-3227
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor, Testing Infrastructure
Affects Versions: 0.10.0
Reporter: Navis
Assignee: Navis
Priority: Trivial

 {code}
 load data instream 'key value\nkey2 value2' into table test;
 {code}
 This will make tests easier and can also reduce test time. For example,
 {code}
 -- ppr_pushdown.q
 create table ppr_test (key string) partitioned by (ds string);
 alter table ppr_test add partition (ds = '1234');
 insert overwrite table ppr_test partition(ds = '1234') select * from (select 
 '1234' from src limit 1 union all select 'abcd' from src limit 1) s;
 {code}
 The last query is a 4-MR job, but it can be replaced by
 {code}
 create table ppr_test (key string) partitioned by (ds string) ROW FORMAT 
 delimited fields terminated by ' ';
 alter table ppr_test add partition (ds = '1234');
 load data local instream '1234\nabcd' overwrite into table ppr_test 
 partition(ds = '1234');
 {code}





[jira] [Commented] (HIVE-3218) Stream table of SMBJoin/BucketMapJoin with two or more partitions is not handled properly

2012-07-29 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13424698#comment-13424698
 ] 

Namit Jain commented on HIVE-3218:
--

+1





[jira] [Updated] (HIVE-3218) Stream table of SMBJoin/BucketMapJoin with two or more partitions is not handled properly

2012-07-29 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-3218:
-

Attachment: hive.3218.2.patch





[jira] [Updated] (HIVE-3218) Stream table of SMBJoin/BucketMapJoin with two or more partitions is not handled properly

2012-07-29 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-3218:
-

Status: Patch Available  (was: Open)





[jira] [Updated] (HIVE-3311) Convert runtime exceptions to semantic exceptions for validation of alter table commands

2012-07-29 Thread Sambavi Muthukrishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sambavi Muthukrishnan updated HIVE-3311:


Attachment: HIVE-3311.2.patch

 Convert runtime exceptions to semantic exceptions for validation of alter 
 table commands
 

 Key: HIVE-3311
 URL: https://issues.apache.org/jira/browse/HIVE-3311
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.9.0
Reporter: Sambavi Muthukrishnan
Assignee: Sambavi Muthukrishnan
Priority: Minor
 Attachments: HIVE-3311.2.patch

   Original Estimate: 4h
  Remaining Estimate: 4h

 validateAlterTableType in DDLTask.java does a bunch of checks to ensure that 
 the alter table/view commands are correct (operations match table type, 
 command matches table type).
 This JIRA tracks moving these to semantic exceptions.
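
For illustration only, the intended shape of the change with simplified
stand-in signatures; only SemanticException is a real Hive class here:

{code}
import org.apache.hadoop.hive.ql.parse.SemanticException;

class AlterTableValidationSketch {
    // Before: the check lived in DDLTask and could only fail at run time.
    // After: the same check runs during semantic analysis and is reported
    // as a SemanticException before any task executes.
    static void validateAlterTableType(boolean targetIsView,
                                       boolean opAllowedOnViews)
            throws SemanticException {
        if (targetIsView && !opAllowedOnViews) {
            throw new SemanticException(
                "Cannot use this ALTER operation on a view");
        }
    }
}
{code}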





[jira] [Updated] (HIVE-3311) Convert runtime exceptions to semantic exceptions for validation of alter table commands

2012-07-29 Thread Sambavi Muthukrishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sambavi Muthukrishnan updated HIVE-3311:


Status: Patch Available  (was: Open)





[jira] [Updated] (HIVE-3311) Convert runtime exceptions to semantic exceptions for validation of alter table commands

2012-07-29 Thread Sambavi Muthukrishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sambavi Muthukrishnan updated HIVE-3311:


Attachment: HIVE-3311.3.patch





[jira] [Updated] (HIVE-3311) Convert runtime exceptions to semantic exceptions for validation of alter table commands

2012-07-29 Thread Sambavi Muthukrishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sambavi Muthukrishnan updated HIVE-3311:


Attachment: (was: HIVE-3311.3.patch)





[jira] [Updated] (HIVE-3311) Convert runtime exceptions to semantic exceptions for validation of alter table commands

2012-07-29 Thread Sambavi Muthukrishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sambavi Muthukrishnan updated HIVE-3311:


Status: Open  (was: Patch Available)





[jira] [Updated] (HIVE-3311) Convert runtime exceptions to semantic exceptions for validation of alter table commands

2012-07-29 Thread Sambavi Muthukrishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sambavi Muthukrishnan updated HIVE-3311:


Status: Patch Available  (was: Open)





[jira] [Updated] (HIVE-3311) Convert runtime exceptions to semantic exceptions for validation of alter table commands

2012-07-29 Thread Sambavi Muthukrishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sambavi Muthukrishnan updated HIVE-3311:


Attachment: HIVE-3311.3.patch
