[jira] [Commented] (HIVE-2379) Hive/HBase integration could be improved

2013-04-21 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637486#comment-13637486
 ] 

Ashutosh Chauhan commented on HIVE-2379:


Navis, as Nick has noted on [this jira | 
https://issues.apache.org/jira/browse/HIVE-2379?focusedCommentId=13568308&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13568308]
 we should probably use TableMapReduceUtil.addDependencyJars(job).
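The mechanism behind TableMapReduceUtil.addDependencyJars is simple: resolve the classpath location (jar or directory) each needed class was loaded from, then register those paths with the job's DistributedCache. A minimal, hypothetical sketch of the jar-resolution step (class and method names here are illustrative, not Hive's or HBase's actual code):

```java
import java.net.URL;

public class DependencyJars {
    // Resolve the classpath location (jar file or classes directory) a class
    // was loaded from. This lookup is the core of what utilities like
    // TableMapReduceUtil.addDependencyJars build on before registering
    // each location with the DistributedCache.
    public static URL locationOf(Class<?> clazz) {
        if (clazz.getProtectionDomain().getCodeSource() == null) {
            // Bootstrap classes (e.g. java.lang.String) have no code source.
            return null;
        }
        return clazz.getProtectionDomain().getCodeSource().getLocation();
    }

    public static void main(String[] args) {
        // Our own class always has a resolvable location (a jar or build dir);
        // in a real job each such URL would be shipped with the job.
        System.out.println(locationOf(DependencyJars.class));
    }
}
```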

 Hive/HBase integration could be improved
 

 Key: HIVE-2379
 URL: https://issues.apache.org/jira/browse/HIVE-2379
 Project: Hive
  Issue Type: Bug
  Components: CLI, Clients, HBase Handler
Affects Versions: 0.7.1, 0.8.0, 0.9.0
Reporter: Roman Shaposhnik
Assignee: Navis
Priority: Critical
 Attachments: HIVE-2379.D7347.1.patch, HIVE-2379.D7347.2.patch


 For now any Hive/HBase queries would require the following jars to be 
 explicitly added via hive's add jar command:
 add jar /usr/lib/hive/lib/hbase-0.90.1-cdh3u0.jar;
 add jar /usr/lib/hive/lib/hive-hbase-handler-0.7.0-cdh3u0.jar;
 add jar /usr/lib/hive/lib/zookeeper-3.3.1.jar;
 add jar /usr/lib/hive/lib/guava-r06.jar;
 The longer-term solution, perhaps, should be to have the code at submit time 
 call HBase's 
 TableMapReduceUtil.addDependencyJars(job, HBaseStorageHandler.class) to ship 
 them in the DistributedCache.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3952) merge map-job followed by map-reduce job

2013-04-21 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637488#comment-13637488
 ] 

Ashutosh Chauhan commented on HIVE-3952:


Patch is not applying any longer. 
It will be good to have this in 0.11.

 merge map-job followed by map-reduce job
 

 Key: HIVE-3952
 URL: https://issues.apache.org/jira/browse/HIVE-3952
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Namit Jain
Assignee: Vinod Kumar Vavilapalli
 Attachments: HIVE-3952-20130226.txt, HIVE-3952-20130227.1.txt, 
 HIVE-3952-20130301.txt


 Consider the query like:
 select count(*) FROM
 ( select idOne, idTwo, value FROM
   bigTable   
   JOIN
 
   smallTableOne on (bigTable.idOne = smallTableOne.idOne) 
   
   ) firstjoin 
 
 JOIN  
 
 smallTableTwo on (firstjoin.idTwo = smallTableTwo.idTwo);
 where smallTableOne and smallTableTwo are smaller than 
 hive.auto.convert.join.noconditionaltask.size and
 hive.auto.convert.join.noconditionaltask is set to true.
 The joins are collapsed into mapjoins, and it leads to a map-only job
 (for the map-joins) followed by a map-reduce job (for the group by).
 Ideally, the map-only job should be merged with the following map-reduce job.



[jira] [Updated] (HIVE-3861) Upgrade hbase dependency to 0.94

2013-04-21 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-3861:
-

Attachment: HIVE-3861.4.patch

 Upgrade hbase dependency to 0.94
 

 Key: HIVE-3861
 URL: https://issues.apache.org/jira/browse/HIVE-3861
 Project: Hive
  Issue Type: Bug
  Components: HBase Handler
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Attachments: HIVE-3861.2.patch, HIVE-3861.3.patch, HIVE-3861.4.patch, 
 HIVE-3861.patch


 Hive tests fail to run against hbase v0.94.2. Proposing to upgrade the 
 dependency and change the test setup to properly work with the newer version.



[jira] [Commented] (HIVE-3861) Upgrade hbase dependency to 0.94

2013-04-21 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637509#comment-13637509
 ] 

Gunther Hagleitner commented on HIVE-3861:
--

bq. protobuf jar is already pulled in via ql. I think we don't need to declare 
it here. Transitive dependency should take care of it.

Done.

bq. file:// vs file:/// whats the reason for that? Can you test this on 
windows? I remember last time we had trouble with hbase integration on windows 
for which this thing was related.

You need the three slashes, especially on Windows. The RFCs require /// (two 
for the authority, one to separate the authority from the path). When you don't 
have a host, that leaves file:///path. On Unix you typically get away with two, 
but on Windows (C:...) you need the third.
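The parsing consequence is easy to demonstrate with java.net.URI (an illustrative check, not code from this patch): with only two slashes, the Windows drive letter is consumed as the URI authority and disappears from the path.

```java
import java.net.URI;

public class FileUriSlashes {
    public static void main(String[] args) throws Exception {
        // Three slashes: the authority is empty (null) and the drive
        // letter stays in the path, as intended.
        URI three = new URI("file:///C:/hbase/test");
        System.out.println(three.getAuthority() + " " + three.getPath());

        // Two slashes: "C:" is parsed as the authority, so the drive
        // letter is silently dropped from the path.
        URI two = new URI("file://C:/hbase/test");
        System.out.println(two.getAuthority() + " " + two.getPath());
    }
}
```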


 Upgrade hbase dependency to 0.94
 

 Key: HIVE-3861
 URL: https://issues.apache.org/jira/browse/HIVE-3861
 Project: Hive
  Issue Type: Bug
  Components: HBase Handler
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Attachments: HIVE-3861.2.patch, HIVE-3861.3.patch, HIVE-3861.4.patch, 
 HIVE-3861.patch


 Hive tests fail to run against hbase v0.94.2. Proposing to upgrade the 
 dependency and change the test setup to properly work with the newer version.



[jira] [Updated] (HIVE-3861) Upgrade hbase dependency to 0.94

2013-04-21 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-3861:
-

Status: Patch Available  (was: Open)

 Upgrade hbase dependency to 0.94
 

 Key: HIVE-3861
 URL: https://issues.apache.org/jira/browse/HIVE-3861
 Project: Hive
  Issue Type: Bug
  Components: HBase Handler
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Attachments: HIVE-3861.2.patch, HIVE-3861.3.patch, HIVE-3861.4.patch, 
 HIVE-3861.patch


 Hive tests fail to run against hbase v0.94.2. Proposing to upgrade the 
 dependency and change the test setup to properly work with the newer version.



[jira] [Commented] (HIVE-4378) Counters hit performance even when not used

2013-04-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637513#comment-13637513
 ] 

Hudson commented on HIVE-4378:
--

Integrated in Hive-trunk-h0.21 #2072 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2072/])
HIVE-4378 : Counters hit performance even when not used (Gunther Hagleitner 
via Ashutosh Chauhan) (Revision 1470100)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1470100
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/GroupByOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java


 Counters hit performance even when not used
 ---

 Key: HIVE-4378
 URL: https://issues.apache.org/jira/browse/HIVE-4378
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Fix For: 0.12.0

 Attachments: HIVE-4378.1.patch


 preprocess/postprocess counters perform a number of computations even when 
 there are no counters to update. Performance runs are captured in: 
 https://issues.apache.org/jira/browse/HIVE-4318



[jira] [Commented] (HIVE-4318) OperatorHooks hit performance even when not used

2013-04-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637514#comment-13637514
 ] 

Hudson commented on HIVE-4318:
--

Integrated in Hive-trunk-h0.21 #2072 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2072/])
HIVE-4318 : OperatorHooks hit performance even when not used (Gunther 
Hagleitner via Ashutosh Chauhan) (Revision 1470101)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1470101
Files : 
* /hive/trunk/build-common.xml
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ExecMapper.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ExecReducer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/OperatorHook.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/OperatorHookContext.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/OperatorHookUtils.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/profiler/HiveProfilePublisher.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/profiler/HiveProfilePublisherInfo.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/profiler/HiveProfiler.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/profiler/HiveProfilerAggregateStat.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/profiler/HiveProfilerConnectionInfo.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/profiler/HiveProfilerStats.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/profiler/HiveProfilerStatsAggregator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/profiler/HiveProfilerUtils.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/TstOperatorHook.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/TstOperatorHookUtils.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/hooks/HiveProfilerResultsHook.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/hooks/PostTestOperatorHook.java
* /hive/trunk/ql/src/test/queries/clientpositive/hiveprofiler0.q
* /hive/trunk/ql/src/test/queries/clientpositive/hiveprofiler_script0.q
* /hive/trunk/ql/src/test/queries/clientpositive/hiveprofiler_union0.q
* /hive/trunk/ql/src/test/queries/clientpositive/operatorhook.q
* /hive/trunk/ql/src/test/results/clientpositive/hiveprofiler0.q.out
* /hive/trunk/ql/src/test/results/clientpositive/hiveprofiler_script0.q.out
* /hive/trunk/ql/src/test/results/clientpositive/hiveprofiler_union0.q.out
* /hive/trunk/ql/src/test/results/clientpositive/operatorhook.q.out


 OperatorHooks hit performance even when not used
 

 Key: HIVE-4318
 URL: https://issues.apache.org/jira/browse/HIVE-4318
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
 Environment: Ubuntu LXC (64 bit)
Reporter: Gopal V
Assignee: Gunther Hagleitner
 Fix For: 0.12.0

 Attachments: HIVE-4318.1.patch, HIVE-4318.2.patch, HIVE-4318.3.patch, 
 HIVE-4318.patch.pam.txt


 Operator Hooks inserted into Operator.java cause a performance hit even when 
 they are not being used.
 For a count(1) query tested with and without the operator hook calls:
 {code:title=with}
 2013-04-09 07:33:58,920 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 
 84.07 sec
 Total MapReduce CPU Time Spent: 1 minutes 24 seconds 70 msec
 OK
 28800991
 Time taken: 40.407 seconds, Fetched: 1 row(s)
 {code}
 {code:title=without}
 2013-04-09 07:33:02,355 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 
 68.48 sec
 ...
 Total MapReduce CPU Time Spent: 1 minutes 8 seconds 480 msec
 OK
 28800991
 Time taken: 35.907 seconds, Fetched: 1 row(s)
 {code}
 The effect is multiplied by the number of operators in the pipeline that have 
 to forward the row: the more operators there are, the slower the query.
 The modification made to test this was 
 {code:title=Operator.java}
 --- ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java
 +++ ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java
 @@ -526,16 +526,16 @@ public void process(Object row, int tag) throws 
 HiveException {
return;
  }
  OperatorHookContext opHookContext = new OperatorHookContext(this, row, 
 tag);
 -preProcessCounter();
 -enterOperatorHooks(opHookContext);
 +//preProcessCounter();
 +//enterOperatorHooks(opHookContext);
  processOp(row, tag);
 -exitOperatorHooks(opHookContext);
 -postProcessCounter();
 +//exitOperatorHooks(opHookContext);
 +//postProcessCounter();
}
 {code}
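The fix direction suggested by this measurement is to make the per-row work conditional: skip the context allocation and hook calls entirely when nothing is registered. A minimal, hypothetical sketch of that guard pattern (class names are illustrative, not Hive's actual operator code):

```java
import java.util.ArrayList;
import java.util.List;

public class GuardedOperator {
    interface RowHook { void onRow(Object row, int tag); }

    private final List<RowHook> hooks = new ArrayList<>();
    long rowsProcessed = 0;

    void addHook(RowHook h) { hooks.add(h); }

    void process(Object row, int tag) {
        // Only pay for hook bookkeeping (context allocation, extra virtual
        // calls) when a hook is actually registered; the common case costs
        // just one isEmpty() check per row.
        if (!hooks.isEmpty()) {
            for (RowHook h : hooks) {
                h.onRow(row, tag);
            }
        }
        rowsProcessed++; // stands in for processOp(row, tag)
    }

    public static void main(String[] args) {
        GuardedOperator op = new GuardedOperator();
        for (int i = 0; i < 1000; i++) {
            op.process("row-" + i, 0);
        }
        System.out.println(op.rowsProcessed);
    }
}
```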

[jira] [Commented] (HIVE-4304) Remove unused builtins and pdk submodules

2013-04-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637512#comment-13637512
 ] 

Hudson commented on HIVE-4304:
--

Integrated in Hive-trunk-h0.21 #2072 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2072/])
HIVE-4304 : Remove unused builtins and pdk submodules (Travis Crawford via 
Ashutosh Chauhan) (Revision 1470203)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1470203
Files : 
* /hive/trunk/bin/hive
* /hive/trunk/build.properties
* /hive/trunk/build.xml
* /hive/trunk/builtins
* /hive/trunk/eclipse-templates/.classpath
* /hive/trunk/hcatalog/build-support/ant/deploy.xml
* /hive/trunk/pdk
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java
* /hive/trunk/ql/src/test/results/clientpositive/show_functions.q.out


 Remove unused builtins and pdk submodules
 -

 Key: HIVE-4304
 URL: https://issues.apache.org/jira/browse/HIVE-4304
 Project: Hive
  Issue Type: Improvement
Reporter: Travis Crawford
Assignee: Travis Crawford
 Fix For: 0.12.0

 Attachments: HIVE-4304.1.patch, HIVE-4304.patch


 Moving this discussion from email. The 
 [builtins|http://svn.apache.org/repos/asf/hive/trunk/builtins/] and 
 [pdk|http://svn.apache.org/repos/asf/hive/trunk/pdk/] submodules are not 
 believed to be in use and should be removed. The main benefits are 
 simplification and maintainability of the Hive code base.
 Forwarded conversation
 Subject: builtins submodule - is it still needed?
 
 From: Travis Crawford traviscrawf...@gmail.com
 Date: Thu, Apr 4, 2013 at 2:01 PM
 To: u...@hive.apache.org, dev@hive.apache.org
 Hey hive gurus -
 Is the builtins hive submodule in use? The submodule was added in
 HIVE-2523 as a location for builtin-UDFs, but it appears to not have
 taken off. Any objections to removing it?
 DETAILS
 For HIVE-4278 I'm making some build changes for the HCatalog
 integration. The builtins submodule causes issues because it delays
 building until the packaging phase - so HCatalog can't depend on
 builtins, which it does transitively.
 While investigating a path forward I discovered the builtins
 submodule contains very little code, and likely could either go away
 entirely or merge into ql, simplifying things both for users and
 developers.
 Thoughts? Can anyone with context help me understand builtins, both
 in general and around its non-standard build? For your trouble I'll
 either make the submodule go away/merge into another submodule, or
 update the docs with what we learn.
 Thanks!
 Travis
 --
 From: Ashutosh Chauhan ashutosh.chau...@gmail.com
 Date: Fri, Apr 5, 2013 at 3:10 PM
 To: dev@hive.apache.org
 Cc: u...@hive.apache.org u...@hive.apache.org
 I haven't used it myself at any time till now, nor have I met anyone who used
 it or plans to use it.
 Ashutosh
 On Thu, Apr 4, 2013 at 2:01 PM, Travis Crawford 
 traviscrawf...@gmail.comwrote:
 --
 From: Gunther Hagleitner ghagleit...@hortonworks.com
 Date: Fri, Apr 5, 2013 at 3:11 PM
 To: dev@hive.apache.org
 Cc: u...@hive.apache.org
 +1
 I would actually go a step further and propose to remove both PDK and
 builtins. I've gone through the code for both and here is what I found:
 Builtins:
 - BuiltInUtils.java: Empty file
 - UDAFUnionMap: Merges maps. Doesn't seem to be useful by itself, but was
 intended as a building block for PDK
 PDK:
 - some helper build.xml/test setup + teardown scripts
 - Classes/annotations to help run unit tests
 - rot13 as an example
 From what I can tell it's a fair assessment that it hasn't taken off; the last
 commits to it seem to have happened more than 1.5 years ago.
 Thanks,
 Gunther.
 On Thu, Apr 4, 2013 at 2:01 PM, Travis Crawford 
 traviscrawf...@gmail.comwrote:
 --
 From: Owen O'Malley omal...@apache.org
 Date: Fri, Apr 5, 2013 at 4:45 PM
 To: u...@hive.apache.org
 +1 to removing them. 
 We have a Rot13 example in 
 ql/src/test/org/apache/hadoop/hive/ql/io/udf/Rot13{In,Out}putFormat.java 
 anyways. *smile*
 -- Owen



[jira] [Commented] (HIVE-4356) remove duplicate impersonation parameters for hiveserver2

2013-04-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637515#comment-13637515
 ] 

Hudson commented on HIVE-4356:
--

Integrated in Hive-trunk-h0.21 #2072 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2072/])
HIVE-4356 :  remove duplicate impersonation parameters for hiveserver2 
(Gunther Hagleitner via Ashutosh Chauhan) (Revision 1470102)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1470102
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* /hive/trunk/service/src/java/org/apache/hive/service/auth/PlainSaslHelper.java
* /hive/trunk/service/src/java/org/apache/hive/service/cli/CLIService.java
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/thrift/ThriftCLIService.java
* /hive/trunk/service/src/test/org/apache/hive/service/auth
* 
/hive/trunk/service/src/test/org/apache/hive/service/auth/TestPlainSaslHelper.java
* /hive/trunk/service/src/test/org/apache/hive/service/cli/thrift
* 
/hive/trunk/service/src/test/org/apache/hive/service/cli/thrift/TestThriftCLIService.java


 remove duplicate impersonation parameters for hiveserver2
 -

 Key: HIVE-4356
 URL: https://issues.apache.org/jira/browse/HIVE-4356
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.12.0

 Attachments: HIVE-4356.1.patch


 There are two parameters controlling impersonation in HiveServer2: 
 hive.server2.enable.doAs controls this in Kerberos secure mode, while 
 hive.server2.enable.impersonation controls it in unsecure mode.
 We should have just one for both modes.
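After consolidation, a deployment would set only the single parameter in hive-site.xml for both secure and unsecure modes, along the lines of this illustrative snippet:

```xml
<property>
  <name>hive.server2.enable.doAs</name>
  <value>true</value>
  <description>Run HiveServer2 operations as the connected user,
    in both Kerberos-secure and unsecure modes.</description>
</property>
```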



[jira] [Commented] (HIVE-4189) ORC fails with String column that ends in lots of nulls

2013-04-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637516#comment-13637516
 ] 

Hudson commented on HIVE-4189:
--

Integrated in Hive-trunk-h0.21 #2072 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2072/])
HIVE-4189 : ORC fails with String column that ends in lots of nulls (Kevin
Wilfong) (Revision 1470080)

 Result = ABORTED
omalley : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1470080
Files : 
* /hive/trunk/data/files/nulls.txt
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/WriterImpl.java
* /hive/trunk/ql/src/test/queries/clientpositive/orc_ends_with_nulls.q
* /hive/trunk/ql/src/test/results/clientpositive/orc_ends_with_nulls.q.out


 ORC fails with String column that ends in lots of nulls
 ---

 Key: HIVE-4189
 URL: https://issues.apache.org/jira/browse/HIVE-4189
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Affects Versions: 0.11.0
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Fix For: 0.11.0

 Attachments: HIVE-4189.1.patch.txt, HIVE-4189.2.patch.txt


 When ORC attempts to write out a string column that ends in enough nulls to 
 span an index stride, StringTreeWriter's writeStripe method will get an 
 exception from TreeWriter's writeStripe method:
 Column has wrong number of index entries found: x expected: y
 This is caused by rowIndexValueCount having multiple entries equal to the 
 number of non-null rows in the column, combined with the fact that 
 StringTreeWriter has special logic for constructing its index.
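The index bookkeeping described above can be illustrated with a small, hypothetical sketch (not ORC's actual code): the writer records the cumulative non-null count at each stride boundary, so a trailing run of nulls produces duplicate entries at the end of the list, which is what trips the special-cased string index construction.

```java
import java.util.ArrayList;
import java.util.List;

public class StrideIndexSketch {
    // Record the cumulative non-null value count at each index-stride
    // boundary, the way a row index tracks writer positions per stride.
    static List<Integer> strideCounts(Integer[] column, int stride) {
        List<Integer> counts = new ArrayList<>();
        int nonNull = 0;
        for (int i = 0; i < column.length; i++) {
            if (i % stride == 0) {
                counts.add(nonNull);
            }
            if (column[i] != null) {
                nonNull++;
            }
        }
        counts.add(nonNull); // final entry at the end of the stripe
        return counts;
    }

    public static void main(String[] args) {
        // The column ends with a full stride of nulls, so the last two
        // entries are equal -- the scenario described in the issue.
        Integer[] column = {1, 2, null, 3, null, null, null, null};
        System.out.println(strideCounts(column, 4)); // [0, 3, 3]
    }
}
```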



[jira] [Commented] (HIVE-4310) optimize count(distinct) with hive.map.groupby.sorted

2013-04-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637517#comment-13637517
 ] 

Hudson commented on HIVE-4310:
--

Integrated in Hive-trunk-h0.21 #2072 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2072/])
HIVE-4310 optimize count(distinct) with hive.map.groupby.sorted
(Namit Jain via Gang Tim Liu) (Revision 1470182)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1470182
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/GroupByOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GroupByOptimizer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/GroupByDesc.java
* /hive/trunk/ql/src/test/queries/clientpositive/groupby_sort_11.q
* /hive/trunk/ql/src/test/queries/clientpositive/groupby_sort_8.q
* /hive/trunk/ql/src/test/results/clientpositive/groupby_sort_11.q.out
* /hive/trunk/ql/src/test/results/clientpositive/groupby_sort_8.q.out
* /hive/trunk/ql/src/test/results/compiler/plan/groupby2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby3.q.xml


 optimize count(distinct) with hive.map.groupby.sorted
 -

 Key: HIVE-4310
 URL: https://issues.apache.org/jira/browse/HIVE-4310
 Project: Hive
  Issue Type: Improvement
Reporter: Namit Jain
Assignee: Namit Jain
 Fix For: 0.12.0

 Attachments: hive.4310.1.patch, hive.4310.1.patch-nohcat, 
 hive.4310.2.patch-nohcat, hive.4310.3.patch-nohcat, hive.4310.4.patch






[jira] [Commented] (HIVE-2379) Hive/HBase integration could be improved

2013-04-21 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637522#comment-13637522
 ] 

Navis commented on HIVE-2379:
-

It uses TableMapReduceUtil.addDependencyJars(job) in HiveStorageHandler.

 Hive/HBase integration could be improved
 

 Key: HIVE-2379
 URL: https://issues.apache.org/jira/browse/HIVE-2379
 Project: Hive
  Issue Type: Bug
  Components: CLI, Clients, HBase Handler
Affects Versions: 0.7.1, 0.8.0, 0.9.0
Reporter: Roman Shaposhnik
Assignee: Navis
Priority: Critical
 Attachments: HIVE-2379.D7347.1.patch, HIVE-2379.D7347.2.patch


 For now any Hive/HBase queries would require the following jars to be 
 explicitly added via hive's add jar command:
 add jar /usr/lib/hive/lib/hbase-0.90.1-cdh3u0.jar;
 add jar /usr/lib/hive/lib/hive-hbase-handler-0.7.0-cdh3u0.jar;
 add jar /usr/lib/hive/lib/zookeeper-3.3.1.jar;
 add jar /usr/lib/hive/lib/guava-r06.jar;
 The longer-term solution, perhaps, should be to have the code at submit time 
 call HBase's 
 TableMapReduceUtil.addDependencyJars(job, HBaseStorageHandler.class) to ship 
 them in the DistributedCache.



[jira] [Commented] (HIVE-4106) SMB joins fail in multi-way joins

2013-04-21 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637524#comment-13637524
 ] 

Namit Jain commented on HIVE-4106:
--

[~ashutoshc], confirmed that the test still failed after HIVE-4371

 SMB joins fail in multi-way joins
 -

 Key: HIVE-4106
 URL: https://issues.apache.org/jira/browse/HIVE-4106
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Vikram Dixit K
Assignee: Namit Jain
Priority: Blocker
 Attachments: auto_sortmerge_join_12.q, hive.4106.1.patch, 
 hive.4106.2.patch, HIVE-4106.patch


 I see an array out of bounds exception in the case of multi-way SMB joins. This is 
 related to changes that went in as part of HIVE-3403. This issue has been 
 discussed in HIVE-3891.



[jira] [Commented] (HIVE-4342) NPE for query involving UNION ALL with nested JOIN and UNION ALL

2013-04-21 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637537#comment-13637537
 ] 

Namit Jain commented on HIVE-4342:
--

+1

 NPE for query involving UNION ALL with nested JOIN and UNION ALL
 

 Key: HIVE-4342
 URL: https://issues.apache.org/jira/browse/HIVE-4342
 Project: Hive
  Issue Type: Bug
  Components: Logging, Metastore, Query Processor
Affects Versions: 0.9.0
 Environment: Red Hat Linux VM with Hive 0.9 and Hadoop 2.0
Reporter: Mihir Kulkarni
Assignee: Navis
Priority: Critical
 Attachments: HIVE-4342.D10407.1.patch, HiveCommands.txt, Query.txt, 
 sourceData1.txt, sourceData2.txt


 UNION ALL query with JOIN in first part and another UNION ALL in second part 
 gives NPE.
 bq. JOIN
 UNION ALL
 bq. UNION ALL
 Attachments:
 1. HiveCommands.txt : command script to setup schema for query under 
 consideration.
 2. sourceData1.txt and sourceData2.txt : required for above command script.
 3. Query.txt : Exact query which produces NPE.
 NOTE: you will need to update path to sourceData1.txt and sourceData2.txt in 
 the HiveCommands.txt to suit your environment.
 Attached files contain the schema and exact query which fails on Hive 0.9.
 It is worthwhile to note that the same query executes successfully on Hive 
 0.7.



[jira] [Updated] (HIVE-4371) some issue with merging join trees

2013-04-21 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-4371:
--

Attachment: HIVE-4371.D10323.2.patch

navis updated the revision HIVE-4371 [jira] some issue with merging join 
trees.

  Added test case

Reviewers: JIRA

REVISION DETAIL
  https://reviews.facebook.net/D10323

CHANGE SINCE LAST DIFF
  https://reviews.facebook.net/D10323?vs=32343&id=32577#toc

AFFECTED FILES
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/AbstractSMBJoinProc.java
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/AvgPartitionSizeBasedBigTableSelectorForAutoSMJ.java
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/SizeBasedBigTableSelectorForAutoSMJ.java
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/TableSizeBasedBigTableSelectorForAutoSMJ.java
  ql/src/test/queries/clientpositive/auto_sortmerge_join_12.q
  ql/src/test/results/clientpositive/auto_sortmerge_join_12.q.out

To: JIRA, navis


 some issue with merging join trees
 --

 Key: HIVE-4371
 URL: https://issues.apache.org/jira/browse/HIVE-4371
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Namit Jain
Assignee: Navis
 Attachments: HIVE-4371.D10323.1.patch, HIVE-4371.D10323.2.patch


 [~navis], I would really appreciate if you can take a look.
 I am attaching a testcase, for which in the optimizer the join context left
 aliases and right aliases do not look correct.



[jira] [Commented] (HIVE-4106) SMB joins fail in multi-way joins

2013-04-21 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637568#comment-13637568
 ] 

Navis commented on HIVE-4106:
-

[~namit] Sorry, it was a different NPE. Fixed that.

 SMB joins fail in multi-way joins
 -

 Key: HIVE-4106
 URL: https://issues.apache.org/jira/browse/HIVE-4106
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Vikram Dixit K
Assignee: Namit Jain
Priority: Blocker
 Attachments: auto_sortmerge_join_12.q, hive.4106.1.patch, 
 hive.4106.2.patch, HIVE-4106.patch


 I see an array out of bounds exception in the case of multi-way SMB joins. This is 
 related to changes that went in as part of HIVE-3403. This issue has been 
 discussed in HIVE-3891.



[jira] [Resolved] (HIVE-4130) Bring the Lead/Lag UDFs interface in line with Lead/Lag UDAFs

2013-04-21 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan resolved HIVE-4130.


   Resolution: Fixed
Fix Version/s: 0.12.0

Committed to trunk. Thanks, Harish!

 Bring the Lead/Lag UDFs interface in line with Lead/Lag UDAFs
 -

 Key: HIVE-4130
 URL: https://issues.apache.org/jira/browse/HIVE-4130
 Project: Hive
  Issue Type: Bug
  Components: PTF-Windowing
Reporter: Harish Butani
Assignee: Harish Butani
 Fix For: 0.12.0

 Attachments: HIVE-4130.D10233.1.patch, HIVE-4130.D10233.2.patch, 
 HIVE-4130.D10233.3.patch


 - support a default value arg
 - both amt and defaultValue args can be optional



[jira] [Updated] (HIVE-4364) beeline always exits with 0 status, should exit with non-zero status on error

2013-04-21 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-4364:
---

   Resolution: Fixed
Fix Version/s: (was: 0.11.0)
   0.12.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Rob!

 beeline always exits with 0 status, should exit with non-zero status on error
 -

 Key: HIVE-4364
 URL: https://issues.apache.org/jira/browse/HIVE-4364
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.10.0
Reporter: Rob Weltman
Assignee: Rob Weltman
 Fix For: 0.12.0

 Attachments: HIVE-4364.1.patch.txt, HIVE-4364.2.patch.txt


 beeline should exit with non-zero status on error so that executors such as a 
 shell script or Oozie can detect failure.



[jira] [Updated] (HIVE-4342) NPE for query involving UNION ALL with nested JOIN and UNION ALL

2013-04-21 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-4342:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed. Thanks Navis

 NPE for query involving UNION ALL with nested JOIN and UNION ALL
 

 Key: HIVE-4342
 URL: https://issues.apache.org/jira/browse/HIVE-4342
 Project: Hive
  Issue Type: Bug
  Components: Logging, Metastore, Query Processor
Affects Versions: 0.9.0
 Environment: Red Hat Linux VM with Hive 0.9 and Hadoop 2.0
Reporter: Mihir Kulkarni
Assignee: Navis
Priority: Critical
 Attachments: HIVE-4342.D10407.1.patch, HiveCommands.txt, Query.txt, 
 sourceData1.txt, sourceData2.txt


 A UNION ALL query with a JOIN in its first branch and another UNION ALL in its 
 second branch gives an NPE.
 bq. JOIN
 UNION ALL
 bq. UNION ALL
 Attachments:
 1. HiveCommands.txt: command script to set up the schema for the query under 
 consideration.
 2. sourceData1.txt and sourceData2.txt: required by the above command script.
 3. Query.txt: the exact query which produces the NPE.
 NOTE: you will need to update the paths to sourceData1.txt and sourceData2.txt 
 in HiveCommands.txt to suit your environment.
 The attached files contain the schema and the exact query, which fails on Hive 
 0.9. It is worth noting that the same query executes successfully on Hive 0.7.



[jira] [Resolved] (HIVE-4333) most windowing tests fail on hadoop 2

2013-04-21 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan resolved HIVE-4333.


   Resolution: Fixed
Fix Version/s: 0.12.0

Committed to trunk. Thanks, Harish!

 most windowing tests fail on hadoop 2
 -

 Key: HIVE-4333
 URL: https://issues.apache.org/jira/browse/HIVE-4333
 Project: Hive
  Issue Type: Bug
  Components: PTF-Windowing
Affects Versions: 0.11.0
Reporter: Gunther Hagleitner
Assignee: Harish Butani
 Fix For: 0.12.0

 Attachments: HIVE-4333.1.patch.txt, HIVE-4333.D10389.1.patch, 
 HIVE-4333.D10389.2.patch


 Problem is different order of results on hadoop 2



[jira] [Updated] (HIVE-4342) NPE for query involving UNION ALL with nested JOIN and UNION ALL

2013-04-21 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-4342:
---

Fix Version/s: 0.12.0

 NPE for query involving UNION ALL with nested JOIN and UNION ALL
 

 Key: HIVE-4342
 URL: https://issues.apache.org/jira/browse/HIVE-4342
 Project: Hive
  Issue Type: Bug
  Components: Logging, Metastore, Query Processor
Affects Versions: 0.9.0
 Environment: Red Hat Linux VM with Hive 0.9 and Hadoop 2.0
Reporter: Mihir Kulkarni
Assignee: Navis
Priority: Critical
 Fix For: 0.12.0

 Attachments: HIVE-4342.D10407.1.patch, HiveCommands.txt, Query.txt, 
 sourceData1.txt, sourceData2.txt


 A UNION ALL query with a JOIN in its first branch and another UNION ALL in its 
 second branch gives an NPE.
 bq. JOIN
 UNION ALL
 bq. UNION ALL
 Attachments:
 1. HiveCommands.txt: command script to set up the schema for the query under 
 consideration.
 2. sourceData1.txt and sourceData2.txt: required by the above command script.
 3. Query.txt: the exact query which produces the NPE.
 NOTE: you will need to update the paths to sourceData1.txt and sourceData2.txt 
 in HiveCommands.txt to suit your environment.
 The attached files contain the schema and the exact query, which fails on Hive 
 0.9. It is worth noting that the same query executes successfully on Hive 0.7.



[jira] [Commented] (HIVE-2379) Hive/HBase integration could be improved

2013-04-21 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637588#comment-13637588
 ] 

Ashutosh Chauhan commented on HIVE-2379:


Navis, the current patch uses TableMapReduceUtil.addDependencyJars(job, 
Object...). The recommendation is to use 
TableMapReduceUtil.addDependencyJars(job) as well. The difference between the 
two APIs is that the second one automatically pulls in HBase's own 
dependencies, so that we don't have to; if HBase adds a new dependency in the 
future, we will still be safe. So we need to do:
{code}
TableMapReduceUtil.addDependencyJars(job);
TableMapReduceUtil.addDependencyJars(job, HBaseStorageHandler.class);
{code}
That's it.

 Hive/HBase integration could be improved
 

 Key: HIVE-2379
 URL: https://issues.apache.org/jira/browse/HIVE-2379
 Project: Hive
  Issue Type: Bug
  Components: CLI, Clients, HBase Handler
Affects Versions: 0.7.1, 0.8.0, 0.9.0
Reporter: Roman Shaposhnik
Assignee: Navis
Priority: Critical
 Attachments: HIVE-2379.D7347.1.patch, HIVE-2379.D7347.2.patch


 For now any Hive/HBase queries would require the following jars to be 
 explicitly added via hive's add jar command:
 add jar /usr/lib/hive/lib/hbase-0.90.1-cdh3u0.jar;
 add jar /usr/lib/hive/lib/hive-hbase-handler-0.7.0-cdh3u0.jar;
 add jar /usr/lib/hive/lib/zookeeper-3.3.1.jar;
 add jar /usr/lib/hive/lib/guava-r06.jar;
 the longer-term solution, perhaps, should be to have the code at submit time 
 call hbase's 
 TableMapReduceUtil.addDependencyJars(job, HBaseStorageHandler.class) to ship 
 them in the distributed cache.



[jira] [Commented] (HIVE-3861) Upgrade hbase dependency to 0.94

2013-04-21 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637590#comment-13637590
 ] 

Ashutosh Chauhan commented on HIVE-3861:


+1 will commit if tests pass.

 Upgrade hbase dependency to 0.94
 

 Key: HIVE-3861
 URL: https://issues.apache.org/jira/browse/HIVE-3861
 Project: Hive
  Issue Type: Bug
  Components: HBase Handler
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Attachments: HIVE-3861.2.patch, HIVE-3861.3.patch, HIVE-3861.4.patch, 
 HIVE-3861.patch


 Hive tests fail to run against hbase v0.94.2. Proposing to upgrade the 
 dependency and change the test setup to properly work with the newer version.



[jira] [Commented] (HIVE-4106) SMB joins fail in multi-way joins

2013-04-21 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637591#comment-13637591
 ] 

Ashutosh Chauhan commented on HIVE-4106:


[~navis] I didn't get you. Do you mean this jira is a non-issue after your 
latest patch on HIVE-4371?


 SMB joins fail in multi-way joins
 -

 Key: HIVE-4106
 URL: https://issues.apache.org/jira/browse/HIVE-4106
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Vikram Dixit K
Assignee: Namit Jain
Priority: Blocker
 Attachments: auto_sortmerge_join_12.q, hive.4106.1.patch, 
 hive.4106.2.patch, HIVE-4106.patch


 I see an array out of bounds exception in the case of multi-way SMB joins. This is 
 related to changes that went in as part of HIVE-3403. This issue has been 
 discussed in HIVE-3891.



[jira] [Commented] (HIVE-4365) wrong result in left semi join

2013-04-21 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637596#comment-13637596
 ] 

Phabricator commented on HIVE-4365:
---

ashutoshc has accepted the revision HIVE-4365 [jira] wrong result in left semi 
join.

  +1 will commit if tests pass.

REVISION DETAIL
  https://reviews.facebook.net/D10341

BRANCH
  HIVE-4365

ARCANIST PROJECT
  hive

To: JIRA, ashutoshc, navis


 wrong result in left semi join
 --

 Key: HIVE-4365
 URL: https://issues.apache.org/jira/browse/HIVE-4365
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.9.0, 0.10.0
Reporter: ransom.hezhiqiang
Assignee: Navis
 Attachments: HIVE-4365.D10341.1.patch, HIVE-4365.D10341.2.patch


 wrong result in left semi join when hive.optimize.ppd=true
 for example:
 1. create tables
create table t1(c1 int, c2 int, c3 int, c4 int, c5 double, c6 int, c7 string) 
   row format DELIMITED FIELDS TERMINATED BY '|';
create table t2(c1 int);
 2. load data
 load data local inpath '/home/test/t1.txt' OVERWRITE into table t1;
 load data local inpath '/home/test/t2.txt' OVERWRITE into table t2;
 t1 data:
 1|3|10003|52|781.96|555|201203
 1|3|10003|39|782.96|555|201203
 1|3|10003|87|783.96|555|201203
 2|5|10004|24|789.96|555|201203
 2|5|10004|58|788.96|555|201203
 t2 data:
 555
 3. execute queries
 select t1.c1,t1.c2,t1.c3,t1.c4,t1.c5,t1.c6,t1.c7 from t1 left semi join t2 
 on t1.c6 = t2.c1 and t1.c1 = '1' and t1.c7 = '201203';
 returns the expected rows.
 select t1.c1,t1.c2,t1.c3,t1.c4,t1.c5,t1.c6,t1.c7 from t1 left semi join t2 
 on t1.c6 = t2.c1 where t1.c1 = '1' and t1.c7 = '201203';
 returns no rows.
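The intended equivalence of the two queries can be sketched in plain Python: whether the single-table predicates are evaluated together with the join key (the ON form) or after the semi join (the WHERE form), the same t1 rows should survive. This is only an illustration of the expected semantics using the sample data above, not Hive's implementation; Hive's implicit string/int casts are ignored.

```python
# Left semi join keeps t1 rows that have at least one match in t2.
# Columns: t1 = (c1, c2, c3, c4, c5, c6, c7), t2 = (c1,)
t1 = [
    (1, 3, 10003, 52, 781.96, 555, "201203"),
    (1, 3, 10003, 39, 782.96, 555, "201203"),
    (1, 3, 10003, 87, 783.96, 555, "201203"),
    (2, 5, 10004, 24, 789.96, 555, "201203"),
    (2, 5, 10004, 58, 788.96, 555, "201203"),
]
t2 = [(555,)]

def semi_join_on(t1, t2):
    # Extra predicates evaluated together with the join key (ON clause form).
    return [r for r in t1
            if any(r[5] == s[0] and r[0] == 1 and r[6] == "201203" for s in t2)]

def semi_join_where(t1, t2):
    # Semi join on the key first, then filter the surviving rows (WHERE form).
    joined = [r for r in t1 if any(r[5] == s[0] for s in t2)]
    return [r for r in joined if r[0] == 1 and r[6] == "201203"]

print(semi_join_on(t1, t2) == semi_join_where(t1, t2))  # True: both keep the three c1=1 rows
```

The bug reported here is that with hive.optimize.ppd=true the WHERE form returned nothing, breaking this equivalence.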



[jira] [Commented] (HIVE-4295) Lateral view makes invalid result if CP is disabled

2013-04-21 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637599#comment-13637599
 ] 

Phabricator commented on HIVE-4295:
---

ashutoshc has accepted the revision HIVE-4295 [jira] Lateral view makes 
invalid result if CP is disabled.

  +1 will commit if tests pass.

REVISION DETAIL
  https://reviews.facebook.net/D9963

BRANCH
  HIVE-4295

ARCANIST PROJECT
  hive

To: JIRA, ashutoshc, navis


 Lateral view makes invalid result if CP is disabled
 ---

 Key: HIVE-4295
 URL: https://issues.apache.org/jira/browse/HIVE-4295
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: HIVE-4295.2.patch.txt, HIVE-4295.D9963.1.patch


 For example,
 {noformat}
 SELECT src.key, myKey, myVal FROM src lateral view 
 explode(map(1,'one',2,'two',3,'three')) x AS myKey,myVal LIMIT 3;
 238   1   one
 238   2   two
 238   3   three
 {noformat}
 After CP disabled,
 {noformat}
 SELECT src.key, myKey, myVal FROM src lateral view 
 explode(map(1,'one',2,'two',3,'three')) x AS myKey,myVal LIMIT 3;
 238   0   hdfs://localhost:9000/user/hive/warehouse/src/kv1.txt
 238   0   hdfs://localhost:9000/user/hive/warehouse/src/kv1.txt
 238   0   hdfs://localhost:9000/user/hive/warehouse/src/kv1.txt
 {noformat}
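For reference, the expected behavior of lateral view with explode on a map is a cross of each source row with the map's entries; the CP-disabled output above instead leaks file offsets and paths. A plain Python sketch of the correct semantics (illustrative only, not Hive's implementation):

```python
# Mirrors: SELECT src.key, myKey, myVal FROM src LATERAL VIEW
#          explode(map(1,'one',2,'two',3,'three')) x AS myKey, myVal LIMIT 3
src = [{"key": 238, "value": "val_238"}]
exploded = {1: "one", 2: "two", 3: "three"}

# One output row per (source row, map entry) pair, then apply the LIMIT.
rows = [(r["key"], k, v) for r in src for k, v in exploded.items()][:3]
print(rows)  # [(238, 1, 'one'), (238, 2, 'two'), (238, 3, 'three')]
```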



[jira] [Created] (HIVE-4389) thrift files are re-generated by compiling

2013-04-21 Thread Namit Jain (JIRA)
Namit Jain created HIVE-4389:


 Summary: thrift files are re-generated by compiling
 Key: HIVE-4389
 URL: https://issues.apache.org/jira/browse/HIVE-4389
 Project: Hive
  Issue Type: Bug
Reporter: Namit Jain
Assignee: Namit Jain


I am not sure what is going on, but there seem to be a bunch of thrift changes
when I run ant thriftif.




[jira] [Commented] (HIVE-4389) thrift files are re-generated by compiling

2013-04-21 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637609#comment-13637609
 ] 

Namit Jain commented on HIVE-4389:
--

https://reviews.facebook.net/D10413

 thrift files are re-generated by compiling
 --

 Key: HIVE-4389
 URL: https://issues.apache.org/jira/browse/HIVE-4389
 Project: Hive
  Issue Type: Bug
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.4389.1.patch


 I am not sure what is going on, but there seems to be a bunch of thrift 
 changes
 if I perform ant thriftif.



[jira] [Updated] (HIVE-4389) thrift files are re-generated by compiling

2013-04-21 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-4389:
-

Attachment: hive.4389.1.patch

 thrift files are re-generated by compiling
 --

 Key: HIVE-4389
 URL: https://issues.apache.org/jira/browse/HIVE-4389
 Project: Hive
  Issue Type: Bug
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.4389.1.patch


 I am not sure what is going on, but there seems to be a bunch of thrift 
 changes
 if I perform ant thriftif.



[jira] [Resolved] (HIVE-4389) thrift files are re-generated by compiling

2013-04-21 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan resolved HIVE-4389.


Resolution: Duplicate

Duplicate of HIVE-4300.
[~namit] Can you review that?

 thrift files are re-generated by compiling
 --

 Key: HIVE-4389
 URL: https://issues.apache.org/jira/browse/HIVE-4389
 Project: Hive
  Issue Type: Bug
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.4389.1.patch


 I am not sure what is going on, but there seems to be a bunch of thrift 
 changes
 if I perform ant thriftif.



[jira] [Created] (HIVE-4390) Enable capturing input URI entities for DML statements

2013-04-21 Thread Prasad Mujumdar (JIRA)
Prasad Mujumdar created HIVE-4390:
-

 Summary: Enable capturing input URI entities for DML statements
 Key: HIVE-4390
 URL: https://issues.apache.org/jira/browse/HIVE-4390
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.10.0
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar


The query compiler doesn't capture the files or directories accessed by the 
following statements:
 * Load data
 * Export
 * Import
 * Alter table/partition set location
This information is very useful to access from the hooks for monitoring, 
auditing, etc.



[jira] [Updated] (HIVE-4390) Enable capturing input URI entities for DML statements

2013-04-21 Thread Prasad Mujumdar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasad Mujumdar updated HIVE-4390:
--

Attachment: HIVE-4390-2.patch

Review request on https://reviews.facebook.net/D10419

 Enable capturing input URI entities for DML statements
 --

 Key: HIVE-4390
 URL: https://issues.apache.org/jira/browse/HIVE-4390
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.10.0
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar
 Attachments: HIVE-4390-2.patch


 The query compiler doesn't capture the files or directories accessed by 
 following statements -
  * Load data
  * Export
  * Import
  * Alter table/partition set location
 This is very useful information to access from the hooks for 
 monitoring/auditing etc.



[jira] [Updated] (HIVE-4390) Enable capturing input URI entities for DML statements

2013-04-21 Thread Prasad Mujumdar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasad Mujumdar updated HIVE-4390:
--

Status: Patch Available  (was: Open)

The patch enables capturing read entities for external file/dir locations. To 
maintain backward compatibility with existing hooks, this is gated behind a 
config setting that is disabled by default.

 Enable capturing input URI entities for DML statements
 --

 Key: HIVE-4390
 URL: https://issues.apache.org/jira/browse/HIVE-4390
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.10.0
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar
 Attachments: HIVE-4390-2.patch


 The query compiler doesn't capture the files or directories accessed by 
 following statements -
  * Load data
  * Export
  * Import
  * Alter table/partition set location
 This is very useful information to access from the hooks for 
 monitoring/auditing etc.



[jira] [Commented] (HIVE-4305) Use a single system for dependency resolution

2013-04-21 Thread Carl Steinbach (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637633#comment-13637633
 ] 

Carl Steinbach commented on HIVE-4305:
--

bq. It contains native executables.

So does Hive. Please look around.

bq. It contains native libraries.

So does Hive. Please look around.

bq. It contains jni libraries.

I'm not sure what makes this different from the previous two cases, but will 
admit that you've got me there.

bq. Moving to Maven would be making it better in the opinion of the majority of 
the development community.

So you have already conducted a poll? How large was your sample size and what 
are the margins of error? Can you please publish the questions that were used? 
Thanks.

bq. Certainly, I believe it is possible to make things worse with Maven.

How do we know that isn't going to happen here?

bq. I'm not a fan of how the Hadoop mavenization was done...

In your opinion what did they do wrong? It's starting to sound like there's 
more than a little room for error when transitioning a project over to Maven. 
Maybe the Maven community would benefit from having a best practices document 
that highlighted these potential pitfalls.


bq. ... and I deeply regret not taking the time to make it better as it went in.

It's not too late to fix it, right? You're a committer on that project, which 
along with your extensive Maven knowledge puts you in an excellent position to 
lead this effort.

bq. ... but it was still better than the ant + ivy + maven ant tasks that we 
had ...

I think part of the problem with the original build was that you were using 
three components (ant, ivy, maven ant tasks), when you only needed to use two 
(ant, ivy). Our build, especially post-HCatalog merge, suffers from this same 
problem. This ticket was originally filed to track the task of removing either 
Ivy or maven ant tasks, but was quickly hijacked by people intent on replacing 
everything with Maven. Owen, would it be OK with you if we return to the 
original focus of this ticket and resolve it before again considering your 
proposal? This will also give you time to strengthen your argument by fixing 
the bugs and design defects in Hadoop's Maven build.

bq. If it hadn't been, it would have been rejected.

I hope you're joking.

bq. That said, in my experience most projects are better off with Maven builds 
than ant + ivy + maven ant tasks.

What characteristics define a project that falls into the latter category, or 
did you mean to say *all* instead of *most*?

 Use a single system for dependency resolution
 -

 Key: HIVE-4305
 URL: https://issues.apache.org/jira/browse/HIVE-4305
 Project: Hive
  Issue Type: Improvement
  Components: Build Infrastructure, HCatalog
Reporter: Travis Crawford
Assignee: Carl Steinbach

 Both Hive and HCatalog use ant as their build tool. However, Hive uses ivy 
 for dependency resolution while HCatalog uses maven-ant-tasks. With the 
 project merge we should converge on a single tool for dependency resolution.



[jira] [Created] (HIVE-4391) Windowing function doesn't work in conjunction with CTAS

2013-04-21 Thread Ashutosh Chauhan (JIRA)
Ashutosh Chauhan created HIVE-4391:
--

 Summary: Windowing function doesn't work in conjunction with CTAS
 Key: HIVE-4391
 URL: https://issues.apache.org/jira/browse/HIVE-4391
 Project: Hive
  Issue Type: Bug
  Components: PTF-Windowing
Affects Versions: 0.11.0
Reporter: Ashutosh Chauhan


A simple CTAS query like the following fails
 create table t3 as select *, rank() over() as rr from t2;
with the exception 
FAILED: Error in metadata: InvalidObjectException(message:t3 is not a valid 
object name)





[jira] [Commented] (HIVE-4391) Windowing function doesn't work in conjunction with CTAS

2013-04-21 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637661#comment-13637661
 ] 

Ashutosh Chauhan commented on HIVE-4391:


whereas the following non-windowing functions with CTAS succeed:
create table t3 as select *, if(c1>10,10,0) from t2; 
create table t4 as select *, c1*2 from t2;  

 Windowing function doesn't work in conjunction with CTAS
 

 Key: HIVE-4391
 URL: https://issues.apache.org/jira/browse/HIVE-4391
 Project: Hive
  Issue Type: Bug
  Components: PTF-Windowing
Affects Versions: 0.11.0
Reporter: Ashutosh Chauhan

 Simple CTAS query like following fails
  create table t3 as select *, rank() over() as rr from t2;
 with exception 
 FAILED: Error in metadata: InvalidObjectException(message:t3 is not a valid 
 object name)



[jira] [Updated] (HIVE-3952) merge map-job followed by map-reduce job

2013-04-21 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HIVE-3952:
--

Attachment: HIVE-3952-20130421.txt

Thanks for the info, Ashutosh.

Attaching an updated patch against the latest trunk; it also fixes the 
offending test-related issues. The latest patch is also on review-board. Tx.

 merge map-job followed by map-reduce job
 

 Key: HIVE-3952
 URL: https://issues.apache.org/jira/browse/HIVE-3952
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Namit Jain
Assignee: Vinod Kumar Vavilapalli
 Attachments: HIVE-3952-20130226.txt, HIVE-3952-20130227.1.txt, 
 HIVE-3952-20130301.txt, HIVE-3952-20130421.txt


 Consider the query like:
 select count(*) FROM
 ( select idOne, idTwo, value FROM
     bigTable
     JOIN
     smallTableOne on (bigTable.idOne = smallTableOne.idOne)
 ) firstjoin
 JOIN
 smallTableTwo on (firstjoin.idTwo = smallTableTwo.idTwo);
 where smallTableOne and smallTableTwo are smaller than 
 hive.auto.convert.join.noconditionaltask.size and
 hive.auto.convert.join.noconditionaltask is set to true.
 The joins are collapsed into mapjoins, and it leads to a map-only job
 (for the map-joins) followed by a map-reduce job (for the group by).
 Ideally, the map-only job should be merged with the following map-reduce job.



[jira] [Updated] (HIVE-3952) merge map-job followed by map-reduce job

2013-04-21 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HIVE-3952:
--

Status: Patch Available  (was: Open)

 merge map-job followed by map-reduce job
 

 Key: HIVE-3952
 URL: https://issues.apache.org/jira/browse/HIVE-3952
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Namit Jain
Assignee: Vinod Kumar Vavilapalli
 Attachments: HIVE-3952-20130226.txt, HIVE-3952-20130227.1.txt, 
 HIVE-3952-20130301.txt, HIVE-3952-20130421.txt


 Consider the query like:
 select count(*) FROM
 ( select idOne, idTwo, value FROM
     bigTable
     JOIN
     smallTableOne on (bigTable.idOne = smallTableOne.idOne)
 ) firstjoin
 JOIN
 smallTableTwo on (firstjoin.idTwo = smallTableTwo.idTwo);
 where smallTableOne and smallTableTwo are smaller than 
 hive.auto.convert.join.noconditionaltask.size and
 hive.auto.convert.join.noconditionaltask is set to true.
 The joins are collapsed into mapjoins, and it leads to a map-only job
 (for the map-joins) followed by a map-reduce job (for the group by).
 Ideally, the map-only job should be merged with the following map-reduce job.



[jira] [Commented] (HIVE-2379) Hive/HBase integration could be improved

2013-04-21 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637682#comment-13637682
 ] 

Nick Dimiduk commented on HIVE-2379:


Nit: the version that accepts a {{Job}} as an argument is the one you want. The 
other one is {{Configuration, Class...}}. Because this is confusing and 
annoying, I opened HBASE-8386.

 Hive/HBase integration could be improved
 

 Key: HIVE-2379
 URL: https://issues.apache.org/jira/browse/HIVE-2379
 Project: Hive
  Issue Type: Bug
  Components: CLI, Clients, HBase Handler
Affects Versions: 0.7.1, 0.8.0, 0.9.0
Reporter: Roman Shaposhnik
Assignee: Navis
Priority: Critical
 Attachments: HIVE-2379.D7347.1.patch, HIVE-2379.D7347.2.patch


 For now any Hive/HBase queries would require the following jars to be 
 explicitly added via hive's add jar command:
 add jar /usr/lib/hive/lib/hbase-0.90.1-cdh3u0.jar;
 add jar /usr/lib/hive/lib/hive-hbase-handler-0.7.0-cdh3u0.jar;
 add jar /usr/lib/hive/lib/zookeeper-3.3.1.jar;
 add jar /usr/lib/hive/lib/guava-r06.jar;
 the longer-term solution, perhaps, should be to have the code at submit time 
 call hbase's 
 TableMapReduceUtil.addDependencyJars(job, HBaseStorageHandler.class) to ship 
 them in the distributed cache.



[jira] [Commented] (HIVE-2055) Hive HBase Integration issue

2013-04-21 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637685#comment-13637685
 ] 

Nick Dimiduk commented on HIVE-2055:


[~ashutoshc] Just as with PIG-2786 vs PIG-3285, there are two separate issues. 
The former is having HBase jars on the classpath for bin/hive invocations. The 
latter is for shipping the dependencies to the cluster for MR jobs. The former 
is also effectively identical to HCATALOG-621 in that this is necessary for DDL 
operations. This patch resolves the former. The latter must be resolved with a 
code change, as is underway in HIVE-2379.

 Hive HBase Integration issue
 

 Key: HIVE-2055
 URL: https://issues.apache.org/jira/browse/HIVE-2055
 Project: Hive
  Issue Type: Bug
  Components: HBase Handler
Reporter: sajith v
 Attachments: HIVE-2055.patch


 Created an external table in Hive which points to an HBase table. When I 
 tried to query a column using the column name in the select clause, I got the 
 following exception: java.lang.ClassNotFoundException: 
 org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat (errorCode:12, 
 SQLState:42000)



[jira] [Commented] (HIVE-4103) Remove System.gc() call from the map-join local-task loop

2013-04-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637709#comment-13637709
 ] 

Hudson commented on HIVE-4103:
--

Integrated in Hive-trunk-h0.21 #2073 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2073/])
HIVE-4103 : Remove System.gc() call from the map-join local-task loop 
(Gopal V via Ashutosh Chauhan) (Revision 1470227)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1470227
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/HashMapWrapper.java


 Remove System.gc() call from the map-join local-task loop
 -

 Key: HIVE-4103
 URL: https://issues.apache.org/jira/browse/HIVE-4103
 Project: Hive
  Issue Type: Bug
Reporter: Gopal V
Assignee: Gopal V
Priority: Minor
 Fix For: 0.12.0

 Attachments: HIVE-4103.patch


 Hive's HashMapWrapper calls System.gc() twice within the 
 HashMapWrapper::isAbort() which produces a significant slow-down during the 
 loop.
 {code}
 2013-03-01 04:54:28 The gc calls took 677 ms
 2013-03-01 04:54:28 Processing rows:20  Hashtable size: 
 19  Memory usage:   62955432  rate:   0.033
 2013-03-01 04:54:31 The gc calls took 956 ms
 2013-03-01 04:54:31 Processing rows:30  Hashtable size: 
 29  Memory usage:   90826656  rate:   0.048
 2013-03-01 04:54:33 The gc calls took 967 ms
 2013-03-01 04:54:33 Processing rows:384160  Hashtable size: 
 384160  Memory usage:   114412712   rate:   0.06
 {code}
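The slowdown comes from forcing a full collection on every periodic memory check. A cheaper check reads the heap counters from Runtime directly. The sketch below is illustrative only, not Hive's actual HashMapWrapper code, and the threshold constant is a hypothetical stand-in for hive.mapjoin.localtask.max.memory.usage:

```java
// Illustrative sketch -- not Hive's actual HashMapWrapper implementation.
// Estimates heap usage from Runtime counters instead of calling System.gc(),
// so the per-batch abort check stays cheap inside the load loop.
class MemoryCheck {
    // Hypothetical threshold, standing in for
    // hive.mapjoin.localtask.max.memory.usage.
    static final double MAX_MEMORY_USAGE = 0.90;

    // Returns true when the local task should abort loading the hash table.
    static boolean isAbort(Runtime rt) {
        long used = rt.totalMemory() - rt.freeMemory();
        double rate = (double) used / rt.maxMemory();
        return rate > MAX_MEMORY_USAGE;
    }
}
```

The counters are approximate between collections, but the check runs every N rows anyway, so an occasional overestimate only triggers a slightly early abort rather than a near-second pause per batch.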

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4248) Implement a memory manager for ORC

2013-04-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637710#comment-13637710
 ] 

Hudson commented on HIVE-4248:
--

Integrated in Hive-trunk-h0.21 #2073 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2073/])
HIVE-4248 : Implement a memory manager for ORC (Owen Omalley via Ashutosh 
Chauhan) (Revision 1470249)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1470249
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/MemoryManager.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcFile.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcOutputFormat.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/WriterImpl.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestFileDump.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcFile.java


 Implement a memory manager for ORC
 --

 Key: HIVE-4248
 URL: https://issues.apache.org/jira/browse/HIVE-4248
 Project: Hive
  Issue Type: New Feature
  Components: Serializers/Deserializers
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 0.12.0

 Attachments: HIVE-4248.D9993.1.patch, HIVE-4248.D9993.2.patch, 
 HIVE-4248.D9993.4.patch


 With the large default stripe size (256MB) and dynamic partitions, it is 
 quite easy for users to run out of memory when writing ORC files. We probably 
 need a solution that keeps track of the total number of concurrent ORC 
 writers and divides the available heap space between them. 
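The idea of dividing the heap between concurrent writers can be sketched as follows. The names here are hypothetical, and the MemoryManager committed in the patch is more involved (it also adjusts how often each writer re-checks its budget):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the core idea: track every open ORC writer and give
// each an equal slice of one fixed memory pool, so total write buffering stays
// bounded no matter how many dynamic-partition writers are open at once.
class OrcMemoryPool {
    private final long totalPoolBytes;
    private final Set<String> openWriters = new HashSet<>();

    OrcMemoryPool(long totalPoolBytes) {
        this.totalPoolBytes = totalPoolBytes;
    }

    void addWriter(String path)    { openWriters.add(path); }
    void removeWriter(String path) { openWriters.remove(path); }

    // Bytes each open writer may buffer before it must flush a stripe.
    long allocationPerWriter() {
        int n = openWriters.size();
        return n == 0 ? totalPoolBytes : totalPoolBytes / n;
    }
}
```

With a 512MB pool, two open writers each get a 256MB budget; a third writer shrinks every share to about 170MB, which is what keeps the large default stripe size from exhausting the heap.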

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Hive-trunk-h0.21 - Build # 2073 - Failure

2013-04-21 Thread Apache Jenkins Server
Changes for Build #2039
[hashutosh] HIVE-3850 : hour() function returns 12 hour clock value when using 
timestamp datatype (Anandha and Franklin via Ashutosh Chauhan)


Changes for Build #2040
[hashutosh] HIVE-3951 : Allow Decimal type columns in Regex Serde (Mark Grover 
via Ashutosh Chauhan)

[namit] HIVE-4270 bug in hive.map.groupby.sorted in the presence of multiple 
input partitions
(Namit via Gang Tim Liu)


Changes for Build #2041

Changes for Build #2042

Changes for Build #2043
[hashutosh] HIVE-4252 : hiveserver2 string representation of complex types are 
inconsistent with cli (Thejas Nair via Ashutosh Chauhan)

[hashutosh] HIVE-4179 : NonBlockingOpDeDup does not merge SEL operators 
correctly (Gunther Hagleitner via Ashutosh Chauhan)

[hashutosh] HIVE-4269 : fix handling of binary type in hiveserver2, jdbc driver 
(Thejas Nair via Ashutosh Chauhan)

[namit] HIVE-4174 Round UDF converts BigInts to double
(Chen Chun via namit)

[namit] HIVE-4240 optimize hive.enforce.bucketing and hive.enforce sorting 
insert
(Gang Tim Liu via namit)

[navis] HIVE-4288 Add IntelliJ project files files to .gitignore (Roshan Naik 
via Navis)


Changes for Build #2044
[namit] HIVE-4289 HCatalog build fails when behind a firewall
(Samuel Yuan via namit)

[namit] HIVE-4281 add hive.map.groupby.sorted.testmode
(Namit via Gang Tim Liu)

[hashutosh] Moving hcatalog site outside of trunk

[hashutosh] Moving hcatalog branches outside of trunk

[hashutosh] HIVE-4259 : SEL operator created with missing columnExprMap for 
unions (Gunther Hagleitner via Ashutosh Chauhan)

[hashutosh] HIVE-4156 : need to add protobuf classes to hive-exec.jar (Owen 
Omalley via Ashutosh Chauhan)

[hashutosh] HIVE-3464 : Merging join tree may reorder joins which could be 
invalid (Navis via Ashutosh Chauhan)

[hashutosh] HIVE-4138 : ORC's union object inspector returns a type name that 
isn't parseable by TypeInfoUtils (Owen Omalley via Ashutosh Chauhan)

[cws] HIVE-4119. ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails with 
NPE if the table is empty (Shreepadma Venugopalan via cws)


Changes for Build #2045

Changes for Build #2046
[hashutosh] HIVE-4067 : Followup to HIVE-701: reduce ambiguity in grammar 
(Samuel Yuan via Ashutosh Chauhan)


Changes for Build #2047

Changes for Build #2048
[gangtimliu] HIVE-4298: add tests for distincts for hive.map.groutp.sorted. 
(Namit via Gang Tim Liu)

[hashutosh] HIVE-4128 : Support avg(decimal) (Brock Noland via Ashutosh Chauhan)

[kevinwilfong] HIVE-4151. HiveProfiler NPE with ScriptOperator. (Pamela Vagata 
via kevinwilfong)


Changes for Build #2049
[hashutosh] HIVE-3985 : Update new UDAFs introduced for Windowing to work with 
new Decimal Type (Brock Noland via Ashutosh Chauhan)

[hashutosh] HIVE-3840 : hive cli null representation in output is inconsistent 
(Thejas Nair via Ashutosh Chauhan)

[hashutosh] HIVE-4262 : fix last_value UDAF behavior (Harish Butani via 
Ashutosh Chauhan)

[hashutosh] HIVE-4292 : hiveserver2 should support -hiveconf commandline 
parameter (Thejas Nair via Ashutosh Chauhan)


Changes for Build #2050
[hashutosh] HIVE-3908 : create view statement's outputs contains the view and a 
temporary dir. (Prasad Mujumdar via Ashutosh Chauhan)

[hashutosh] HIVE-4302 : Fix how RowSchema and RowResolver are set on 
ReduceSinkOp that precedes PTFOp (Harish Butani via Ashutosh Chauhan)


Changes for Build #2051
[hashutosh] HIVE-3992 : Hive RCFile::sync(long) does a sub-sequence linear 
search for sync blocks (Gopal V via Ashutosh Chauhan)


Changes for Build #2052

Changes for Build #2053
[navis] Missing test results from HIVE-1953 (Vikram Dixit K via Navis)

[namit] HIVE-4314 Result of mapjoin_test_outer.q is not deterministic
(Navis via namit)

[navis] HIVE-1953 Hive should process comments in CliDriver (Vikram Dixit K via 
Navis)

[navis] HIVE-3308 Mixing avro and snappy gives null values (Bennie Schut via 
Navis)

[hashutosh] HIVE-4311 : DOS line endings in auto_join26.q (Gunther Hagleitner 
via Ashutosh Chauhan)

[hashutosh] HIVE-2340 : optimize orderby followed by a groupby (Navis via 
Ashutosh Chauhan)


Changes for Build #2054
[khorgath] HCATALOG-632 Fixing ORC File usage with HCatalog


Changes for Build #2055
[hashutosh] HIVE-4107 : Update Hive 0.10.0 RELEASE_NOTES.txt (Thejas Nair via 
Ashutosh Chauhan)

[hashutosh] HIVE-4271 : Limit precision of decimal type (Gunther Hagleitner via 
Ashutosh Chauhan)

[hashutosh] HIVE-4319 : Revert changes checked-in as part of 1953 (Vikram Dixit 
via Ashutosh Chauhan)


Changes for Build #2056
[hashutosh] HIVE-4078 : Delay the serialize-deserialize pair in 
CommonJoinTaskDispatcher (Gopal V via Ashutosh Chauhan)

[gangtimliu] HIVE-4337: Update list bucketing test results (Samuel Yuan via 
Gang Tim Liu)

[hashutosh] HIVE-4306 : PTFDeserializer should reconstruct OIs based on InputOI 
passed to PTFOperator (Harish Butani and Prajakta Kalmegh via Ashutosh Chauhan)

[hashutosh] HIVE-4334 : ctas test on hadoop 2 has outdated golden file 

[jira] [Created] (HIVE-4392) Illogical InvalidObjectException throwed when use mulit aggregate functions with star columns

2013-04-21 Thread caofangkun (JIRA)
caofangkun created HIVE-4392:


 Summary: Illogical InvalidObjectException throwed when use mulit 
aggregate functions with star columns 
 Key: HIVE-4392
 URL: https://issues.apache.org/jira/browse/HIVE-4392
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: caofangkun
Priority: Minor



For Example:

hive (default)> create table liza_1 as 
   select *, sum(key), sum(value) 
   from new_src;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201304191025_0003, Tracking URL = 
http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0003
Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job  -kill 
job_201304191025_0003
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 1
2013-04-22 11:09:28,017 Stage-1 map = 0%,  reduce = 0%
2013-04-22 11:09:34,054 Stage-1 map = 0%,  reduce = 100%
2013-04-22 11:09:37,074 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201304191025_0003
Moving data to: hdfs://hd17-vm5:9101/user/zongren/hive/liza_1
FAILED: Error in metadata: InvalidObjectException(message:liza_1 is not a valid 
object name)
FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.DDLTask
MapReduce Jobs Launched: 
Job 0: Reduce: 1   HDFS Read: 0 HDFS Write: 12 SUCCESS
Total MapReduce CPU Time Spent: 0 msec

hive (default)> create table liza_1 as 
   select *, sum(key), sum(value) 
   from new_src   
   group by key, value;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201304191025_0004, Tracking URL = 
http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0004
Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job  -kill 
job_201304191025_0004
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 1
2013-04-22 11:11:58,945 Stage-1 map = 0%,  reduce = 0%
2013-04-22 11:12:01,964 Stage-1 map = 0%,  reduce = 100%
2013-04-22 11:12:04,982 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201304191025_0004
Moving data to: hdfs://hd17-vm5:9101/user/zongren/hive/liza_1
FAILED: Error in metadata: InvalidObjectException(message:liza_1 is not a valid 
object name)
FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.DDLTask
MapReduce Jobs Launched: 
Job 0: Reduce: 1   HDFS Read: 0 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 0 msec

But the following two queries work:
hive (default)> create table liza_1 as select * from new_src;
Total MapReduce jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201304191025_0006, Tracking URL = 
http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0006
Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job  -kill 
job_201304191025_0006
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2013-04-22 11:15:00,681 Stage-1 map = 0%,  reduce = 0%
2013-04-22 11:15:03,697 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201304191025_0006
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to: 
hdfs://hd17-vm5:9101/user/zongren/hive-scratchdir/hive_2013-04-22_11-14-54_632_6709035018023861094/-ext-10001
Moving data to: hdfs://hd17-vm5:9101/user/zongren/hive/liza_1
Table default.liza_1 stats: [num_partitions: 0, num_files: 0, num_rows: 0, 
total_size: 0, raw_data_size: 0]
MapReduce Jobs Launched: 
Job 0:  HDFS Read: 0 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
Time taken: 9.576 seconds

hive (default)> create table liza_1 as
   select sum (key), sum(value) 
   from new_test;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = 

[jira] [Updated] (HIVE-4392) Illogical InvalidObjectException throwed when use mulit aggregate functions with star columns

2013-04-21 Thread caofangkun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

caofangkun updated HIVE-4392:
-

Description: 
For Example:

hive (default)> create table liza_1 as 
   select *, sum(key), sum(value) 
   from new_src;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201304191025_0003, Tracking URL = 
http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0003
Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job  -kill 
job_201304191025_0003
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 1
2013-04-22 11:09:28,017 Stage-1 map = 0%,  reduce = 0%
2013-04-22 11:09:34,054 Stage-1 map = 0%,  reduce = 100%
2013-04-22 11:09:37,074 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201304191025_0003
Moving data to: hdfs://hd17-vm5:9101/user/zongren/hive/liza_1
FAILED: Error in metadata: InvalidObjectException(message:liza_1 is not a valid 
object name)
FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.DDLTask
MapReduce Jobs Launched: 
Job 0: Reduce: 1   HDFS Read: 0 HDFS Write: 12 SUCCESS
Total MapReduce CPU Time Spent: 0 msec

hive (default)> create table liza_1 as 
   select *, sum(key), sum(value) 
   from new_src   
   group by key, value;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201304191025_0004, Tracking URL = 
http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0004
Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job  -kill 
job_201304191025_0004
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 1
2013-04-22 11:11:58,945 Stage-1 map = 0%,  reduce = 0%
2013-04-22 11:12:01,964 Stage-1 map = 0%,  reduce = 100%
2013-04-22 11:12:04,982 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201304191025_0004
Moving data to: hdfs://hd17-vm5:9101/user/zongren/hive/liza_1
FAILED: Error in metadata: InvalidObjectException(message:liza_1 is not a valid 
object name)
FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.DDLTask
MapReduce Jobs Launched: 
Job 0: Reduce: 1   HDFS Read: 0 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 0 msec

But the following two queries work:
hive (default)> create table liza_1 as select * from new_src;
Total MapReduce jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201304191025_0006, Tracking URL = 
http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0006
Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job  -kill 
job_201304191025_0006
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2013-04-22 11:15:00,681 Stage-1 map = 0%,  reduce = 0%
2013-04-22 11:15:03,697 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201304191025_0006
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to: 
hdfs://hd17-vm5:9101/user/zongren/hive-scratchdir/hive_2013-04-22_11-14-54_632_6709035018023861094/-ext-10001
Moving data to: hdfs://hd17-vm5:9101/user/zongren/hive/liza_1
Table default.liza_1 stats: [num_partitions: 0, num_files: 0, num_rows: 0, 
total_size: 0, raw_data_size: 0]
MapReduce Jobs Launched: 
Job 0:  HDFS Read: 0 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
Time taken: 9.576 seconds

hive (default)> create table liza_1 as
   select sum (key), sum(value) 
   from new_test;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201304191025_0008, Tracking URL = 
http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0008
Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job  -kill 
job_201304191025_0008
Hadoop job information for Stage-1: number 

[jira] [Created] (HIVE-4393) Make the deleteData flag accessable from DropTable/Partition events

2013-04-21 Thread Morgan Phillips (JIRA)
Morgan Phillips created HIVE-4393:
-

 Summary: Make the deleteData flag accessable from 
DropTable/Partition events
 Key: HIVE-4393
 URL: https://issues.apache.org/jira/browse/HIVE-4393
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Morgan Phillips
Assignee: Morgan Phillips
Priority: Minor


On occasion, due to some error during a drop, information is removed from the 
metastore but data, which should have been removed, remains intact on the DFS.  
In order to log such events via PreEvent and Event listeners a new method 
'getDeleteData' should be added to (Pre)DropPartitionEvent and 
(Pre)DropTableEvent which returns the deleteData flag's value.
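
A minimal sketch of the proposed accessor follows. The shape is hypothetical -- the real metastore events also carry the Table/Partition objects and a handler reference -- but it shows what listeners would read:

```java
// Hypothetical sketch of the change proposed here: the drop event retains the
// deleteData flag it was fired with, and listeners read it via getDeleteData().
class DropTableEvent {
    private final String tableName;
    private final boolean deleteData;

    DropTableEvent(String tableName, boolean deleteData) {
        this.tableName = tableName;
        this.deleteData = deleteData;
    }

    // Proposed accessor: true when the drop was meant to purge DFS data too.
    boolean getDeleteData() { return deleteData; }

    String getTableName() { return tableName; }
}
```

A listener could then flag drops where getDeleteData() returned true but the table's DFS path still exists, which is exactly the failure mode described above.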

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Work stopped] (HIVE-4393) Make the deleteData flag accessable from DropTable/Partition events

2013-04-21 Thread Morgan Phillips (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-4393 stopped by Morgan Phillips.

 Make the deleteData flag accessable from DropTable/Partition events
 ---

 Key: HIVE-4393
 URL: https://issues.apache.org/jira/browse/HIVE-4393
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Morgan Phillips
Assignee: Morgan Phillips
Priority: Minor

 On occasion, due to some error during a drop, information is removed from the 
 metastore but data, which should have been removed, remains intact on the 
 DFS.  In order to log such events via PreEvent and Event listeners a new 
 method 'getDeleteData' should be added to (Pre)DropPartitionEvent and 
 (Pre)DropTableEvent which returns the deleteData flag's value.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Work started] (HIVE-4393) Make the deleteData flag accessable from DropTable/Partition events

2013-04-21 Thread Morgan Phillips (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-4393 started by Morgan Phillips.

 Make the deleteData flag accessable from DropTable/Partition events
 ---

 Key: HIVE-4393
 URL: https://issues.apache.org/jira/browse/HIVE-4393
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Morgan Phillips
Assignee: Morgan Phillips
Priority: Minor

 On occasion, due to some error during a drop, information is removed from the 
 metastore but data, which should have been removed, remains intact on the 
 DFS.  In order to log such events via PreEvent and Event listeners a new 
 method 'getDeleteData' should be added to (Pre)DropPartitionEvent and 
 (Pre)DropTableEvent which returns the deleteData flag's value.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4300) ant thriftif generated code that is checkedin is not up-to-date

2013-04-21 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637766#comment-13637766
 ] 

Namit Jain commented on HIVE-4300:
--

[~roshan_naik], I ran all the tests for HIVE-4389 (all of them ran fine other 
than leadlag.q,
which is also failing on trunk, for which I have filed a jira 
There, I had just performed:

ant thriftif -Dthrift.home=/usr/local
on my mac -- no local changes.


 ant thriftif  generated code that is checkedin is not up-to-date
 

 Key: HIVE-4300
 URL: https://issues.apache.org/jira/browse/HIVE-4300
 Project: Hive
  Issue Type: Bug
  Components: Thrift API
Affects Versions: 0.10.0
Reporter: Roshan Naik
Assignee: Roshan Naik
 Attachments: HIVE-4300.2.patch, HIVE-4300.patch


 running 'ant thriftif -Dthrift.home=/usr/local'  on a freshly checkedout 
 trunk should be a no-op as per 
 [instructions|https://cwiki.apache.org/Hive/howtocontribute.html#HowToContribute-GeneratingThriftCode]
 However, this is not the case: some of the files seem to have been relocated, or 
 the classes in them are now in a different file.
 Below is the git status showing the state after the command is run:
 # On branch trunk
 # Changes not staged for commit:
 #   (use git add/rm file... to update what will be committed)
 #   (use git checkout -- file... to discard changes in working directory)
 #
 # modified:   build.properties
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Database.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/EnvironmentContext.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Index.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Partition.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PrincipalPrivilegeSet.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Schema.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SerDeInfo.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SkewedInfo.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/StorageDescriptor.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Table.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
 # deleted:metastore/src/gen/thrift/gen-php/ThriftHiveMetastore.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/ThriftHiveMetastore.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/hive_metastore_constants.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/hive_metastore_types.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore_constants.php
 # deleted:metastore/src/gen/thrift/gen-php/hive_metastore_types.php
 # modified:   
 metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore-remote
 # deleted:ql/src/gen/thrift/gen-php/queryplan/queryplan_types.php
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/test/InnerStruct.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/test/ThriftTestObj.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/Complex.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/IntString.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/MegaStruct.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/MiniStruct.java
 # deleted:serde/src/gen/thrift/gen-php/serde/serde_constants.php
 # deleted:serde/src/gen/thrift/gen-php/serde/serde_types.php
 # deleted:service/src/gen/thrift/gen-php/hive_service/ThriftHive.php
 # deleted:
 service/src/gen/thrift/gen-php/hive_service/hive_service_types.php
 # modified:   service/src/gen/thrift/gen-py/TCLIService/TCLIService-remote
 # modified:   service/src/gen/thrift/gen-py/hive_service/ThriftHive-remote
 #
 # Untracked files:
 #   (use git add file... to include in what will be committed)
 #
 # serde/src/gen/thrift/gen-cpp/complex_constants.cpp
 # serde/src/gen/thrift/gen-cpp/complex_constants.h
 # serde/src/gen/thrift/gen-cpp/complex_types.cpp
 # 

[jira] [Created] (HIVE-4394) test leadlag.q fails

2013-04-21 Thread Namit Jain (JIRA)
Namit Jain created HIVE-4394:


 Summary: test leadlag.q fails
 Key: HIVE-4394
 URL: https://issues.apache.org/jira/browse/HIVE-4394
 Project: Hive
  Issue Type: Bug
Reporter: Namit Jain


ant test -Dtestcase=TestCliDriver -Dqfile=leadlag.q fails.

cc [~rhbutani]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (HIVE-4300) ant thriftif generated code that is checkedin is not up-to-date

2013-04-21 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637766#comment-13637766
 ] 

Namit Jain edited comment on HIVE-4300 at 4/22/13 4:32 AM:
---

[~roshan_naik], I ran all the tests for HIVE-4389 (all of them ran fine other 
than leadlag.q,
which is also failing on trunk, for which I have filed a jira HIVE-4394.

There, I had just performed:

ant thriftif -Dthrift.home=/usr/local
on my mac -- no local changes.


  was (Author: namit):
[~roshan_naik], I ran all the tests for HIVE-4389 (all of them ran fine 
other than leadlag.q,
which is also failing on trunk, for which I have filed a jira 
There, I had just performed:

ant thriftif -Dthrift.home=/usr/local
on my mac -- no local changes.

  
 ant thriftif  generated code that is checkedin is not up-to-date
 

 Key: HIVE-4300
 URL: https://issues.apache.org/jira/browse/HIVE-4300
 Project: Hive
  Issue Type: Bug
  Components: Thrift API
Affects Versions: 0.10.0
Reporter: Roshan Naik
Assignee: Roshan Naik
 Attachments: HIVE-4300.2.patch, HIVE-4300.patch


 running 'ant thriftif -Dthrift.home=/usr/local'  on a freshly checkedout 
 trunk should be a no-op as per 
 [instructions|https://cwiki.apache.org/Hive/howtocontribute.html#HowToContribute-GeneratingThriftCode]
 However, this is not the case: some of the files seem to have been relocated, or 
 the classes in them are now in a different file.
 Below is the git status showing the state after the command is run:
 # On branch trunk
 # Changes not staged for commit:
 #   (use git add/rm file... to update what will be committed)
 #   (use git checkout -- file... to discard changes in working directory)
 #
 # modified:   build.properties
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Database.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/EnvironmentContext.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Index.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Partition.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PrincipalPrivilegeSet.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Schema.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SerDeInfo.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SkewedInfo.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/StorageDescriptor.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Table.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
 # deleted:metastore/src/gen/thrift/gen-php/ThriftHiveMetastore.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/ThriftHiveMetastore.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/hive_metastore_constants.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/hive_metastore_types.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore_constants.php
 # deleted:metastore/src/gen/thrift/gen-php/hive_metastore_types.php
 # modified:   
 metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore-remote
 # deleted:ql/src/gen/thrift/gen-php/queryplan/queryplan_types.php
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/test/InnerStruct.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/test/ThriftTestObj.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/Complex.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/IntString.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/MegaStruct.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/MiniStruct.java
 # deleted:serde/src/gen/thrift/gen-php/serde/serde_constants.php
 # deleted:serde/src/gen/thrift/gen-php/serde/serde_types.php
 # deleted:service/src/gen/thrift/gen-php/hive_service/ThriftHive.php
 # deleted:
 service/src/gen/thrift/gen-php/hive_service/hive_service_types.php
 # modified:   service/src/gen/thrift/gen-py/TCLIService/TCLIService-remote
 

[jira] [Commented] (HIVE-4300) ant thriftif generated code that is checkedin is not up-to-date

2013-04-21 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637767#comment-13637767
 ] 

Namit Jain commented on HIVE-4300:
--

Did you do anything differently?

 ant thriftif  generated code that is checkedin is not up-to-date
 

 Key: HIVE-4300
 URL: https://issues.apache.org/jira/browse/HIVE-4300
 Project: Hive
  Issue Type: Bug
  Components: Thrift API
Affects Versions: 0.10.0
Reporter: Roshan Naik
Assignee: Roshan Naik
 Attachments: HIVE-4300.2.patch, HIVE-4300.patch


 running 'ant thriftif -Dthrift.home=/usr/local' on a freshly checked-out 
 trunk should be a no-op, as per the 
 [instructions|https://cwiki.apache.org/Hive/howtocontribute.html#HowToContribute-GeneratingThriftCode]
 However, this is not the case. Some of the files seem to have been relocated, or 
 the classes in them are now in a different file.
 Below is the git status showing the state after the command is run:
 # On branch trunk
 # Changes not staged for commit:
 #   (use "git add/rm <file>..." to update what will be committed)
 #   (use "git checkout -- <file>..." to discard changes in working directory)
 #
 # modified:   build.properties
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Database.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/EnvironmentContext.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Index.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Partition.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PrincipalPrivilegeSet.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Schema.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SerDeInfo.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SkewedInfo.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/StorageDescriptor.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Table.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
 # deleted:    metastore/src/gen/thrift/gen-php/ThriftHiveMetastore.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/ThriftHiveMetastore.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/hive_metastore_constants.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/hive_metastore_types.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore_constants.php
 # deleted:    metastore/src/gen/thrift/gen-php/hive_metastore_types.php
 # modified:   
 metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore-remote
 # deleted:    ql/src/gen/thrift/gen-php/queryplan/queryplan_types.php
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/test/InnerStruct.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/test/ThriftTestObj.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/Complex.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/IntString.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/MegaStruct.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/MiniStruct.java
 # deleted:    serde/src/gen/thrift/gen-php/serde/serde_constants.php
 # deleted:    serde/src/gen/thrift/gen-php/serde/serde_types.php
 # deleted:    service/src/gen/thrift/gen-php/hive_service/ThriftHive.php
 # deleted:
 service/src/gen/thrift/gen-php/hive_service/hive_service_types.php
 # modified:   service/src/gen/thrift/gen-py/TCLIService/TCLIService-remote
 # modified:   service/src/gen/thrift/gen-py/hive_service/ThriftHive-remote
 #
 # Untracked files:
 #   (use "git add <file>..." to include in what will be committed)
 #
 # serde/src/gen/thrift/gen-cpp/complex_constants.cpp
 # serde/src/gen/thrift/gen-cpp/complex_constants.h
 # serde/src/gen/thrift/gen-cpp/complex_types.cpp
 # serde/src/gen/thrift/gen-cpp/complex_types.h
 # serde/src/gen/thrift/gen-cpp/megastruct_constants.cpp
 # serde/src/gen/thrift/gen-cpp/megastruct_constants.h
 # serde/src/gen/thrift/gen-cpp/megastruct_types.cpp
 # 

[jira] [Commented] (HIVE-4389) thrift files are re-generated by compiling

2013-04-21 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637768#comment-13637768
 ] 

Namit Jain commented on HIVE-4389:
--

[~ashutoshc], the tests already ran fine for me.
I updated HIVE-4300.


 thrift files are re-generated by compiling
 --

 Key: HIVE-4389
 URL: https://issues.apache.org/jira/browse/HIVE-4389
 Project: Hive
  Issue Type: Bug
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.4389.1.patch


 I am not sure what is going on, but there seems to be a bunch of thrift 
 changes
 if I perform ant thriftif.
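Independent of git status output, one way to check whether regeneration is a no-op is to checksum the generated tree before and after running the generator. The sketch below is illustrative and not part of the Hive build; `GenTreeSnapshot` is a hypothetical name:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Stream;

/**
 * Sketch: snapshot SHA-256 checksums of a generated-source tree before and
 * after running a code generator; equal snapshots mean regeneration was a
 * no-op. Not part of the Hive build; GenTreeSnapshot is an illustrative name.
 */
class GenTreeSnapshot {
    static Map<String, String> snapshot(Path root) throws IOException {
        Map<String, String> sums = new TreeMap<>();
        try (Stream<Path> files = Files.walk(root)) {
            files.filter(Files::isRegularFile).forEach(p -> {
                try {
                    MessageDigest md = MessageDigest.getInstance("SHA-256");
                    StringBuilder hex = new StringBuilder();
                    for (byte b : md.digest(Files.readAllBytes(p))) {
                        hex.append(String.format("%02x", b));
                    }
                    // Key by path relative to the tree root so two checkouts compare equal
                    sums.put(root.relativize(p).toString(), hex.toString());
                } catch (IOException | NoSuchAlgorithmException e) {
                    throw new RuntimeException(e);
                }
            });
        }
        return sums;
    }
}
```

Taking a snapshot, running `ant thriftif`, and diffing the two maps would pinpoint exactly which generated files moved or changed.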

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4389) thrift files are re-generated by compiling

2013-04-21 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637769#comment-13637769
 ] 

Namit Jain commented on HIVE-4389:
--

For some reason, the patch on HIVE-4300 did not apply cleanly for me.
Should we just commit this instead? It should be the same change.

 thrift files are re-generated by compiling
 --

 Key: HIVE-4389
 URL: https://issues.apache.org/jira/browse/HIVE-4389
 Project: Hive
  Issue Type: Bug
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.4389.1.patch


 I am not sure what is going on, but there seems to be a bunch of thrift 
 changes
 if I perform ant thriftif.



[jira] [Commented] (HIVE-4300) ant thriftif generated code that is checkedin is not up-to-date

2013-04-21 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637770#comment-13637770
 ] 

Namit Jain commented on HIVE-4300:
--

I got fewer changes than you did. For example:

metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Database.java
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/EnvironmentContext.java
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Index.java
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Partition.java
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PrincipalPrivilegeSet.java
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Schema.java
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SerDeInfo.java
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SkewedInfo.java
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/StorageDescriptor.java
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Table.java
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
In each of the above files, the only change is inside comments: the word 'optional' 
changed to 'required'.


none of the above files changed.

 ant thriftif  generated code that is checkedin is not up-to-date
 

 Key: HIVE-4300
 URL: https://issues.apache.org/jira/browse/HIVE-4300
 Project: Hive
  Issue Type: Bug
  Components: Thrift API
Affects Versions: 0.10.0
Reporter: Roshan Naik
Assignee: Roshan Naik
 Attachments: HIVE-4300.2.patch, HIVE-4300.patch


 running 'ant thriftif -Dthrift.home=/usr/local' on a freshly checked-out 
 trunk should be a no-op, as per the 
 [instructions|https://cwiki.apache.org/Hive/howtocontribute.html#HowToContribute-GeneratingThriftCode]
 However, this is not the case. Some of the files seem to have been relocated, or 
 the classes in them are now in a different file.
 Below is the git status showing the state after the command is run:
 # On branch trunk
 # Changes not staged for commit:
 #   (use "git add/rm <file>..." to update what will be committed)
 #   (use "git checkout -- <file>..." to discard changes in working directory)
 #
 # modified:   build.properties
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Database.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/EnvironmentContext.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Index.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Partition.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PrincipalPrivilegeSet.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Schema.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SerDeInfo.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SkewedInfo.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/StorageDescriptor.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Table.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
 # deleted:    metastore/src/gen/thrift/gen-php/ThriftHiveMetastore.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/ThriftHiveMetastore.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/hive_metastore_constants.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/hive_metastore_types.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore_constants.php
 # deleted:    metastore/src/gen/thrift/gen-php/hive_metastore_types.php
 # modified:   
 metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore-remote
 # deleted:    ql/src/gen/thrift/gen-php/queryplan/queryplan_types.php
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/test/InnerStruct.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/test/ThriftTestObj.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/Complex.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/IntString.java
 # modified:   
 

[jira] [Commented] (HIVE-4393) Make the deleteData flag accessable from DropTable/Partition events

2013-04-21 Thread Morgan Phillips (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637779#comment-13637779
 ] 

Morgan Phillips commented on HIVE-4393:
---

Is being reviewed at: https://reviews.facebook.net/D10425

 Make the deleteData flag accessable from DropTable/Partition events
 ---

 Key: HIVE-4393
 URL: https://issues.apache.org/jira/browse/HIVE-4393
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Morgan Phillips
Assignee: Morgan Phillips
Priority: Minor

 On occasion, due to some error during a drop, information is removed from the 
 metastore but data, which should have been removed, remains intact on the 
 DFS. In order to log such events via PreEvent and Event listeners, a new 
 method 'getDeleteData', which returns the deleteData flag's value, should be 
 added to (Pre)DropPartitionEvent and (Pre)DropTableEvent.
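
 The idea can be sketched with a minimal, self-contained mock. DropTableEvent and AuditListener below mirror the proposal but are NOT Hive's real metastore event classes; only the proposed getDeleteData() accessor comes from the description above:

```java
/**
 * Self-contained mock of the proposed change; illustrative classes only,
 * not Hive's real metastore event API.
 */
class DropTableEvent {
    private final String tableName;
    private final boolean deleteData;

    DropTableEvent(String tableName, boolean deleteData) {
        this.tableName = tableName;
        this.deleteData = deleteData;
    }

    String getTableName() { return tableName; }

    // The accessor HIVE-4393 proposes for (Pre)DropTableEvent/(Pre)DropPartitionEvent
    boolean getDeleteData() { return deleteData; }
}

class AuditListener {
    // With the flag exposed, a listener can log whether DFS data was meant
    // to be purged, so data orphaned by a failed drop can be traced later.
    String onDropTable(DropTableEvent event) {
        return "DROP " + event.getTableName()
                + (event.getDeleteData() ? " (data purged)" : " (data retained)");
    }
}
```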



[jira] [Created] (HIVE-4395) Support TFetchOrientation.FIRST for HiveServer2 FetchResults

2013-04-21 Thread Prasad Mujumdar (JIRA)
Prasad Mujumdar created HIVE-4395:
-

 Summary: Support TFetchOrientation.FIRST for HiveServer2 
FetchResults
 Key: HIVE-4395
 URL: https://issues.apache.org/jira/browse/HIVE-4395
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Affects Versions: 0.11.0
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar


Currently HiveServer2 only supports fetching the next row (TFetchOrientation.NEXT). 
This ticket is to implement support for TFetchOrientation.FIRST, which resets the 
fetch position to the beginning of the result set. 
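
The intended cursor semantics can be illustrated with a minimal, self-contained sketch. FetchOrientation and RowCursor are illustrative stand-ins, not the real TCLIService types:

```java
import java.util.List;

/**
 * Sketch of the intended semantics: FETCH_FIRST rewinds to the start of the
 * result set, FETCH_NEXT continues from the current position. Illustrative
 * only; not the real HiveServer2/TCLIService API.
 */
enum FetchOrientation { FETCH_NEXT, FETCH_FIRST }

class RowCursor {
    private final List<String> rows;
    private int pos = 0;

    RowCursor(List<String> rows) { this.rows = rows; }

    List<String> fetch(FetchOrientation orientation, int maxRows) {
        if (orientation == FetchOrientation.FETCH_FIRST) {
            pos = 0; // reset to the beginning of the result set
        }
        int end = Math.min(pos + maxRows, rows.size());
        List<String> batch = rows.subList(pos, end);
        pos = end;
        return batch;
    }
}
```

A client could then re-read results without re-running the query: call fetch(FETCH_NEXT, n) repeatedly, then fetch(FETCH_FIRST, n) to start over from the first row.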
