[jira] [Updated] (HIVE-3925) dependencies of fetch task are not shown by explain

2014-03-05 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-3925:


Attachment: HIVE-3925.5.patch.txt

Addressed comments. Expecting many failures from the newly added tests.

 dependencies of fetch task are not shown by explain
 ---

 Key: HIVE-3925
 URL: https://issues.apache.org/jira/browse/HIVE-3925
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Namit Jain
Assignee: Navis
 Attachments: HIVE-3925.4.patch.txt, HIVE-3925.5.patch.txt, 
 HIVE-3925.D8577.1.patch, HIVE-3925.D8577.2.patch, HIVE-3925.D8577.3.patch


 A simple query like:
 hive> explain select * from src order by key;
 OK
 ABSTRACT SYNTAX TREE:
   (TOK_QUERY (TOK_FROM (TOK_TABREF (TOK_TABNAME src))) (TOK_INSERT 
 (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR 
 TOK_ALLCOLREF)) (TOK_ORDERBY (TOK_TABSORTCOLNAMEASC (TOK_TABLE_OR_COL key)
 STAGE DEPENDENCIES:
   Stage-1 is a root stage
   Stage-0 is a root stage
   Stage: Stage-0
     Fetch Operator
       limit: -1
 Stage-0 is, however, not a root stage; it depends on Stage-1.
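 For reference, a sketch of what the header should presumably show once the 
 fetch stage's dependency is reported (format assumed from how other stage 
 dependencies are printed):
   STAGE DEPENDENCIES:
     Stage-1 is a root stage
     Stage-0 depends on stages: Stage-1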



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-3925) dependencies of fetch task are not shown by explain

2014-03-05 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-3925:


Status: Patch Available  (was: Open)

 dependencies of fetch task are not shown by explain
 ---

 Key: HIVE-3925
 URL: https://issues.apache.org/jira/browse/HIVE-3925
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Namit Jain
Assignee: Navis
 Attachments: HIVE-3925.4.patch.txt, HIVE-3925.5.patch.txt, 
 HIVE-3925.D8577.1.patch, HIVE-3925.D8577.2.patch, HIVE-3925.D8577.3.patch


 A simple query like:
 hive> explain select * from src order by key;
 OK
 ABSTRACT SYNTAX TREE:
   (TOK_QUERY (TOK_FROM (TOK_TABREF (TOK_TABNAME src))) (TOK_INSERT 
 (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR 
 TOK_ALLCOLREF)) (TOK_ORDERBY (TOK_TABSORTCOLNAMEASC (TOK_TABLE_OR_COL key)
 STAGE DEPENDENCIES:
   Stage-1 is a root stage
   Stage-0 is a root stage
   Stage: Stage-0
     Fetch Operator
       limit: -1
 Stage-0 is, however, not a root stage; it depends on Stage-1.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-6552) Failed to generate new mapJoin operator by exception : Big Table Alias is null

2014-03-05 Thread Martin Kudlej (JIRA)
Martin Kudlej created HIVE-6552:
---

 Summary: Failed to generate new mapJoin operator by exception : 
Big Table Alias is null
 Key: HIVE-6552
 URL: https://issues.apache.org/jira/browse/HIVE-6552
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.12.0
 Environment: Hive version:
getBranch()  : bigwheel-m16-0.12.0
getBuildVersion(): 0.12.0.2.0.6.1-101 from 
8b1b43ece7c96d3cf38fda84414b23e3b707026e by jenkins source checksum 
1c1e5eb051cefce14af4d621654dc423
getDate(): Wed Jan 8 22:20:16 PST 2014
getRevision(): 8b1b43ece7c96d3cf38fda84414b23e3b707026e
getSrcChecksum() : 1c1e5eb051cefce14af4d621654dc423
getUrl() : 
git://c64-s17/grid/0/workspace/BIGTOP-HDP_RPM_REPO-bigwheel-M16/label/centos6-builds/bigtop-0.5/build/hive/rpm/BUILD/hive-0.12.0.2.0.6.1
getUser(): jenkins
getVersion() : 0.12.0.2.0.6.1-101

OS:  Red Hat Enterprise Linux Server release 6.4 x86_64

JVM: java version 1.6.0_31
Java(TM) SE Runtime Environment (build 1.6.0_31-b04)
Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)

Hadoop:
Hadoop 2.2.0.2.0.6.0-101
Subversion g...@github.com:hortonworks/hadoop.git -r 
b07b2906c36defd389c8b5bd22bebc1bead8115b
Compiled by jenkins on 2014-01-09T05:18Z
Compiled with protoc 2.5.0
From source with checksum 704f1e463ebc4fb89353011407e965
Reporter: Martin Kudlej


I've tried the BigTop test for UNIQUEJOIN:

CREATE TABLE T1(key STRING, val STRING) STORED AS TEXTFILE;
CREATE TABLE T2(key STRING, val STRING) STORED AS TEXTFILE;
CREATE TABLE T3(key STRING, val STRING) STORED AS TEXTFILE;

LOAD DATA LOCAL INPATH 'seed_data_files/T1.txt' INTO TABLE T1;
LOAD DATA LOCAL INPATH 'seed_data_files/T2.txt' INTO TABLE T2;
LOAD DATA LOCAL INPATH 'seed_data_files/T3.txt' INTO TABLE T3;

FROM UNIQUEJOIN PRESERVE T1 a (a.key), PRESERVE T2 b (b.key), PRESERVE T3 c (c.key)
SELECT a.key, b.key, c.key;

where T1.txt is (tab-separated):
1       11
2       12
3       13
7       17
8       18
8       28
and T2.txt is:
2       22
3       13
4       14
5       15
8       18
8       18
and T3.txt is:
2       12
4       14
6       16
7       17

if hive.auto.convert.join=false it works and the result is:
1       NULL    NULL
2       2       2
3       3       NULL
NULL    4       4
NULL    5       NULL
NULL    NULL    6
7       NULL    7
8       8       NULL
8       8       NULL
8       8       NULL
8       8       NULL

but with hive.auto.convert.join=true it fails:
 FROM UNIQUEJOIN PRESERVE T1 a (a.key), PRESERVE T2 b (b.key), PRESERVE T3 c 
 (c.key) SELECT a.key, b.key, c.key
org.apache.hadoop.hive.ql.parse.SemanticException: Big Table Alias is null
        at org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genMapJoinLocalWork(MapJoinProcessor.java:225)
        at org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genLocalWorkForMapJoin(MapJoinProcessor.java:256)
        at org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genMapJoinOpAndLocalWork(MapJoinProcessor.java:248)
        at org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinTaskDispatcher.convertTaskToMapJoinTask(CommonJoinTaskDispatcher.java:191)
        at org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinTaskDispatcher.processCurrentTask(CommonJoinTaskDispatcher.java:480)
        at org.apache.hadoop.hive.ql.optimizer.physical.AbstractJoinTaskDispatcher.dispatch(AbstractJoinTaskDispatcher.java:182)
        at org.apache.hadoop.hive.ql.lib.TaskGraphWalker.dispatch(TaskGraphWalker.java:111)
        at org.apache.hadoop.hive.ql.lib.TaskGraphWalker.walk(TaskGraphWalker.java:194)
        at org.apache.hadoop.hive.ql.lib.TaskGraphWalker.startWalking(TaskGraphWalker.java:139)
        at org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinResolver.resolve(CommonJoinResolver.java:79)
        at org.apache.hadoop.hive.ql.optimizer.physical.PhysicalOptimizer.optimize(PhysicalOptimizer.java:90)
        at org.apache.hadoop.hive.ql.parse.MapReduceCompiler.compile(MapReduceCompiler.java:300)
        at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:8410)
        at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:284)
        at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:441)
        at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:342)
        at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1000)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:911)
        at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:348)
        at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:446)
        ...
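Based on the observation above, a minimal workaround sketch (assumption: 
simply disabling the automatic map-join conversion before running the query):
{code}
SET hive.auto.convert.join=false;

FROM UNIQUEJOIN PRESERVE T1 a (a.key), PRESERVE T2 b (b.key), PRESERVE T3 c (c.key)
SELECT a.key, b.key, c.key;
{code}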

hive script when invoked through a link

2014-03-05 Thread Remus Rusanu
I tried making /usr/bin/hive a link to /usr/lib/hive-0.14/bin/hive. But the 
launch fails because:

bin=`dirname $0`
bin=`cd $bin; pwd`

. $bin/hive-config.sh

So I used a readlink -f first, to locate the proper script home, and then it 
works fine.
My question is: is this kind of problem something we track and open a JIRA 
about, or is this kind of issue left as something for the 
packagers/distributors to worry about and fix (given that the distributions 
vary wildly from the pure trunk build package artifact)?

Thanks,
~Remus



[jira] [Commented] (HIVE-6131) New columns after table alter result in null values despite data

2014-03-05 Thread Ning Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13920691#comment-13920691
 ] 

Ning Zhang commented on HIVE-6131:
--

Verified this issue with Apache Hive 0.10, 0.11, and 0.12.

The conclusion is: Hive 0.11 and 0.12 have this issue, while Hive 0.10 doesn't. 
That is to say, the issue was introduced in Hive 0.11.

 New columns after table alter result in null values despite data
 

 Key: HIVE-6131
 URL: https://issues.apache.org/jira/browse/HIVE-6131
 Project: Hive
  Issue Type: Bug
Reporter: James Vaughan
Priority: Minor

 Hi folks,
 I found and verified a bug on our CDH 4.0.3 install of Hive when adding 
 columns to tables with Partitions using 'REPLACE COLUMNS'.  I dug through the 
 Jira a little bit and didn't see anything for it so hopefully this isn't just 
 noise on the radar.
 Basically, when you alter a table with partitions and then reupload data to 
 that partition, it doesn't seem to recognize the extra data that actually 
 exists in HDFS: it returns NULL values for the new column despite having 
 the data and recognizing the new column in the metadata.
 Here are some steps to reproduce using a basic table:
 1.  Run this hive command:  CREATE TABLE jvaughan_test (col1 string) 
 partitioned by (day string);
 2.  Create a simple file on the system with a couple of entries, something 
 like hi and hi2 separated by newlines.
 3.  Run this hive command, pointing it at the file:  LOAD DATA LOCAL INPATH 
 'FILEDIR' OVERWRITE INTO TABLE jvaughan_test PARTITION (day = '2014-01-02');
 4.  Confirm the data with:  SELECT * FROM jvaughan_test WHERE day = 
 '2014-01-02';
 5.  Alter the column definitions:  ALTER TABLE jvaughan_test REPLACE COLUMNS 
 (col1 string, col2 string);
 6.  Edit your file, adding a second column using the default separator 
 (ctrl+v, then ctrl+a in Vim) with two more entries, such as hi3 on the 
 first row and hi4 on the second (see the sketch after these steps)
 7.  Run step 3 again
 8.  Check the data again like in step 4
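 For reference, a sketch of the edited file from step 6, with ^A standing for 
 the default ctrl+a separator and the example values above:
 hi^Ahi3
 hi2^Ahi4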
 For me, these are the results that get returned:
 hive> select * from jvaughan_test where day = '2014-01-02';
 OK
 hi      NULL    2014-01-02
 hi2     NULL    2014-01-02
 This is despite the fact that there is data in the file stored by the 
 partition in HDFS.
 Let me know if you need any other information.  The only workaround for me 
 currently is to drop any partition I'm replacing data in and THEN reupload 
 the new data file.
 Thanks,
 -James



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6432) Remove deprecated methods in HCatalog

2014-03-05 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-6432:
---

Attachment: HIVE-6432.wip.2.patch

Patch updated to remove function endpoints that were deprecated before 0.13.

Those functions that were deprecated in 0.13 are still maintained, and comments 
have been added to clarify that they will be gone by 0.15. I think we should 
generally take it as practice from now on that any deprecation marking must 
include the version that it is intended to last until. So far, these only exist 
in HCatFieldSchema and HCatInputFormat.

I think I'm mostly done with this patch; this should be the last wip candidate. 
I'm going to run a full batch of tests, and once that completes, this should be 
ready to shift to patch-available for 0.14 (trunk)

 Remove deprecated methods in HCatalog
 -

 Key: HIVE-6432
 URL: https://issues.apache.org/jira/browse/HIVE-6432
 Project: Hive
  Issue Type: Task
  Components: HCatalog
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-6432.wip.1.patch, HIVE-6432.wip.2.patch


 There are a lot of methods in HCatalog that have been deprecated in HCatalog 
 0.5, and some that were recently deprecated in Hive 0.11 (joint release with 
 HCatalog).
 The goal for HCatalog deprecation is that in general, after something has 
 been deprecated, it is expected to stay around for 2 releases, which means 
 hive-0.13 will be the last release to ship with all the methods that were 
 deprecated in hive-0.11 (the org.apache.hcatalog.* files should all be 
 removed afterwards), and it is also good for us to clean out and nuke all 
 other older deprecated methods.
 We should take this on early in a dev/release cycle to allow us time to 
 resolve all fallout, so I propose that we remove all HCatalog deprecated 
 methods after we branch out 0.13 and 0.14 becomes trunk.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6486) Support secure Subject.doAs() in HiveServer2 JDBC client.

2014-03-05 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13920709#comment-13920709
 ] 

Vaibhav Gumashta commented on HIVE-6486:


After some thought, I feel your initial suggestion on rb makes much more sense 
(using kerberosAuthType=fromSubject). As you mentioned, it can be expanded to 
kerberosAuthType=fromKeyTab etc. when we decide to support keytab-based 
client login. In my opinion, having a connection string like 
auth=kerberos;kerberosAuthType=fromSubject/fromKeyTab makes the intent much 
clearer.
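
For illustration, a sketch of what a client could look like under the proposed 
syntax (the URL parameters follow the proposal above and are not final; the 
JAAS entry name "Client" is an assumption):
{code}
import java.security.PrivilegedExceptionAction;
import java.sql.Connection;
import java.sql.DriverManager;
import javax.security.auth.Subject;
import javax.security.auth.login.LoginContext;

public class SubjectDoAsSketch {
  public static void main(String[] args) throws Exception {
    // Assumption: a JAAS configuration entry named "Client" performs the
    // Kerberos login and yields the end user's Subject.
    LoginContext loginContext = new LoginContext("Client");
    loginContext.login();
    Subject subject = loginContext.getSubject();

    // Connect inside doAs() so the user's Kerberos credentials are used;
    // no proxy-user privileges are needed on the server side.
    Connection conn = Subject.doAs(subject,
        new PrivilegedExceptionAction<Connection>() {
          public Connection run() throws Exception {
            return DriverManager.getConnection(
                "jdbc:hive2://host:10000/default;principal=hive/_HOST@EXAMPLE.COM;"
                    + "auth=kerberos;kerberosAuthType=fromSubject");
          }
        });
    System.out.println("connected: " + !conn.isClosed());
    conn.close();
  }
}
{code}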

[~shivshi] [~prasadm] [~thejas] Let me know what you feel. Thanks!

 Support secure Subject.doAs() in HiveServer2 JDBC client.
 -

 Key: HIVE-6486
 URL: https://issues.apache.org/jira/browse/HIVE-6486
 Project: Hive
  Issue Type: Improvement
  Components: Authentication, HiveServer2, JDBC
Affects Versions: 0.11.0, 0.12.0
Reporter: Shivaraju Gowda
Assignee: Shivaraju Gowda
 Fix For: 0.13.0

 Attachments: HIVE-6486.1.patch, HIVE-6486.2.patch, 
 Hive_011_Support-Subject_doAS.patch, TestHive_SujectDoAs.java


 HIVE-5155 addresses the problem of kerberos authentication in a multi-user 
 middleware server using a proxy user.  In this mode the principal used by the 
 middleware server has privileges to impersonate selected users in 
 Hive/Hadoop. 
 This enhancement is to support Subject.doAs() authentication in the Hive JDBC 
 layer so that the end user's Kerberos Subject is passed through from the 
 middleware server. With this improvement there won't be any additional setup 
 in the server to grant proxy privileges to some users, and there won't be a 
 need to specify a proxy user in the JDBC client. This version should also be 
 more secure since it won't require principals with the privileges to 
 impersonate other users in the Hive/Hadoop setup.
  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-4293) Predicates following UDTF operator are removed by PPD

2014-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13920728#comment-13920728
 ] 

Hive QA commented on HIVE-4293:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12632763/HIVE-4293.11.patch.txt

{color:red}ERROR:{color} -1 due to 162 failed/errored test(s), 5342 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join20
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join21
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join25
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join29
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join30
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_filters
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_nulls
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_without_localtask
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_smb_mapjoin_14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketizedhiveinputformat_auto
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_count
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_fetch_aggregation
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_grouping_id2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_grouping_sets3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_grouping_sets5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_multi_single_reducer3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_infer_bucket_sort_convert_join
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_infer_bucket_sort_multi_insert
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input20
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input33
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_compressed
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join20
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join21

[jira] [Updated] (HIVE-6432) Remove deprecated methods in HCatalog

2014-03-05 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-6432:
---

Attachment: hcat.6432.test.out
HIVE-6432.patch

Okay, the patch is good and the tests succeed; attaching as the final version. 
Also attaching the test output.

 Remove deprecated methods in HCatalog
 -

 Key: HIVE-6432
 URL: https://issues.apache.org/jira/browse/HIVE-6432
 Project: Hive
  Issue Type: Task
  Components: HCatalog
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-6432.patch, HIVE-6432.wip.1.patch, 
 HIVE-6432.wip.2.patch, hcat.6432.test.out


 There are a lot of methods in HCatalog that have been deprecated in HCatalog 
 0.5, and some that were recently deprecated in Hive 0.11 (joint release with 
 HCatalog).
 The goal for HCatalog deprecation is that in general, after something has 
 been deprecated, it is expected to stay around for 2 releases, which means 
 hive-0.13 will be the last release to ship with all the methods that were 
 deprecated in hive-0.11 (the org.apache.hcatalog.* files should all be 
 removed afterwards), and it is also good for us to clean out and nuke all 
 other older deprecated methods.
 We should take this on early in a dev/release cycle to allow us time to 
 resolve all fallout, so I propose that we remove all HCatalog deprecated 
 methods after we branch out 0.13 and 0.14 becomes trunk.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6486) Support secure Subject.doAs() in HiveServer2 JDBC client.

2014-03-05 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13920775#comment-13920775
 ] 

Thejas M Nair commented on HIVE-6486:
-

Yes,  I think kerberosAuthType=fromSubject is more intuitive/cleaner.


 Support secure Subject.doAs() in HiveServer2 JDBC client.
 -

 Key: HIVE-6486
 URL: https://issues.apache.org/jira/browse/HIVE-6486
 Project: Hive
  Issue Type: Improvement
  Components: Authentication, HiveServer2, JDBC
Affects Versions: 0.11.0, 0.12.0
Reporter: Shivaraju Gowda
Assignee: Shivaraju Gowda
 Fix For: 0.13.0

 Attachments: HIVE-6486.1.patch, HIVE-6486.2.patch, 
 Hive_011_Support-Subject_doAS.patch, TestHive_SujectDoAs.java


 HIVE-5155 addresses the problem of kerberos authentication in a multi-user 
 middleware server using a proxy user.  In this mode the principal used by the 
 middleware server has privileges to impersonate selected users in 
 Hive/Hadoop. 
 This enhancement is to support Subject.doAs() authentication in the Hive JDBC 
 layer so that the end user's Kerberos Subject is passed through from the 
 middleware server. With this improvement there won't be any additional setup 
 in the server to grant proxy privileges to some users, and there won't be a 
 need to specify a proxy user in the JDBC client. This version should also be 
 more secure since it won't require principals with the privileges to 
 impersonate other users in the Hive/Hadoop setup.
  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6486) Support secure Subject.doAs() in HiveServer2 JDBC client.

2014-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13920788#comment-13920788
 ] 

Hive QA commented on HIVE-6486:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12632697/HIVE-6486.2.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5355 tests executed
*Failed tests:*
{noformat}
org.apache.hive.beeline.TestSchemaTool.testSchemaInit
org.apache.hive.beeline.TestSchemaTool.testSchemaUpgrade
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1628/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1628/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12632697

 Support secure Subject.doAs() in HiveServer2 JDBC client.
 -

 Key: HIVE-6486
 URL: https://issues.apache.org/jira/browse/HIVE-6486
 Project: Hive
  Issue Type: Improvement
  Components: Authentication, HiveServer2, JDBC
Affects Versions: 0.11.0, 0.12.0
Reporter: Shivaraju Gowda
Assignee: Shivaraju Gowda
 Fix For: 0.13.0

 Attachments: HIVE-6486.1.patch, HIVE-6486.2.patch, 
 Hive_011_Support-Subject_doAS.patch, TestHive_SujectDoAs.java


 HIVE-5155 addresses the problem of kerberos authentication in a multi-user 
 middleware server using a proxy user.  In this mode the principal used by the 
 middleware server has privileges to impersonate selected users in 
 Hive/Hadoop. 
 This enhancement is to support Subject.doAs() authentication in the Hive JDBC 
 layer so that the end user's Kerberos Subject is passed through from the 
 middleware server. With this improvement there won't be any additional setup 
 in the server to grant proxy privileges to some users, and there won't be a 
 need to specify a proxy user in the JDBC client. This version should also be 
 more secure since it won't require principals with the privileges to 
 impersonate other users in the Hive/Hadoop setup.
  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: hive script when invoked through a link

2014-03-05 Thread Brock Noland
This should be fixed on trunk. I believe we want to use the BASH_SOURCE
shell variable.
On Mar 5, 2014 3:22 AM, Remus Rusanu rem...@microsoft.com wrote:

 I tried making /usr/bin/hive a link to /usr/lib/hive-0.14/bin/hive . But
 the launch fails because:

 bin=`dirname $0`
 bin=`cd $bin; pwd`

 . $bin/hive-config.sh

 So I used a readlink -f first, to locate the proper script home, and then
 it works fine.
 My question is: is this kind of problem something we track and open a JIRA
 about, or is this kind of issue left as something for the
 packagers/distributors to worry about and fix (given that the distributions
 vary wildly from the pure trunk build package artifact)?

 Thanks,
 ~Remus




[jira] [Commented] (HIVE-6325) Enable using multiple concurrent sessions in tez

2014-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13920869#comment-13920869
 ] 

Hive QA commented on HIVE-6325:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12632704/HIVE-6325.11.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 5358 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_bucketizedhiveinputformat
org.apache.hive.beeline.TestSchemaTool.testSchemaInit
org.apache.hive.beeline.TestSchemaTool.testSchemaUpgrade
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1629/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1629/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12632704

 Enable using multiple concurrent sessions in tez
 

 Key: HIVE-6325
 URL: https://issues.apache.org/jira/browse/HIVE-6325
 Project: Hive
  Issue Type: Improvement
  Components: Tez
Affects Versions: 0.13.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Attachments: HIVE-6325.1.patch, HIVE-6325.10.patch, 
 HIVE-6325.11.patch, HIVE-6325.2.patch, HIVE-6325.3.patch, HIVE-6325.4.patch, 
 HIVE-6325.5.patch, HIVE-6325.6.patch, HIVE-6325.7.patch, HIVE-6325.8.patch, 
 HIVE-6325.9.patch


 We would like to enable multiple concurrent sessions in Tez via HiveServer2. 
 This will enable users to make efficient use of the cluster when it has 
 been partitioned using YARN queues.
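 A sketch of the kind of settings this would involve (property names here are 
 assumptions for illustration, not necessarily the ones the patch adds):
 {noformat}
 hive.server2.tez.default.queues=q1,q2
 hive.server2.tez.sessions.per.default.queue=2
 hive.server2.tez.initialize.default.sessions=true
 {noformat}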



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Proposal to switch to pull requests

2014-03-05 Thread kulkarni.swar...@gmail.com
Hello,

Since we have a nice mirrored git repository for hive[1], any specific
reason why we can't switch to doing pull requests instead of patches? IMHO
pull requests are awesome for peer review plus it is also very easy to keep
track of JIRAs with open pull requests instead of looking for JIRAs in a
Patch Available state. Also since they get updated automatically, it is
also very easy to see if a review comment made by a reviewer was addressed
properly or not.

Thoughts?

Thanks,

[1] https://github.com/apache/hive

-- 
Swarnim


Re: Proposal to switch to pull requests

2014-03-05 Thread Brock Noland
Personally I prefer the Github workflow, but I believe there have been
some challenges with that since the source for apache projects must be
stored in apache source control (git or svn).

Relevant: 
https://blogs.apache.org/infra/entry/improved_integration_between_apache_and

On Wed, Mar 5, 2014 at 9:19 AM, kulkarni.swar...@gmail.com
kulkarni.swar...@gmail.com wrote:
 Hello,

 Since we have a nice mirrored git repository for hive[1], any specific
 reason why we can't switch to doing pull requests instead of patches? IMHO
 pull requests are awesome for peer review plus it is also very easy to keep
 track of JIRAs with open pull requests instead of looking for JIRAs in a
 Patch Available state. Also since they get updated automatically, it is
 also very easy to see if a review comment made by a reviewer was addressed
 properly or not.

 Thoughts?

 Thanks,

 [1] https://github.com/apache/hive

 --
 Swarnim



-- 
Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org


[jira] [Commented] (HIVE-6548) Missing owner name and type fields in schema script for DBS table

2014-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13920983#comment-13920983
 ] 

Hive QA commented on HIVE-6548:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12632711/HIVE-6548.patch

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 5355 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_bucket5
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_bucketmapjoin6
org.apache.hive.beeline.TestSchemaTool.testSchemaInit
org.apache.hive.beeline.TestSchemaTool.testSchemaUpgrade
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1630/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1630/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12632711

 Missing owner name and type fields in schema script for DBS table 
 --

 Key: HIVE-6548
 URL: https://issues.apache.org/jira/browse/HIVE-6548
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.13.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-6548.patch


 HIVE-6386 introduced new columns in the DBS table, but those are missing from 
 the schema scripts.
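 A sketch of the kind of schema-script fix this calls for (column names and 
 types are assumptions based on the issue title, not taken from the patch):
 {code}
 ALTER TABLE DBS ADD OWNER_NAME VARCHAR(128);
 ALTER TABLE DBS ADD OWNER_TYPE VARCHAR(10);
 {code}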



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-6553) hive CLI shell use of `dirname $0` prevents it from being linked from other directories

2014-03-05 Thread Remus Rusanu (JIRA)
Remus Rusanu created HIVE-6553:
--

 Summary: hive CLI shell use of `dirname $0` prevents it from 
being linked from other directories
 Key: HIVE-6553
 URL: https://issues.apache.org/jira/browse/HIVE-6553
 Project: Hive
  Issue Type: Bug
Reporter: Remus Rusanu
Priority: Minor


I tried making /usr/bin/hive a link to /usr/lib/hive-0.14/bin/hive. But the 
launch fails because:
{code}
bin=`dirname $0`
bin=`cd $bin; pwd`
. $bin/hive-config.sh
{code}

So I used a readlink -f first, to locate the proper script home, and then it 
works fine. 
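
A sketch of the workaround described above (assumes GNU readlink with -f 
support):
{code}
# Resolve the link before taking dirname, so $bin points at the real script home.
bin=$(dirname "$(readlink -f "$0")")
bin=$(cd "$bin"; pwd)
. "$bin"/hive-config.sh
{code}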




--
This message was sent by Atlassian JIRA
(v6.2#6252)


RE: hive script when invoked through a link

2014-03-05 Thread Remus Rusanu
HIVE-6553 it is, then.
I'm not sure $BASH_SOURCE makes any difference; it is the readlink that is critical. 

-Original Message-
From: Brock Noland [mailto:br...@cloudera.com] 
Sent: Wednesday, March 05, 2014 2:46 PM
To: dev@hive.apache.org
Subject: Re: hive script when invoked through a link

This should be fixed on trunk. I believe we want to use the BASH_SOURCE 
shell variable.
On Mar 5, 2014 3:22 AM, Remus Rusanu rem...@microsoft.com wrote:

 I tried making /usr/bin/hive a link to /usr/lib/hive-0.14/bin/hive . 
 But the launch fails because:

 bin=`dirname $0`
 bin=`cd $bin; pwd`

 . $bin/hive-config.sh

 So I used a readlink -f first, to locate the proper script home, and 
 then it works fine.
 My question is: is this kind of problem something we track and open a 
 JIRA about, or is this kind of issue left as something for the 
 packagers/distributors to worry about and fix (given that the 
 distributions vary wildly from the pure trunk build package artifact)?

 Thanks,
 ~Remus




[jira] [Created] (HIVE-6554) CombineHiveInputFormat should use the underlying InputSplits

2014-03-05 Thread Owen O'Malley (JIRA)
Owen O'Malley created HIVE-6554:
---

 Summary: CombineHiveInputFormat should use the underlying 
InputSplits
 Key: HIVE-6554
 URL: https://issues.apache.org/jira/browse/HIVE-6554
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Reporter: Owen O'Malley
Assignee: Owen O'Malley


Currently CombineHiveInputFormat generates FileSplits without using the 
underlying InputFormat. This leads to a problem when an InputFormat needs an 
InputSplit that isn't exactly a FileSplit, because CombineHiveInputSplit always 
generates FileSplits and then calls the underlying InputFormat's getRecordReader.
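
To illustrate the failure mode, a sketch of a hypothetical InputFormat (old 
mapred API; not Hive code) whose reader needs its own split type:
{code}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

// A split type carrying state that a plain FileSplit cannot hold.
class CustomSplit implements InputSplit {
  long rowCount;
  public long getLength() { return rowCount; }
  public String[] getLocations() { return new String[0]; }
  public void write(DataOutput out) throws IOException { out.writeLong(rowCount); }
  public void readFields(DataInput in) throws IOException { rowCount = in.readLong(); }
}

class CustomFormat implements InputFormat<NullWritable, Text> {
  public InputSplit[] getSplits(JobConf job, int numSplits) {
    return new InputSplit[] { new CustomSplit() };
  }
  public RecordReader<NullWritable, Text> getRecordReader(
      InputSplit split, JobConf job, Reporter reporter) throws IOException {
    // If the caller fabricates a FileSplit instead of passing back the splits
    // from getSplits(), this cast fails with ClassCastException at runtime.
    CustomSplit s = (CustomSplit) split;
    throw new UnsupportedOperationException("reader elided; rows=" + s.rowCount);
  }
}
{code}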



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6060) Define API for RecordUpdater and UpdateReader

2014-03-05 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HIVE-6060:


Attachment: HIVE-6060.patch

Re-uploading for jenkins.

 Define API for RecordUpdater and UpdateReader
 -

 Key: HIVE-6060
 URL: https://issues.apache.org/jira/browse/HIVE-6060
 Project: Hive
  Issue Type: Sub-task
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Attachments: HIVE-6060.patch, acid-io.patch, h-5317.patch, 
 h-5317.patch, h-5317.patch, h-6060.patch, h-6060.patch


 We need to define some new APIs for how Hive interacts with the file formats, 
 since they need to be much richer than the current RecordReader and 
 RecordWriter.
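 A sketch of the shape such an API might take (hypothetical names and 
 signatures, not the committed interface):
 {code}
 public interface RecordUpdater {
   void insert(long currentTransaction, Object row) throws IOException;
   void update(long currentTransaction, long originalTransaction, long rowId,
       Object row) throws IOException;
   void delete(long currentTransaction, long originalTransaction, long rowId)
       throws IOException;
   void flush() throws IOException;
   void close(boolean abort) throws IOException;
 }
 {code}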



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: How long will we support Hadoop 0.20.2?

2014-03-05 Thread Ashutosh Chauhan
~6 months have since gone by. Branch 0.13 has now been cut. Is it OK now to
drop the 20 shims from trunk? Or, Ed, are you still planning to use Hive on
Hadoop 0.20.2 6 months from now (or whenever 0.14 gets released)?

Thanks,
Ashutosh


On Thu, Sep 19, 2013 at 7:11 PM, Edward Capriolo edlinuxg...@gmail.comwrote:

 Remember that the shim layer will continue to exist.

 This is something I would consider:

 In 6 months from day x hive will no longer default to building against
 hadoop 0.20. Hadoop 0.20.x will not be officially supported, meaning that
 our build bots will no longer test against 0.20.x. Compatibility with
 hadoop 0.20.x will not be a requirement for any patch. The 0.20 shims will
 still remain in trunk. Committers could still accept patches to support
 0.20 as long as they are not detrimental to current supported versions.

 What this would mean is if I checked out hive trunk in 5 months and 29 days
 after day x it would still build, compile, and run on hadoop 0.20 and be
 feature complete. At the 6-month mark I lose that guarantee.

 On Thursday, September 19, 2013, Edward Capriolo edlinuxg...@gmail.com
 wrote:
  +1 to dropping Hadoop 0.20.2 support in Hive 0.13, which given that Hive
  0.12 has just branched means it isn't likely that Hive 0.13 will come out
  in the next 6 months.
 
  LOL
  -1. I was not suggesting we drop 0.20.2 support now, so the next hive
 version 0.13 won't have it. That would essentially mean we are dropping it
 now.
 
  I was suggesting dropping the 0.20.2 support in 6 months, so whatever
 version STARTED won't have it.
 
  On Thu, Sep 19, 2013 at 11:41 AM, Owen O'Malley omal...@apache.org
 wrote:
 
  +1 to dropping Hadoop 0.20.2 support in Hive 0.13, which given that Hive
  0.12 has just branched means it isn't likely that Hive 0.13 will come
 out
  in the next 6 months.
 
  -- Owen
 
 
  On Thu, Sep 19, 2013 at 8:35 AM, Brock Noland br...@cloudera.com
 wrote:
 
   First off, I have to apologize, I didn't know there would be such
   passions on both sides of the 0.20.2 argument!
  
   On Thu, Sep 19, 2013 at 10:11 AM, Edward Capriolo 
 edlinuxg...@gmail.com
   wrote:
That rant being done,
  
   No worries man, Hadoop versions are something worth ranting about.
   IMHO Hadoop has a history of changing API's and breaking end users.
   However, I feel this is improving.
  
we can not and should not support hadoop 0.20.2
forever. Discontinuing hadoop 0.20.2 in say 6 months might be
 reasonable,
but I think dropping it on the floor due to a one line change for a
   missing
convenience constructor is a bit knee-jerk.
  
   Very sorry if I came across with the opinion that we should drop
   0.20.2 now because of the constructor issue. The issue brought up
   0.20.2's age in my mind and the logical next step is to ask how long
   we plan on supporting it! :) I like the time bounding idea and I feel
   6 months is reasonable. FWIW, the 1.X series is stable for my needs.
  
   Brock
  
 
 



[jira] [Commented] (HIVE-6548) Missing owner name and type fields in schema script for DBS table

2014-03-05 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13921061#comment-13921061
 ] 

Ashutosh Chauhan commented on HIVE-6548:


Verified that TestSchemaTool is failing on trunk as well. Created HIVE-6555 to 
track that.

 Missing owner name and type fields in schema script for DBS table 
 --

 Key: HIVE-6548
 URL: https://issues.apache.org/jira/browse/HIVE-6548
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.13.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-6548.patch


 HIVE-6386 introduced new columns in the DBS table, but those are missing from 
 the schema scripts.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-5888) group by after join operation product no result when hive.optimize.skewjoin = true

2014-03-05 Thread Jian Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13921063#comment-13921063
 ] 

Jian Fang commented on HIVE-5888:
-

Also, the two reducers often failed because there was no space left on device. 
This could be a big problem.


java.lang.RuntimeException: org.apache.hadoop.fs.FSError: java.io.IOException: No space left on device
        at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:278)
        at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:528)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:429)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1140)
        at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: org.apache.hadoop.fs.FSError: java.io.IOException: No space left on device
        at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.write(RawLocalFileSystem.java:200)
        at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57)
        at java.io.DataOutputStream.write(DataOutputStream.java:107)
        at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.writeChunk(ChecksumFileSystem.java:354)
        at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunk(FSOutputSummer.java:150)
        at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:132)
        at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:121)
        at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:112)
        at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:86)
        at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57)
        at java.io.DataOutputStream.write(DataOutputStream.java:107)
        at org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:1115)
        at org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:1076)
        at org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat$1.write(HiveSequenceFileOutputFormat.java:77)
        at org.apache.hadoop.hive.ql.exec.persistence.RowContainer.spillBlock(RowContainer.java:334)
        at org.apache.hadoop.hive.ql.exec.persistence.RowContainer.add(RowContainer.java:164)
        at org.apache.hadoop.hive.ql.exec.persistence.RowContainer.add(RowContainer.java:74)
        at org.apache.hadoop.hive.ql.exec.JoinOperator.processOp(JoinOperator.java:127)
        at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:502)
        at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:257)
        ... 7 more
Caused by: java.io.IOException: No space left on device
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:345)
        at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.write(RawLocalFileSystem.java:198)
        ... 26 more

 group by after join operation product no result when  hive.optimize.skewjoin 
 = true 
 

 Key: HIVE-5888
 URL: https://issues.apache.org/jira/browse/HIVE-5888
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: cyril liao





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-6555) TestSchemaTool is failing on trunk after branching

2014-03-05 Thread Ashutosh Chauhan (JIRA)
Ashutosh Chauhan created HIVE-6555:
--

 Summary: TestSchemaTool is failing on trunk after branching
 Key: HIVE-6555
 URL: https://issues.apache.org/jira/browse/HIVE-6555
 Project: Hive
  Issue Type: Bug
Reporter: Ashutosh Chauhan


This is because the version was bumped to 0.14 in the pom file and there are no 
metastore scripts for 0.14 yet.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6548) Missing owner name and type fields in schema script for DBS table

2014-03-05 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6548:
---

   Resolution: Fixed
Fix Version/s: 0.14.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. 
[~rhbutani] Please consider this for inclusion in 0.13, since it's a bug which 
results in the metastore failing to come up.

 Missing owner name and type fields in schema script for DBS table 
 --

 Key: HIVE-6548
 URL: https://issues.apache.org/jira/browse/HIVE-6548
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.13.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Fix For: 0.14.0

 Attachments: HIVE-6548.patch


 HIVE-6386 introduced new columns in the DBS table, but those are missing from 
 the schema scripts.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-5888) group by after join operation product no result when hive.optimize.skewjoin = true

2014-03-05 Thread Jian Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13921057#comment-13921057
 ] 

Jian Fang commented on HIVE-5888:
-

Thanks a lot. Will try the fix in HIVE-6041. Could you please explain the weird 
behavior when I set hive.optimize.skewjoin=true and 
hive.auto.convert.join=false? Why were almost all keys distributed to only two 
reducers? I'd like to make sure there are no other bugs in the skewjoin logic.

 group by after join operation product no result when  hive.optimize.skewjoin 
 = true 
 

 Key: HIVE-5888
 URL: https://issues.apache.org/jira/browse/HIVE-5888
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: cyril liao





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6507) OrcFile table property names are specified as strings

2014-03-05 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13921103#comment-13921103
 ] 

Owen O'Malley commented on HIVE-6507:
-

This breaks API compatibility. You need to leave the strings in place.

 OrcFile table property names are specified as strings
 -

 Key: HIVE-6507
 URL: https://issues.apache.org/jira/browse/HIVE-6507
 Project: Hive
  Issue Type: Bug
  Components: HCatalog, Serializers/Deserializers
Affects Versions: 0.13.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-6507.patch


 In HIVE-5504, we had to do some special casing in HCatalog to add a 
 particular set of orc table properties from table properties to job 
 properties.
 In doing so, it became obvious that this is a bit cumbersome, and ideally, the 
 list of all orc file table properties should really be an enum, rather than 
 individual loosely tied constant strings. If we were to clean this up, we could 
 clean up other code that references this to reference the entire enum, and 
 avoid future errors when new table properties are introduced but other 
 referencing code is not updated.
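 A sketch of the proposed cleanup (the enum and property names here are 
 illustrative assumptions):
 {code}
 public enum OrcTableProperties {
   COMPRESSION("orc.compress"),
   STRIPE_SIZE("orc.stripe.size"),
   ROW_INDEX_STRIDE("orc.row.index.stride");

   private final String propName;
   OrcTableProperties(String propName) { this.propName = propName; }
   public String getPropName() { return propName; }
 }
 {code}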



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: How long will we support Hadoop 0.20.2?

2014-03-05 Thread Edward Capriolo
It has been a while since we talked about this. How about we start an N-month
clock starting from today? N can be 3 months, not 6.


On Wed, Mar 5, 2014 at 12:09 PM, Ashutosh Chauhan hashut...@apache.orgwrote:

 ~6 months have since gone by. Branch 0.13 has now been cut. Is it OK now to
 drop the 20 shims from trunk? Or, Ed, are you still planning to use Hive on
 Hadoop 0.20.2 6 months from now (or whenever 0.14 gets released)?

 Thanks,
 Ashutosh


 On Thu, Sep 19, 2013 at 7:11 PM, Edward Capriolo edlinuxg...@gmail.com
 wrote:

  Remember that the shim layer will continue to exist.
 
  This is something I would consider:
 
  In 6 months from day x hive will no longer default to building against
  hadoop 0.20. Hadoop 0.20.x will not be officially supported, meaning that
  our build bots will no longer test against 0.20.x. Compatibility with
  hadoop 0.20.x will not be a requirement for any patch. The 0.20 shims
 will
  still remain in trunk. Committers could still accept patches to support
  0.20 as long as they are not detrimental to current supported versions.
 
  What this would mean is if I checked out hive trunk in 5 months and 29 days
  after day x it would still build, compile, and run on hadoop 0.20 and be
  feature complete. At the 6-month mark I lose that guarantee.
 
  On Thursday, September 19, 2013, Edward Capriolo edlinuxg...@gmail.com
  wrote:
   +1 to dropping Hadoop 0.20.2 support in Hive 0.13, which given that
 Hive
   0.12 has just branched means it isn't likely that Hive 0.13 will come
 out
   in the next 6 months.
  
   LOL
   -1. I was not suggesting we drop 0.20.2 support now, so the next hive
  version 0.13 won't have it. That would essentially mean we are dropping
 it
  now.
  
   I was suggesting dropping the 0.20.2 support in 6 months, so whatever
  version STARTED won't have it.
  
   On Thu, Sep 19, 2013 at 11:41 AM, Owen O'Malley omal...@apache.org
  wrote:
  
   +1 to dropping Hadoop 0.20.2 support in Hive 0.13, which given that
 Hive
   0.12 has just branched means it isn't likely that Hive 0.13 will come
  out
   in the next 6 months.
  
   -- Owen
  
  
   On Thu, Sep 19, 2013 at 8:35 AM, Brock Noland br...@cloudera.com
  wrote:
  
First off, I have to apologize, I didn't know there would be such
passions on both sides of the 0.20.2 argument!
   
On Thu, Sep 19, 2013 at 10:11 AM, Edward Capriolo 
  edlinuxg...@gmail.com
wrote:
 That rant being done,
   
No worries man, Hadoop versions are something worth ranting about.
IMHO Hadoop has a history of changing API's and breaking end users.
However, I feel this is improving.
   
 we can not and should not support hadoop 0.20.2
 forever. Discontinuing hadoop 0.20.2 in say 6 months might be
  reasonable,
 but I think dropping it on the floor due to a one line change for
 a
missing
 convenience constructor is a bit knee-jerk.
   
Very sorry if I came across with the opinion that we should drop
0.20.2 now because of the constructor issue. The issue brought up
0.20.2's age in my mind and the logical next step is to ask how long
we plan on supporting it! :) I like the time bounding idea and I
 feel
6 months is reasonable. FWIW, the 1.X series is stable for my needs.
   
Brock
   
  
  
 



Re: Review Request 18757: HIVE-6486 Support secure Subject.doAs() in HiveServer2 JDBC client

2014-03-05 Thread Shivaraju Gowda


 On March 4, 2014, 11:16 p.m., Thejas Nair wrote:
  http://svn.apache.org/repos/asf/hive/trunk/service/src/java/org/apache/hive/service/auth/KerberosSaslHelper.java,
   line 70
  https://reviews.apache.org/r/18757/diff/1/?file=510309#file510309line70
 
  can you fix this indentation? Is it using tabs instead of spaces?
 

Yes, there is a tab in there; both of these blocks (this and the one below) are 
Eclipse-indented (ctrl+i). However, the rest of the code has different 
indentation, so it looks a little odd. I will have to indent the entire file to 
make it consistent.


- Shivaraju


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18757/#review36205
---


On March 4, 2014, 3:14 p.m., Shivaraju Gowda wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/18757/
 ---
 
 (Updated March 4, 2014, 3:14 p.m.)
 
 
 Review request for hive, Kevin Minder, Prasad Mujumdar, Thejas Nair, and 
 Vaibhav Gumashta.
 
 
 Bugs: HIVE-6486
 https://issues.apache.org/jira/browse/HIVE-6486
 
 
 Repository: hive
 
 
 Description
 ---
 
 Support secure Subject.doAs() in HiveServer2 JDBC client.
 
 Original review: https://reviews.apache.org/r/18464/
 
 
 Diffs
 -
 
   
 http://svn.apache.org/repos/asf/hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java
  1574208 
   
 http://svn.apache.org/repos/asf/hive/trunk/service/src/java/org/apache/hive/service/auth/KerberosSaslHelper.java
  1574208 
   
 http://svn.apache.org/repos/asf/hive/trunk/service/src/java/org/apache/hive/service/auth/TSubjectAssumingTransport.java
  PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/18757/diff/
 
 
 Testing
 ---
 
 Manual testing.
 
 
 Thanks,
 
 Shivaraju Gowda
 




[jira] [Updated] (HIVE-6338) Improve exception handling in createDefaultDb() in Metastore

2014-03-05 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6338:
---

Attachment: HIVE-6338.patch

[~brocknoland] Can you take a look?

 Improve exception handling in createDefaultDb() in Metastore
 

 Key: HIVE-6338
 URL: https://issues.apache.org/jira/browse/HIVE-6338
 Project: Hive
  Issue Type: Task
  Components: Metastore
Affects Versions: 0.8.0, 0.9.0, 0.10.0, 0.11.0, 0.12.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
Priority: Blocker
 Fix For: 0.13.0

 Attachments: HIVE-6338.patch


 There is a suggestion on HIVE-5959 comment list on possible improvements.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6338) Improve exception handling in createDefaultDb() in Metastore

2014-03-05 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6338:
---

Status: Patch Available  (was: Open)

 Improve exception handling in createDefaultDb() in Metastore
 

 Key: HIVE-6338
 URL: https://issues.apache.org/jira/browse/HIVE-6338
 Project: Hive
  Issue Type: Task
  Components: Metastore
Affects Versions: 0.12.0, 0.11.0, 0.10.0, 0.9.0, 0.8.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
Priority: Blocker
 Fix For: 0.13.0

 Attachments: HIVE-6338.patch


 There is a suggestion on HIVE-5959 comment list on possible improvements.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 18757: HIVE-6486 Support secure Subject.doAs() in HiveServer2 JDBC client

2014-03-05 Thread Shivaraju Gowda

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18757/
---

(Updated March 5, 2014, 10:30 a.m.)


Review request for hive, Kevin Minder, Prasad Mujumdar, Thejas Nair, and 
Vaibhav Gumashta.


Changes
---

change flag to kerberosAuthType=fromKeyTab and take care of indentation in 
KerberosSaslHelper.


Bugs: HIVE-6486
https://issues.apache.org/jira/browse/HIVE-6486


Repository: hive


Description
---

Support secure Subject.doAs() in HiveServer2 JDBC client.

Original review: https://reviews.apache.org/r/18464/


Diffs (updated)
-

  
http://svn.apache.org/repos/asf/hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java
 1574208 
  
http://svn.apache.org/repos/asf/hive/trunk/service/src/java/org/apache/hive/service/auth/KerberosSaslHelper.java
 1574208 
  
http://svn.apache.org/repos/asf/hive/trunk/service/src/java/org/apache/hive/service/auth/TSubjectAssumingTransport.java
 PRE-CREATION 

Diff: https://reviews.apache.org/r/18757/diff/


Testing
---

Manual testing.


Thanks,

Shivaraju Gowda



[jira] [Commented] (HIVE-6338) Improve exception handling in createDefaultDb() in Metastore

2014-03-05 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13921167#comment-13921167
 ] 

Brock Noland commented on HIVE-6338:


+1

 Improve exception handling in createDefaultDb() in Metastore
 

 Key: HIVE-6338
 URL: https://issues.apache.org/jira/browse/HIVE-6338
 Project: Hive
  Issue Type: Task
  Components: Metastore
Affects Versions: 0.8.0, 0.9.0, 0.10.0, 0.11.0, 0.12.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
Priority: Blocker
 Fix For: 0.13.0

 Attachments: HIVE-6338.patch


 There is a suggestion on HIVE-5959 comment list on possible improvements.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6432) Remove deprecated methods in HCatalog

2014-03-05 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-6432:
---

Release Note: 
Removes all org.apache.hcatalog.* components and removes all functions and 
methods that were deprecated on or before hive-0.12.

Warning note: in particular, this removes HBaseHCatStorageHandler altogether.
Hadoop Flags: Incompatible change
  Status: Patch Available  (was: Open)

 Remove deprecated methods in HCatalog
 -

 Key: HIVE-6432
 URL: https://issues.apache.org/jira/browse/HIVE-6432
 Project: Hive
  Issue Type: Task
  Components: HCatalog
Affects Versions: 0.14.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-6432.patch, HIVE-6432.wip.1.patch, 
 HIVE-6432.wip.2.patch, hcat.6432.test.out


 There are a lot of methods in HCatalog that have been deprecated in HCatalog 
 0.5, and some that were recently deprecated in Hive 0.11 (joint release with 
 HCatalog).
 The goal for HCatalog deprecation is that in general, after something has 
 been deprecated, it is expected to stay around for 2 releases, which means 
 hive-0.13 will be the last release to ship with all the methods that were 
 deprecated in hive-0.11 (the org.apache.hcatalog.* files should all be 
 removed afterwards), and it is also good for us to clean out and nuke all 
 other older deprecated methods.
 We should take this on early in a dev/release cycle to allow us time to 
 resolve all fallout, so I propose that we remove all HCatalog deprecated 
 methods after we branch out 0.13 and 0.14 becomes trunk.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6432) Remove deprecated methods in HCatalog

2014-03-05 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-6432:
---

Affects Version/s: 0.14.0

 Remove deprecated methods in HCatalog
 -

 Key: HIVE-6432
 URL: https://issues.apache.org/jira/browse/HIVE-6432
 Project: Hive
  Issue Type: Task
  Components: HCatalog
Affects Versions: 0.14.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-6432.patch, HIVE-6432.wip.1.patch, 
 HIVE-6432.wip.2.patch, hcat.6432.test.out


 There are a lot of methods in HCatalog that have been deprecated in HCatalog 
 0.5, and some that were recently deprecated in Hive 0.11 (joint release with 
 HCatalog).
 The goal for HCatalog deprecation is that in general, after something has 
 been deprecated, it is expected to stay around for 2 releases, which means 
 hive-0.13 will be the last release to ship with all the methods that were 
 deprecated in hive-0.11 (the org.apache.hcatalog.* files should all be 
 removed afterwards), and it is also good for us to clean out and nuke all 
 other older deprecated methods.
 We should take this on early in a dev/release cycle to allow us time to 
 resolve all fallout, so I propose that we remove all HCatalog deprecated 
 methods after we branch out 0.13 and 0.14 becomes trunk.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6338) Improve exception handling in createDefaultDb() in Metastore

2014-03-05 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921184#comment-13921184
 ] 

Sergey Shelukhin commented on HIVE-6338:


+1

 Improve exception handling in createDefaultDb() in Metastore
 

 Key: HIVE-6338
 URL: https://issues.apache.org/jira/browse/HIVE-6338
 Project: Hive
  Issue Type: Task
  Components: Metastore
Affects Versions: 0.8.0, 0.9.0, 0.10.0, 0.11.0, 0.12.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
Priority: Blocker
 Fix For: 0.13.0

 Attachments: HIVE-6338.patch


 There is a suggestion on the HIVE-5959 comment list about possible improvements.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6486) Support secure Subject.doAs() in HiveServer2 JDBC client.

2014-03-05 Thread Shivaraju Gowda (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921185#comment-13921185
 ] 

Shivaraju Gowda commented on HIVE-6486:
---

Vaibhav Gumashta: I am OK with the new flag. I have updated the patch 
accordingly and addressed the indentation which Thejas pointed out in the 
review.


 Support secure Subject.doAs() in HiveServer2 JDBC client.
 -

 Key: HIVE-6486
 URL: https://issues.apache.org/jira/browse/HIVE-6486
 Project: Hive
  Issue Type: Improvement
  Components: Authentication, HiveServer2, JDBC
Affects Versions: 0.11.0, 0.12.0
Reporter: Shivaraju Gowda
Assignee: Shivaraju Gowda
 Fix For: 0.13.0

 Attachments: HIVE-6486.1.patch, HIVE-6486.2.patch, 
 Hive_011_Support-Subject_doAS.patch, TestHive_SujectDoAs.java


 HIVE-5155 addresses the problem of Kerberos authentication in a multi-user 
 middleware server using a proxy user.  In this mode the principal used by the 
 middleware server has privileges to impersonate selected users in 
 Hive/Hadoop. 
 This enhancement is to support Subject.doAs() authentication in the Hive JDBC 
 layer so that the end user's Kerberos Subject is passed through the 
 middleware server. With this improvement there won't be any additional setup 
 in the server to grant proxy privileges to some users, and there won't be a 
 need to specify a proxy user in the JDBC client. This version should also be 
 more secure since it won't require principals with the privileges to 
 impersonate other users in the Hive/Hadoop setup.
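 As a rough illustration of the intent, a middleware server could open the 
 connection inside the end user's security context along these lines (a 
 minimal sketch; the JAAS configuration name, the URL, and the 
 kerberosAuthType flag value follow the review discussion and are assumptions, 
 not the committed API):
 {code}
 import java.security.PrivilegedExceptionAction;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import javax.security.auth.Subject;
 import javax.security.auth.login.LoginContext;

 public class SubjectDoAsJdbcSketch {
   public static void main(String[] args) throws Exception {
     // Obtain the end user's Kerberos credentials via JAAS; the login
     // configuration name "HiveClient" is a placeholder.
     LoginContext login = new LoginContext("HiveClient");
     login.login();
     Subject subject = login.getSubject();

     // Open the JDBC connection inside the user's security context, so the
     // middleware server needs no privileges to impersonate other users.
     Connection conn = Subject.doAs(subject,
         new PrivilegedExceptionAction<Connection>() {
           public Connection run() throws Exception {
             return DriverManager.getConnection(
                 "jdbc:hive2://host:10000/default;"
                 + "principal=hive/host@EXAMPLE.COM;"
                 + "kerberosAuthType=fromKeyTab");
           }
         });
     conn.close();
   }
 }
 {code}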
  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6432) Remove deprecated methods in HCatalog

2014-03-05 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921189#comment-13921189
 ] 

Ashutosh Chauhan commented on HIVE-6432:


+1

 Remove deprecated methods in HCatalog
 -

 Key: HIVE-6432
 URL: https://issues.apache.org/jira/browse/HIVE-6432
 Project: Hive
  Issue Type: Task
  Components: HCatalog
Affects Versions: 0.14.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-6432.patch, HIVE-6432.wip.1.patch, 
 HIVE-6432.wip.2.patch, hcat.6432.test.out


 There are a lot of methods in HCatalog that have been deprecated in HCatalog 
 0.5, and some that were recently deprecated in Hive 0.11 (joint release with 
 HCatalog).
 The goal for HCatalog deprecation is that in general, after something has 
 been deprecated, it is expected to stay around for 2 releases, which means 
 hive-0.13 will be the last release to ship with all the methods that were 
 deprecated in hive-0.11 (the org.apache.hcatalog.* files should all be 
 removed afterwards), and it is also good for us to clean out and nuke all 
 other older deprecated methods.
 We should take this on early in a dev/release cycle to allow us time to 
 resolve all fallout, so I propose that we remove all HCatalog deprecated 
 methods after we branch out 0.13 and 0.14 becomes trunk.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6541) Need to write documentation for ACID work

2014-03-05 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-6541:
-

Attachment: hive-6541.txt

Here's a .txt draft.  No, it's quite different than the pdf attached to 5317.  
This is intended to be user-facing documentation, whereas 5317 is a design doc.

 Need to write documentation for ACID work
 -

 Key: HIVE-6541
 URL: https://issues.apache.org/jira/browse/HIVE-6541
 Project: Hive
  Issue Type: Sub-task
  Components: Documentation
Affects Versions: 0.13.0
Reporter: Alan Gates
Assignee: Alan Gates
 Fix For: 0.13.0

 Attachments: hive-6541.txt


 ACID introduces a number of new config file options, tables in the metastore, 
 keywords in the grammar, and a new interface for use by tools like Storm and 
 Flume.  These need to be documented.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6522) AVG() failure with decimal type

2014-03-05 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921195#comment-13921195
 ] 

Jason Dere commented on HIVE-6522:
--

Yeah, this looks like it's fixed, thanks Xuefu.
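For reference, the "Zero length BigInteger" failure in the quoted stack trace 
below boils down to handing BigInteger an empty big-endian byte array. A 
standalone reproduction (the empty array standing in for the writable's 
internal bytes is an assumption for illustration):
{code}
public class ZeroLengthBigIntegerRepro {
  public static void main(String[] args) {
    byte[] internalStorage = new byte[0];
    // Throws java.lang.NumberFormatException: Zero length BigInteger,
    // matching the first frame of the quoted trace.
    new java.math.BigInteger(internalStorage);
  }
}
{code}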

 AVG() failure with decimal type
 ---

 Key: HIVE-6522
 URL: https://issues.apache.org/jira/browse/HIVE-6522
 Project: Hive
  Issue Type: Bug
  Components: UDF
Affects Versions: 0.13.0
Reporter: Jason Dere

 The following test fails:
 {code}
 hive> describe dec4;
 OK
 key                   string          from deserializer   
 c1                    string          from deserializer   
 c2                    decimal(10,2)   from deserializer   
 Time taken: 0.716 seconds, Fetched: 3 row(s)
 hive> select * from dec4;
 OK
 484    484     484
 98     NULL    NULL
 278    NULL    NULL
 255    255     255
 409    NULL    NULL
 165    165     165
 27     27      27
 311    NULL    NULL
 86     NULL    NULL
 238    NULL    NULL
 Time taken: 0.262 seconds, Fetched: 10 row(s)
 hive> select avg(cast(key as decimal(3,0))) from dec4;
 ...
 Task failed!
 Task ID:
   Stage-1
 Logs:
 /tmp/jdere/hive.log
 FAILED: Execution Error, return code 2 from 
 org.apache.hadoop.hive.ql.exec.mr.MapRedTask
 {code}
 The logs show the following stack trace. 
 {noformat}
 java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: 
 Hive Runtime Error while processing row (tag=0) [Error getting row data with 
 exception java.lang.NumberFormatException: Zero length BigInteger
   at java.math.BigInteger.<init>(BigInteger.java:171)
   at 
 org.apache.hadoop.hive.serde2.io.HiveDecimalWritable.getHiveDecimal(HiveDecimalWritable.java:85)
   at 
 org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableHiveDecimalObjectInspector.getPrimitiveJavaObject(WritableHiveDecimalObjectInspector.java:43)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:322)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:392)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:392)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:392)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.getJSONString(SerDeUtils.java:236)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.getJSONString(SerDeUtils.java:222)
   at 
 org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:265)
   at 
 org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:462)
   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:443)
  ]
   at 
 org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:282)
   at 
 org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:462)
   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:443)
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
 Error while processing row (tag=0) [Error getting row data with exception 
 java.lang.NumberFormatException: Zero length BigInteger
   at java.math.BigInteger.<init>(BigInteger.java:171)
   at 
 org.apache.hadoop.hive.serde2.io.HiveDecimalWritable.getHiveDecimal(HiveDecimalWritable.java:85)
   at 
 org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableHiveDecimalObjectInspector.getPrimitiveJavaObject(WritableHiveDecimalObjectInspector.java:43)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:322)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:392)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:392)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:392)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.getJSONString(SerDeUtils.java:236)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.getJSONString(SerDeUtils.java:222)
   at 
 org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:265)
   at 
 org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:462)
   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:443)
  ]
   at 
 org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:270)
   ... 3 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
 java.lang.NumberFormatException: Zero length BigInteger
   at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:808)
   at 
 

[jira] [Resolved] (HIVE-6522) AVG() failure with decimal type

2014-03-05 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere resolved HIVE-6522.
--

   Resolution: Fixed
Fix Version/s: 0.13.0

 AVG() failure with decimal type
 ---

 Key: HIVE-6522
 URL: https://issues.apache.org/jira/browse/HIVE-6522
 Project: Hive
  Issue Type: Bug
  Components: UDF
Affects Versions: 0.13.0
Reporter: Jason Dere
 Fix For: 0.13.0


 The following test fails:
 {code}
 hive> describe dec4;
 OK
 key                   string          from deserializer   
 c1                    string          from deserializer   
 c2                    decimal(10,2)   from deserializer   
 Time taken: 0.716 seconds, Fetched: 3 row(s)
 hive> select * from dec4;
 OK
 484    484     484
 98     NULL    NULL
 278    NULL    NULL
 255    255     255
 409    NULL    NULL
 165    165     165
 27     27      27
 311    NULL    NULL
 86     NULL    NULL
 238    NULL    NULL
 Time taken: 0.262 seconds, Fetched: 10 row(s)
 hive> select avg(cast(key as decimal(3,0))) from dec4;
 ...
 Task failed!
 Task ID:
   Stage-1
 Logs:
 /tmp/jdere/hive.log
 FAILED: Execution Error, return code 2 from 
 org.apache.hadoop.hive.ql.exec.mr.MapRedTask
 {code}
 The logs show the following stack trace. 
 {noformat}
 java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: 
 Hive Runtime Error while processing row (tag=0) [Error getting row data with 
 exception java.lang.NumberFormatException: Zero length BigInteger
   at java.math.BigInteger.<init>(BigInteger.java:171)
   at 
 org.apache.hadoop.hive.serde2.io.HiveDecimalWritable.getHiveDecimal(HiveDecimalWritable.java:85)
   at 
 org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableHiveDecimalObjectInspector.getPrimitiveJavaObject(WritableHiveDecimalObjectInspector.java:43)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:322)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:392)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:392)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:392)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.getJSONString(SerDeUtils.java:236)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.getJSONString(SerDeUtils.java:222)
   at 
 org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:265)
   at 
 org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:462)
   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:443)
  ]
   at 
 org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:282)
   at 
 org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:462)
   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:443)
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
 Error while processing row (tag=0) [Error getting row data with exception 
 java.lang.NumberFormatException: Zero length BigInteger
   at java.math.BigInteger.<init>(BigInteger.java:171)
   at 
 org.apache.hadoop.hive.serde2.io.HiveDecimalWritable.getHiveDecimal(HiveDecimalWritable.java:85)
   at 
 org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableHiveDecimalObjectInspector.getPrimitiveJavaObject(WritableHiveDecimalObjectInspector.java:43)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:322)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:392)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:392)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:392)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.getJSONString(SerDeUtils.java:236)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.getJSONString(SerDeUtils.java:222)
   at 
 org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:265)
   at 
 org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:462)
   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:443)
  ]
   at 
 org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:270)
   ... 3 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
 java.lang.NumberFormatException: Zero length BigInteger
   at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:808)
   at 
 

[jira] [Commented] (HIVE-6486) Support secure Subject.doAs() in HiveServer2 JDBC client.

2014-03-05 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921199#comment-13921199
 ] 

Vaibhav Gumashta commented on HIVE-6486:


[~shivshi] Cool, thanks so much for the effort. New patch looks good to me.

 Support secure Subject.doAs() in HiveServer2 JDBC client.
 -

 Key: HIVE-6486
 URL: https://issues.apache.org/jira/browse/HIVE-6486
 Project: Hive
  Issue Type: Improvement
  Components: Authentication, HiveServer2, JDBC
Affects Versions: 0.11.0, 0.12.0
Reporter: Shivaraju Gowda
Assignee: Shivaraju Gowda
 Fix For: 0.13.0

 Attachments: HIVE-6486.1.patch, HIVE-6486.2.patch, 
 Hive_011_Support-Subject_doAS.patch, TestHive_SujectDoAs.java


 HIVE-5155 addresses the problem of Kerberos authentication in a multi-user 
 middleware server using a proxy user.  In this mode the principal used by the 
 middleware server has privileges to impersonate selected users in 
 Hive/Hadoop. 
 This enhancement is to support Subject.doAs() authentication in the Hive JDBC 
 layer so that the end user's Kerberos Subject is passed through the 
 middleware server. With this improvement there won't be any additional setup 
 in the server to grant proxy privileges to some users, and there won't be a 
 need to specify a proxy user in the JDBC client. This version should also be 
 more secure since it won't require principals with the privileges to 
 impersonate other users in the Hive/Hadoop setup.
  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6486) Support secure Subject.doAs() in HiveServer2 JDBC client.

2014-03-05 Thread Shivaraju Gowda (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivaraju Gowda updated HIVE-6486:
--

Attachment: HIVE-6486.3.patch

 Support secure Subject.doAs() in HiveServer2 JDBC client.
 -

 Key: HIVE-6486
 URL: https://issues.apache.org/jira/browse/HIVE-6486
 Project: Hive
  Issue Type: Improvement
  Components: Authentication, HiveServer2, JDBC
Affects Versions: 0.11.0, 0.12.0
Reporter: Shivaraju Gowda
Assignee: Shivaraju Gowda
 Fix For: 0.13.0

 Attachments: HIVE-6486.1.patch, HIVE-6486.2.patch, HIVE-6486.3.patch, 
 Hive_011_Support-Subject_doAS.patch, TestHive_SujectDoAs.java


 HIVE-5155 addresses the problem of Kerberos authentication in a multi-user 
 middleware server using a proxy user.  In this mode the principal used by the 
 middleware server has privileges to impersonate selected users in 
 Hive/Hadoop. 
 This enhancement is to support Subject.doAs() authentication in the Hive JDBC 
 layer so that the end user's Kerberos Subject is passed through the 
 middleware server. With this improvement there won't be any additional setup 
 in the server to grant proxy privileges to some users, and there won't be a 
 need to specify a proxy user in the JDBC client. This version should also be 
 more secure since it won't require principals with the privileges to 
 impersonate other users in the Hive/Hadoop setup.
  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6486) Support secure Subject.doAs() in HiveServer2 JDBC client.

2014-03-05 Thread Shivaraju Gowda (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivaraju Gowda updated HIVE-6486:
--

Status: Open  (was: Patch Available)

 Support secure Subject.doAs() in HiveServer2 JDBC client.
 -

 Key: HIVE-6486
 URL: https://issues.apache.org/jira/browse/HIVE-6486
 Project: Hive
  Issue Type: Improvement
  Components: Authentication, HiveServer2, JDBC
Affects Versions: 0.12.0, 0.11.0
Reporter: Shivaraju Gowda
Assignee: Shivaraju Gowda
 Fix For: 0.13.0

 Attachments: HIVE-6486.1.patch, HIVE-6486.2.patch, HIVE-6486.3.patch, 
 Hive_011_Support-Subject_doAS.patch, TestHive_SujectDoAs.java


 HIVE-5155 addresses the problem of Kerberos authentication in a multi-user 
 middleware server using a proxy user.  In this mode the principal used by the 
 middleware server has privileges to impersonate selected users in 
 Hive/Hadoop. 
 This enhancement is to support Subject.doAs() authentication in the Hive JDBC 
 layer so that the end user's Kerberos Subject is passed through the 
 middleware server. With this improvement there won't be any additional setup 
 in the server to grant proxy privileges to some users, and there won't be a 
 need to specify a proxy user in the JDBC client. This version should also be 
 more secure since it won't require principals with the privileges to 
 impersonate other users in the Hive/Hadoop setup.
  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6486) Support secure Subject.doAs() in HiveServer2 JDBC client.

2014-03-05 Thread Shivaraju Gowda (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivaraju Gowda updated HIVE-6486:
--

Status: Patch Available  (was: Open)

 Support secure Subject.doAs() in HiveServer2 JDBC client.
 -

 Key: HIVE-6486
 URL: https://issues.apache.org/jira/browse/HIVE-6486
 Project: Hive
  Issue Type: Improvement
  Components: Authentication, HiveServer2, JDBC
Affects Versions: 0.12.0, 0.11.0
Reporter: Shivaraju Gowda
Assignee: Shivaraju Gowda
 Fix For: 0.13.0

 Attachments: HIVE-6486.1.patch, HIVE-6486.2.patch, HIVE-6486.3.patch, 
 Hive_011_Support-Subject_doAS.patch, TestHive_SujectDoAs.java


 HIVE-5155 addresses the problem of Kerberos authentication in a multi-user 
 middleware server using a proxy user.  In this mode the principal used by the 
 middleware server has privileges to impersonate selected users in 
 Hive/Hadoop. 
 This enhancement is to support Subject.doAs() authentication in the Hive JDBC 
 layer so that the end user's Kerberos Subject is passed through the 
 middleware server. With this improvement there won't be any additional setup 
 in the server to grant proxy privileges to some users, and there won't be a 
 need to specify a proxy user in the JDBC client. This version should also be 
 more secure since it won't require principals with the privileges to 
 impersonate other users in the Hive/Hadoop setup.
  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6507) OrcFile table property names are specified as strings

2014-03-05 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921222#comment-13921222
 ] 

Sushanth Sowmyan commented on HIVE-6507:


Ah, ok, how about if we reintroduce the strings, but mark them deprecated so 
that new parameters are henceforth introduced in the enum instead, as in the 
.2.patch?
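A minimal sketch of that shape (member and property names here are 
illustrative, not the actual OrcFile fields):
{code}
// One enum owns the property names; the old String constants survive only
// as deprecated aliases, so existing callers keep compiling.
public final class OrcTablePropertiesSketch {

  public enum OrcTableProperty {
    COMPRESSION("orc.compress"),
    STRIPE_SIZE("orc.stripe.size");

    private final String propName;

    OrcTableProperty(String propName) { this.propName = propName; }

    public String getPropName() { return propName; }
  }

  /** @deprecated use {@link OrcTableProperty#COMPRESSION} instead. */
  @Deprecated
  public static final String COMPRESSION =
      OrcTableProperty.COMPRESSION.getPropName();

  private OrcTablePropertiesSketch() {}
}
{code}
Code that needs every property, such as the HCatalog job-property copying from 
HIVE-5504, can then iterate OrcTableProperty.values() instead of maintaining a 
parallel list of strings.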

 OrcFile table property names are specified as strings
 -

 Key: HIVE-6507
 URL: https://issues.apache.org/jira/browse/HIVE-6507
 Project: Hive
  Issue Type: Bug
  Components: HCatalog, Serializers/Deserializers
Affects Versions: 0.13.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-6507.2.patch, HIVE-6507.patch


 In HIVE-5504, we had to do some special casing in HCatalog to add a 
 particular set of orc table properties from table properties to job 
 properties.
 In doing so, it's obvious that that is a bit cumbersome, and ideally, the 
 list of all orc file table properties should really be an enum, rather than 
 individual loosely tied constant strings. If we were to clean this up, we can 
 clean up other code that references this to reference the entire enum, and 
 avoid future errors when new table properties are introduced, but other 
 referencing code is not updated.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6507) OrcFile table property names are specified as strings

2014-03-05 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-6507:
---

Attachment: HIVE-6507.2.patch

 OrcFile table property names are specified as strings
 -

 Key: HIVE-6507
 URL: https://issues.apache.org/jira/browse/HIVE-6507
 Project: Hive
  Issue Type: Bug
  Components: HCatalog, Serializers/Deserializers
Affects Versions: 0.13.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-6507.2.patch, HIVE-6507.patch


 In HIVE-5504, we had to do some special casing in HCatalog to add a 
 particular set of orc table properties from table properties to job 
 properties.
 In doing so, it's obvious that that is a bit cumbersome, and ideally, the 
 list of all orc file table properties should really be an enum, rather than 
 individual loosely tied constant strings. If we were to clean this up, we can 
 clean up other code that references this to reference the entire enum, and 
 avoid future errors when new table properties are introduced, but other 
 referencing code is not updated.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6486) Support secure Subject.doAs() in HiveServer2 JDBC client.

2014-03-05 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921223#comment-13921223
 ] 

Thejas M Nair commented on HIVE-6486:
-

+1 for updated patch - HIVE-6486.2.patch


 Support secure Subject.doAs() in HiveServer2 JDBC client.
 -

 Key: HIVE-6486
 URL: https://issues.apache.org/jira/browse/HIVE-6486
 Project: Hive
  Issue Type: Improvement
  Components: Authentication, HiveServer2, JDBC
Affects Versions: 0.11.0, 0.12.0
Reporter: Shivaraju Gowda
Assignee: Shivaraju Gowda
 Fix For: 0.13.0

 Attachments: HIVE-6486.1.patch, HIVE-6486.2.patch, HIVE-6486.3.patch, 
 Hive_011_Support-Subject_doAS.patch, TestHive_SujectDoAs.java


 HIVE-5155 addresses the problem of Kerberos authentication in a multi-user 
 middleware server using a proxy user.  In this mode the principal used by the 
 middleware server has privileges to impersonate selected users in 
 Hive/Hadoop. 
 This enhancement is to support Subject.doAs() authentication in the Hive JDBC 
 layer so that the end user's Kerberos Subject is passed through the 
 middleware server. With this improvement there won't be any additional setup 
 in the server to grant proxy privileges to some users, and there won't be a 
 need to specify a proxy user in the JDBC client. This version should also be 
 more secure since it won't require principals with the privileges to 
 impersonate other users in the Hive/Hadoop setup.
  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6507) OrcFile table property names are specified as strings

2014-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921227#comment-13921227
 ] 

Hive QA commented on HIVE-6507:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12631113/HIVE-6507.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 5354 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_auto_sortmerge_join_16
org.apache.hive.beeline.TestSchemaTool.testSchemaInit
org.apache.hive.beeline.TestSchemaTool.testSchemaUpgrade
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1631/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1631/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12631113

 OrcFile table property names are specified as strings
 -

 Key: HIVE-6507
 URL: https://issues.apache.org/jira/browse/HIVE-6507
 Project: Hive
  Issue Type: Bug
  Components: HCatalog, Serializers/Deserializers
Affects Versions: 0.13.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-6507.2.patch, HIVE-6507.patch


 In HIVE-5504, we had to do some special casing in HCatalog to add a 
 particular set of orc table properties from table properties to job 
 properties.
 In doing so, it's obvious that that is a bit cumbersome, and ideally, the 
 list of all orc file table properties should really be an enum, rather than 
 individual loosely tied constant strings. If we were to clean this up, we can 
 clean up other code that references this to reference the entire enum, and 
 avoid future errors when new table properties are introduced, but other 
 referencing code is not updated.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-5931) SQL std auth - add metastore get_role_participants api - to support DESCRIBE ROLE

2014-03-05 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5931:


Attachment: HIVE-5931.thriftapi.2.patch

 SQL std auth - add metastore get_role_participants api - to support DESCRIBE 
 ROLE
 -

 Key: HIVE-5931
 URL: https://issues.apache.org/jira/browse/HIVE-5931
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
 Attachments: HIVE-5931.thriftapi.2.patch, 
 HIVE-5931.thriftapi.followup.patch, HIVE-5931.thriftapi.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 This is necessary for the DESCRIBE ROLE <role> statement. This will list
 all users and roles that participate in a role. 
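 A rough sketch of the client-side shape such a call could take (the type and 
 method below are hypothetical, derived from the issue title rather than the 
 committed Thrift definition):
 {code}
 // Hypothetical shape of the metastore call backing DESCRIBE ROLE;
 // names and fields are illustrative only.
 public class RoleParticipant {
   public String principalName;  // user or role granted into the role
   public String principalType;  // "USER" or "ROLE"
   public boolean grantOption;   // whether it may grant the role onward
 }

 interface MetastoreRoleClient {
   java.util.List<RoleParticipant> get_role_participants(String roleName)
       throws Exception;
 }
 {code}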



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-5931) SQL std auth - add metastore get_role_participants api - to support DESCRIBE ROLE

2014-03-05 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921242#comment-13921242
 ] 

Ashutosh Chauhan commented on HIVE-5931:


Don't see {{listRoleGrant get_role_grants_for_principal}} in the latest patch. 
Do you want to do that in a separate patch?

 SQL std auth - add metastore get_role_participants api - to support DESCRIBE 
 ROLE
 -

 Key: HIVE-5931
 URL: https://issues.apache.org/jira/browse/HIVE-5931
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
 Attachments: HIVE-5931.thriftapi.2.patch, 
 HIVE-5931.thriftapi.followup.patch, HIVE-5931.thriftapi.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 This is necessary for the DESCRIBE ROLE <role> statement. This will list
 all users and roles that participate in a role. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-6557) TestSchemaTool tests are failing

2014-03-05 Thread Vikram Dixit K (JIRA)
Vikram Dixit K created HIVE-6557:


 Summary: TestSchemaTool tests are failing
 Key: HIVE-6557
 URL: https://issues.apache.org/jira/browse/HIVE-6557
 Project: Hive
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.14.0
Reporter: Vikram Dixit K






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-6556) bucketizedhiveinputformat.q fails due to difference in golden file output

2014-03-05 Thread Vikram Dixit K (JIRA)
Vikram Dixit K created HIVE-6556:


 Summary: bucketizedhiveinputformat.q fails due to difference in 
golden file output
 Key: HIVE-6556
 URL: https://issues.apache.org/jira/browse/HIVE-6556
 Project: Hive
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.13.0, 0.14.0
Reporter: Vikram Dixit K


Looks like this test needs a golden file update.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6325) Enable using multiple concurrent sessions in tez

2014-03-05 Thread Vikram Dixit K (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921275#comment-13921275
 ] 

Vikram Dixit K commented on HIVE-6325:
--

Unrelated test failures. Created HIVE-6556 for the bucketizedhiveinputformat 
failure and HIVE-6557 for the TestSchemaTool failures. 

 Enable using multiple concurrent sessions in tez
 

 Key: HIVE-6325
 URL: https://issues.apache.org/jira/browse/HIVE-6325
 Project: Hive
  Issue Type: Improvement
  Components: Tez
Affects Versions: 0.13.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Attachments: HIVE-6325.1.patch, HIVE-6325.10.patch, 
 HIVE-6325.11.patch, HIVE-6325.2.patch, HIVE-6325.3.patch, HIVE-6325.4.patch, 
 HIVE-6325.5.patch, HIVE-6325.6.patch, HIVE-6325.7.patch, HIVE-6325.8.patch, 
 HIVE-6325.9.patch


 We would like to enable multiple concurrent sessions in Tez via HiveServer2. 
 This will enable users to make efficient use of the cluster when it has been 
 partitioned using YARN queues.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-5931) SQL std auth - add metastore get_role_participants api - to support DESCRIBE ROLE

2014-03-05 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921280#comment-13921280
 ] 

Thejas M Nair commented on HIVE-5931:
-

Yes, I was planning to do that as part of HIVE-6547.

 SQL std auth - add metastore get_role_participants api - to support DESCRIBE 
 ROLE
 -

 Key: HIVE-5931
 URL: https://issues.apache.org/jira/browse/HIVE-5931
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
 Attachments: HIVE-5931.thriftapi.2.patch, 
 HIVE-5931.thriftapi.followup.patch, HIVE-5931.thriftapi.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 This is necessary for the DESCRIBE ROLE <role> statement. This will list
 all users and roles that participate in a role. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6538) yet another annoying exception in test logs

2014-03-05 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921279#comment-13921279
 ] 

Jason Dere commented on HIVE-6538:
--

Hi [~szehon], given that the logic in 
FunctionRegistry.getFunctionInfoFromMetastore() does not propagate the 
exception, I did feel like in the general case we should log exceptions, though 
I could see us not doing that for that particular error.  I suppose there could 
be a couple of ways to change getFunctionInfoFromMetastore():
1. Set the logging level to DEBUG (not sure if the message will still show up 
in the test logs).
2. Add a special catch for NoSuchObjectException, so that particular error 
is not logged (see the sketch below).
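A minimal sketch of option (2), with stand-ins for Hive's own classes (the 
Metastore interface and exception type below are placeholders, not the actual 
FunctionRegistry code):
{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// "Function does not exist" is expected and logged at DEBUG; anything
// unexpected still lands at ERROR.
public class QuietFunctionLookup {

  private static final Log LOG = LogFactory.getLog(QuietFunctionLookup.class);

  static class NoSuchObjectException extends Exception {
    NoSuchObjectException(String message) { super(message); }
  }

  interface Metastore {
    Object getFunction(String dbName, String funcName) throws Exception;
  }

  static Object getFunctionInfoFromMetastore(Metastore store,
      String dbName, String funcName) {
    try {
      return store.getFunction(dbName, funcName);
    } catch (NoSuchObjectException e) {
      // Expected when the function simply isn't registered; keep logs clean.
      LOG.debug("Function " + dbName + "." + funcName + " does not exist");
      return null;
    } catch (Exception e) {
      LOG.error("Unable to look up function " + dbName + "." + funcName, e);
      return null;
    }
  }
}
{code}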

 yet another annoying exception in test logs
 ---

 Key: HIVE-6538
 URL: https://issues.apache.org/jira/browse/HIVE-6538
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Szehon Ho
Priority: Trivial
 Attachments: HIVE-6538.patch


 Whenever you look at failed q tests you have to go thru this useless 
 exception.
 {noformat}
 2014-03-03 11:22:54,872 ERROR metastore.RetryingHMSHandler 
 (RetryingHMSHandler.java:invoke(143)) - 
 MetaException(message:NoSuchObjectException(message:Function 
 default.qtest_get_java_boolean does not exist))
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newMetaException(HiveMetaStore.java:4575)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_function(HiveMetaStore.java:4702)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
   at $Proxy8.get_function(Unknown Source)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getFunction(HiveMetaStoreClient.java:1526)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
   at $Proxy9.getFunction(Unknown Source)
   at org.apache.hadoop.hive.ql.metadata.Hive.getFunction(Hive.java:2603)
   at 
 org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfoFromMetastore(FunctionRegistry.java:546)
   at 
 org.apache.hadoop.hive.ql.exec.FunctionRegistry.getQualifiedFunctionInfo(FunctionRegistry.java:578)
   at 
 org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfo(FunctionRegistry.java:599)
   at 
 org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfo(FunctionRegistry.java:606)
   at 
 org.apache.hadoop.hive.ql.parse.FunctionSemanticAnalyzer.analyzeDropFunction(FunctionSemanticAnalyzer.java:94)
   at 
 org.apache.hadoop.hive.ql.parse.FunctionSemanticAnalyzer.analyzeInternal(FunctionSemanticAnalyzer.java:60)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:445)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:345)
   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1078)
   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1121)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1014)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1004)
   at org.apache.hadoop.hive.ql.QTestUtil.runCmd(QTestUtil.java:655)
   at org.apache.hadoop.hive.ql.QTestUtil.createSources(QTestUtil.java:772)
   at 
 org.apache.hadoop.hive.cli.TestCliDriver.<clinit>(TestCliDriver.java:46)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.internal.runners.SuiteMethod.testFromSuiteMethod(SuiteMethod.java:34)
 at org.junit.internal.runners.SuiteMethod.<init>(SuiteMethod.java:23)
   at 
 org.junit.internal.builders.SuiteMethodBuilder.runnerForClass(SuiteMethodBuilder.java:14)
   at 
 org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:57)
   at 
 

[jira] [Commented] (HIVE-6541) Need to write documentation for ACID work

2014-03-05 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921278#comment-13921278
 ] 

Lefty Leverenz commented on HIVE-6541:
--

Thanks, I've made a copy and I'll review it this evening.

 Need to write documentation for ACID work
 -

 Key: HIVE-6541
 URL: https://issues.apache.org/jira/browse/HIVE-6541
 Project: Hive
  Issue Type: Sub-task
  Components: Documentation
Affects Versions: 0.13.0
Reporter: Alan Gates
Assignee: Alan Gates
 Fix For: 0.13.0

 Attachments: hive-6541.txt


 ACID introduces a number of new config file options, tables in the metastore, 
 keywords in the grammar, and a new interface for use by tools like Storm and 
 Flume.  These need to be documented.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HIVE-6557) TestSchemaTool tests are failing

2014-03-05 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan resolved HIVE-6557.


Resolution: Duplicate

Also, tracked in HIVE-6555

 TestSchemaTool tests are failing
 

 Key: HIVE-6557
 URL: https://issues.apache.org/jira/browse/HIVE-6557
 Project: Hive
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.14.0
Reporter: Vikram Dixit K





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-5931) SQL std auth - add metastore get_role_participants api - to support DESCRIBE ROLE

2014-03-05 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921287#comment-13921287
 ] 

Ashutosh Chauhan commented on HIVE-5931:


OK. Current proposal looks good.

 SQL std auth - add metastore get_role_participants api - to support DESCRIBE 
 ROLE
 -

 Key: HIVE-5931
 URL: https://issues.apache.org/jira/browse/HIVE-5931
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
 Attachments: HIVE-5931.thriftapi.2.patch, 
 HIVE-5931.thriftapi.followup.patch, HIVE-5931.thriftapi.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 This is necessary for the DESCRIBE ROLE <role> statement. This will list
 all users and roles that participate in a role. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6417) sql std auth - new users in admin role config should get added

2014-03-05 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921294#comment-13921294
 ] 

Ashutosh Chauhan commented on HIVE-6417:


The problem is that both role creation and adding users to a role are governed 
by the same flag. We need to separate these two tasks so that they are governed 
by two different flags; a sketch of the idea follows below. Patch forthcoming.
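A minimal sketch of the separation (all names are hypothetical):
{code}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Role creation and membership sync run independently, so users added to
// the config after the admin role already exists still get granted.
public class AdminRoleBootstrapSketch {

  static class Role {
    final Set<String> members = new HashSet<String>();
  }

  private final Map<String, Role> roles = new HashMap<String, Role>();

  void ensureAdminRole(Set<String> usersInAdminRoleConfig) {
    // Task 1: create the role only if it is missing.
    Role admin = roles.get("admin");
    if (admin == null) {
      admin = new Role();
      roles.put("admin", admin);
    }
    // Task 2: always reconcile membership against the config, even when
    // the role pre-existed; this is the step that is skipped today.
    for (String user : usersInAdminRoleConfig) {
      admin.members.add(user);  // no-op when already a member
    }
  }
}
{code}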

 sql std auth - new users in admin role config should get added
 --

 Key: HIVE-6417
 URL: https://issues.apache.org/jira/browse/HIVE-6417
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair

 If the metastore is started with hive.users.in.admin.role=user1, then user1 is 
 added to the admin role in the metastore.
 If the value is changed to hive.users.in.admin.role=user2, then user2 should 
 get added to the role in the metastore. Right now, if the admin role exists, 
 new users don't get added.
 A work-around is for user1 to add user2 to the admin role using a grant role 
 statement.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-887) Allow SELECT col without a mapreduce job

2014-03-05 Thread Tim Goodman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921297#comment-13921297
 ] 

Tim Goodman commented on HIVE-887:
--

What is the syntax for this?  I'm using hive 0.10.0, and the default behavior 
still appears to be to trigger a map-reduce job whenever I specify column 
names, even when I use LIMIT 1.

 Allow SELECT col without a mapreduce job
 --

 Key: HIVE-887
 URL: https://issues.apache.org/jira/browse/HIVE-887
 Project: Hive
  Issue Type: New Feature
 Environment: All
Reporter: Eric Sun
Assignee: Ning Zhang
 Fix For: 0.10.0


 I often find myself needing to take a quick look at a particular column of a 
 Hive table.
 I usually do this by doing a 
 SELECT * from table LIMIT 20;
 from the CLI.  Doing this is pretty fast since it doesn't require a mapreduce 
 job.  However, it's tough to examine just 1 or 2 columns when the table is 
 very wide.
 So, I might do
 SELECT col from table LIMIT 20;
 but it's much slower since it requires a map-reduce.  It'd be really 
 convenient if a map-reduce wasn't necessary.
 Currently a good work around is to do
 hive -e select * from table | cut --key=n
 but it'd be more convenient if it were built in since it alleviates the need 
 for column counting.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Thoughts on new metastore APIs

2014-03-05 Thread Brock Noland
Hi,


There is a ton of great work going into the 0.13 release.
Specifically, we are adding a ton of APIs to the metastore:

https://github.com/apache/hive/blame/trunk/metastore/if/hive_metastore.thrift

Few of these new APIs follow the best practice of a single request
struct and a single response struct. Some follow this partially by
having a single response object but taking no arguments, while others
return void and take a single request object.  Still others, mostly
related to authorization, do not even partially follow this pattern.

The single request/response struct model is extremely important, as
changing the number of arguments is a backwards-incompatible change.
Therefore the only way to change such an API is to add *new* method
calls. This is why we have so many crazy APIs in the Hive metastore,
such as create_table/create_table_with_environment_context and 12
(yes, twelve) ways to get partitions.
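
As a concrete illustration (the method and struct names below are invented, 
not drawn from hive_metastore.thrift), compare the two shapes as they would 
appear in generated Java:
{code}
import java.util.List;

// Brittle shape: adding, removing, or reordering arguments later is a
// wire-incompatible change, forcing a parallel *_v2-style method.
interface BrittleMetastore {
  List<String> get_roles(String principalName, String principalType);
}

// Single request/response shape: new optional fields can be added to
// either struct without breaking existing clients, so the method
// signature never has to change.
class GetRolesRequest {
  String principalName;
  String principalType;
}

class GetRolesResponse {
  List<String> roleNames;
}

interface EvolvableMetastore {
  GetRolesResponse get_roles(GetRolesRequest request);
}
{code}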

I would like to suggest that we require all new APIs to follow the
single request/response struct model. That is, any new API that would
be committed *after* today.

I have heard the following arguments against this approach, which I
believe to be invalid:

*This API will never change (or never return a value, or never take
another value)*
We have all been writing code long enough to know that there are
unknown unknowns. By following the single request/response struct
model for *all* APIs we can future-proof ourselves. Why wouldn't we
want to buy insurance now, when it's cheap?

*The performance impact of wrapping an object is too much*
These calls are being made over the network, which is orders of
magnitude slower than creating a small, simple, and lightweight object
to wrap method arguments and response values.

Cheers,
Brock


[jira] [Updated] (HIVE-6325) Enable using multiple concurrent sessions in tez

2014-03-05 Thread Vikram Dixit K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Dixit K updated HIVE-6325:
-

Attachment: HIVE-6325-trunk.patch
HIVE-6325-branch-0.13.patch

Patches for respective branches.

 Enable using multiple concurrent sessions in tez
 

 Key: HIVE-6325
 URL: https://issues.apache.org/jira/browse/HIVE-6325
 Project: Hive
  Issue Type: Improvement
  Components: Tez
Affects Versions: 0.13.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Attachments: HIVE-6325-branch-0.13.patch, HIVE-6325-trunk.patch, 
 HIVE-6325.1.patch, HIVE-6325.10.patch, HIVE-6325.11.patch, HIVE-6325.2.patch, 
 HIVE-6325.3.patch, HIVE-6325.4.patch, HIVE-6325.5.patch, HIVE-6325.6.patch, 
 HIVE-6325.7.patch, HIVE-6325.8.patch, HIVE-6325.9.patch


 We would like to enable multiple concurrent sessions in Tez via HiveServer2. 
 This will enable users to make efficient use of the cluster when it has been 
 partitioned using YARN queues.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6538) yet another annoying exception in test logs

2014-03-05 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921309#comment-13921309
 ] 

Sergey Shelukhin commented on HIVE-6538:


(1) would still appear in the logs.

 yet another annoying exception in test logs
 ---

 Key: HIVE-6538
 URL: https://issues.apache.org/jira/browse/HIVE-6538
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Szehon Ho
Priority: Trivial
 Attachments: HIVE-6538.patch


 Whenever you look at failed q tests you have to go thru this useless 
 exception.
 {noformat}
 2014-03-03 11:22:54,872 ERROR metastore.RetryingHMSHandler 
 (RetryingHMSHandler.java:invoke(143)) - 
 MetaException(message:NoSuchObjectException(message:Function 
 default.qtest_get_java_boolean does not exist))
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newMetaException(HiveMetaStore.java:4575)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_function(HiveMetaStore.java:4702)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
   at $Proxy8.get_function(Unknown Source)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getFunction(HiveMetaStoreClient.java:1526)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
   at $Proxy9.getFunction(Unknown Source)
   at org.apache.hadoop.hive.ql.metadata.Hive.getFunction(Hive.java:2603)
   at 
 org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfoFromMetastore(FunctionRegistry.java:546)
   at 
 org.apache.hadoop.hive.ql.exec.FunctionRegistry.getQualifiedFunctionInfo(FunctionRegistry.java:578)
   at 
 org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfo(FunctionRegistry.java:599)
   at 
 org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfo(FunctionRegistry.java:606)
   at 
 org.apache.hadoop.hive.ql.parse.FunctionSemanticAnalyzer.analyzeDropFunction(FunctionSemanticAnalyzer.java:94)
   at 
 org.apache.hadoop.hive.ql.parse.FunctionSemanticAnalyzer.analyzeInternal(FunctionSemanticAnalyzer.java:60)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:445)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:345)
   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1078)
   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1121)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1014)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1004)
   at org.apache.hadoop.hive.ql.QTestUtil.runCmd(QTestUtil.java:655)
   at org.apache.hadoop.hive.ql.QTestUtil.createSources(QTestUtil.java:772)
   at 
 org.apache.hadoop.hive.cli.TestCliDriver.<clinit>(TestCliDriver.java:46)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.internal.runners.SuiteMethod.testFromSuiteMethod(SuiteMethod.java:34)
 at org.junit.internal.runners.SuiteMethod.<init>(SuiteMethod.java:23)
   at 
 org.junit.internal.builders.SuiteMethodBuilder.runnerForClass(SuiteMethodBuilder.java:14)
   at 
 org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:57)
   at 
 org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:29)
   at 
 org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:57)
   at 
 org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:24)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
   at 
 

[jira] [Commented] (HIVE-6538) yet another annoying exception in test logs

2014-03-05 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921314#comment-13921314
 ] 

Jason Dere commented on HIVE-6538:
--

yeah, just do (2)

 yet another annoying exception in test logs
 ---

 Key: HIVE-6538
 URL: https://issues.apache.org/jira/browse/HIVE-6538
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Szehon Ho
Priority: Trivial
 Attachments: HIVE-6538.patch


 Whenever you look at failed q tests you have to go thru this useless 
 exception.
 {noformat}
 2014-03-03 11:22:54,872 ERROR metastore.RetryingHMSHandler 
 (RetryingHMSHandler.java:invoke(143)) - 
 MetaException(message:NoSuchObjectException(message:Function 
 default.qtest_get_java_boolean does not exist))
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newMetaException(HiveMetaStore.java:4575)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_function(HiveMetaStore.java:4702)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
   at $Proxy8.get_function(Unknown Source)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getFunction(HiveMetaStoreClient.java:1526)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
   at $Proxy9.getFunction(Unknown Source)
   at org.apache.hadoop.hive.ql.metadata.Hive.getFunction(Hive.java:2603)
   at 
 org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfoFromMetastore(FunctionRegistry.java:546)
   at 
 org.apache.hadoop.hive.ql.exec.FunctionRegistry.getQualifiedFunctionInfo(FunctionRegistry.java:578)
   at 
 org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfo(FunctionRegistry.java:599)
   at 
 org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfo(FunctionRegistry.java:606)
   at 
 org.apache.hadoop.hive.ql.parse.FunctionSemanticAnalyzer.analyzeDropFunction(FunctionSemanticAnalyzer.java:94)
   at 
 org.apache.hadoop.hive.ql.parse.FunctionSemanticAnalyzer.analyzeInternal(FunctionSemanticAnalyzer.java:60)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:445)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:345)
   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1078)
   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1121)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1014)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1004)
   at org.apache.hadoop.hive.ql.QTestUtil.runCmd(QTestUtil.java:655)
   at org.apache.hadoop.hive.ql.QTestUtil.createSources(QTestUtil.java:772)
   at 
 org.apache.hadoop.hive.cli.TestCliDriver.<clinit>(TestCliDriver.java:46)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.internal.runners.SuiteMethod.testFromSuiteMethod(SuiteMethod.java:34)
 at org.junit.internal.runners.SuiteMethod.<init>(SuiteMethod.java:23)
   at 
 org.junit.internal.builders.SuiteMethodBuilder.runnerForClass(SuiteMethodBuilder.java:14)
   at 
 org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:57)
   at 
 org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:29)
   at 
 org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:57)
   at 
 org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:24)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
   at 
 

[jira] [Commented] (HIVE-887) Allow SELECT col without a mapreduce job

2014-03-05 Thread Tim Goodman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921316#comment-13921316
 ] 

Tim Goodman commented on HIVE-887:
--

Hmm, apparently I have to do:
SET hive.fetch.task.conversion=more;
(default was hive.fetch.task.conversion=minimal)
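
For anyone hitting the same thing through HiveServer2, the setting can also be
applied per session over JDBC. A minimal sketch, assuming a local HiveServer2
with the Hive JDBC driver on the classpath (the URL, credentials, and table
name are placeholders, not from this thread):

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class FetchTaskDemo {
  public static void main(String[] args) throws Exception {
    // Placeholder URL/credentials; adjust for your HiveServer2 instance.
    Connection conn = DriverManager.getConnection(
        "jdbc:hive2://localhost:10000/default", "hive", "");
    try {
      Statement stmt = conn.createStatement();
      // Without this, a single-column select still launches a MapReduce job
      // (the default at the time was hive.fetch.task.conversion=minimal).
      stmt.execute("SET hive.fetch.task.conversion=more");
      ResultSet rs = stmt.executeQuery("SELECT key FROM src LIMIT 20");
      while (rs.next()) {
        System.out.println(rs.getString(1));
      }
    } finally {
      conn.close();
    }
  }
}
{code}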

 Allow SELECT col without a mapreduce job
 --

 Key: HIVE-887
 URL: https://issues.apache.org/jira/browse/HIVE-887
 Project: Hive
  Issue Type: New Feature
 Environment: All
Reporter: Eric Sun
Assignee: Ning Zhang
 Fix For: 0.10.0


 I often find myself needing to take a quick look at a particular column of a 
 Hive table.
 I usually do this by doing a 
 SELECT * from table LIMIT 20;
 from the CLI.  Doing this is pretty fast since it doesn't require a mapreduce 
 job.  However, it's tough to examine just 1 or 2 columns when the table is 
 very wide.
 So, I might do
 SELECT col from table LIMIT 20;
 but it's much slower since it requires a map-reduce.  It'd be really 
 convenient if a map-reduce wasn't necessary.
 Currently a good work around is to do
 hive -e "select * from table" | cut --key=n
 but it'd be more convenient if it were built in since it alleviates the need 
 for column counting.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6548) Missing owner name and type fields in schema script for DBS table

2014-03-05 Thread Harish Butani (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921320#comment-13921320
 ] 

Harish Butani commented on HIVE-6548:
-

+1

 Missing owner name and type fields in schema script for DBS table 
 --

 Key: HIVE-6548
 URL: https://issues.apache.org/jira/browse/HIVE-6548
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.13.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Fix For: 0.14.0

 Attachments: HIVE-6548.patch


 HIVE-6386 introduced new columns in DBS table, but those are missing from 
 schema scripts.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6538) yet another annoying exception in test logs

2014-03-05 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921326#comment-13921326
 ] 

Szehon Ho commented on HIVE-6538:
-

Yea makes sense, let's do (2) to keep it consistent with getting non-existent 
table/partition and keep the logs clean.

 yet another annoying exception in test logs
 ---

 Key: HIVE-6538
 URL: https://issues.apache.org/jira/browse/HIVE-6538
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Szehon Ho
Priority: Trivial
 Attachments: HIVE-6538.patch


 Whenever you look at failed q tests you have to go thru this useless 
 exception.
 {noformat}
 2014-03-03 11:22:54,872 ERROR metastore.RetryingHMSHandler (RetryingHMSHandler.java:invoke(143)) - MetaException(message:NoSuchObjectException(message:Function default.qtest_get_java_boolean does not exist))
   at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newMetaException(HiveMetaStore.java:4575)
   at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_function(HiveMetaStore.java:4702)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
   at $Proxy8.get_function(Unknown Source)
   at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getFunction(HiveMetaStoreClient.java:1526)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
   at $Proxy9.getFunction(Unknown Source)
   at org.apache.hadoop.hive.ql.metadata.Hive.getFunction(Hive.java:2603)
   at org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfoFromMetastore(FunctionRegistry.java:546)
   at org.apache.hadoop.hive.ql.exec.FunctionRegistry.getQualifiedFunctionInfo(FunctionRegistry.java:578)
   at org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfo(FunctionRegistry.java:599)
   at org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfo(FunctionRegistry.java:606)
   at org.apache.hadoop.hive.ql.parse.FunctionSemanticAnalyzer.analyzeDropFunction(FunctionSemanticAnalyzer.java:94)
   at org.apache.hadoop.hive.ql.parse.FunctionSemanticAnalyzer.analyzeInternal(FunctionSemanticAnalyzer.java:60)
   at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:445)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:345)
   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1078)
   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1121)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1014)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1004)
   at org.apache.hadoop.hive.ql.QTestUtil.runCmd(QTestUtil.java:655)
   at org.apache.hadoop.hive.ql.QTestUtil.createSources(QTestUtil.java:772)
   at org.apache.hadoop.hive.cli.TestCliDriver.<clinit>(TestCliDriver.java:46)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.junit.internal.runners.SuiteMethod.testFromSuiteMethod(SuiteMethod.java:34)
   at org.junit.internal.runners.SuiteMethod.<init>(SuiteMethod.java:23)
   at org.junit.internal.builders.SuiteMethodBuilder.runnerForClass(SuiteMethodBuilder.java:14)
   at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:57)
   at org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:29)
   at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:57)
   at org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:24)
   at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262)
   at 

[jira] [Commented] (HIVE-6325) Enable using multiple concurrent sessions in tez

2014-03-05 Thread Vikram Dixit K (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921327#comment-13921327
 ] 

Vikram Dixit K commented on HIVE-6325:
--

[~rhbutani] can I commit it to the 0.13 branch?

Thanks
Vikram.

 Enable using multiple concurrent sessions in tez
 

 Key: HIVE-6325
 URL: https://issues.apache.org/jira/browse/HIVE-6325
 Project: Hive
  Issue Type: Improvement
  Components: Tez
Affects Versions: 0.13.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Attachments: HIVE-6325-branch-0.13.patch, HIVE-6325-trunk.patch, 
 HIVE-6325.1.patch, HIVE-6325.10.patch, HIVE-6325.11.patch, HIVE-6325.2.patch, 
 HIVE-6325.3.patch, HIVE-6325.4.patch, HIVE-6325.5.patch, HIVE-6325.6.patch, 
 HIVE-6325.7.patch, HIVE-6325.8.patch, HIVE-6325.9.patch


 We would like to enable multiple concurrent sessions in tez via hive server 
 2. This will enable users to make efficient use of the cluster when it has 
 been partitioned using yarn queues.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-4764) Support Kerberos HTTP authentication for HiveServer2 running in http mode

2014-03-05 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-4764:
---

Status: Patch Available  (was: Open)

 Support Kerberos HTTP authentication for HiveServer2 running in http mode
 -

 Key: HIVE-4764
 URL: https://issues.apache.org/jira/browse/HIVE-4764
 Project: Hive
  Issue Type: Sub-task
  Components: HiveServer2
Affects Versions: 0.13.0
Reporter: Thejas M Nair
Assignee: Vaibhav Gumashta
 Fix For: 0.13.0

 Attachments: HIVE-4764.1.patch, HIVE-4764.2.patch, HIVE-4764.3.patch


 Support Kerberos authentication for HiveServer2 running in http mode.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-4764) Support Kerberos HTTP authentication for HiveServer2 running in http mode

2014-03-05 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-4764:
---

Status: Open  (was: Patch Available)

 Support Kerberos HTTP authentication for HiveServer2 running in http mode
 -

 Key: HIVE-4764
 URL: https://issues.apache.org/jira/browse/HIVE-4764
 Project: Hive
  Issue Type: Sub-task
  Components: HiveServer2
Affects Versions: 0.13.0
Reporter: Thejas M Nair
Assignee: Vaibhav Gumashta
 Fix For: 0.13.0

 Attachments: HIVE-4764.1.patch, HIVE-4764.2.patch, HIVE-4764.3.patch


 Support Kerberos authentication for HiveServer2 running in http mode.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6548) Missing owner name and type fields in schema script for DBS table

2014-03-05 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6548:
---

Fix Version/s: (was: 0.14.0)
   0.13.0

 Missing owner name and type fields in schema script for DBS table 
 --

 Key: HIVE-6548
 URL: https://issues.apache.org/jira/browse/HIVE-6548
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.13.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Fix For: 0.13.0

 Attachments: HIVE-6548.patch


 HIVE-6386 introduced new columns in DBS table, but those are missing from 
 schema scripts.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6548) Missing owner name and type fields in schema script for DBS table

2014-03-05 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921331#comment-13921331
 ] 

Ashutosh Chauhan commented on HIVE-6548:


Thanks, Harish! I committed to 0.13 branch.

 Missing owner name and type fields in schema script for DBS table 
 --

 Key: HIVE-6548
 URL: https://issues.apache.org/jira/browse/HIVE-6548
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.13.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Fix For: 0.13.0

 Attachments: HIVE-6548.patch


 HIVE-6386 introduced new columns in DBS table, but those are missing from 
 schema scripts.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6325) Enable using multiple concurrent sessions in tez

2014-03-05 Thread Harish Butani (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921333#comment-13921333
 ] 

Harish Butani commented on HIVE-6325:
-

+1

 Enable using multiple concurrent sessions in tez
 

 Key: HIVE-6325
 URL: https://issues.apache.org/jira/browse/HIVE-6325
 Project: Hive
  Issue Type: Improvement
  Components: Tez
Affects Versions: 0.13.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Attachments: HIVE-6325-branch-0.13.patch, HIVE-6325-trunk.patch, 
 HIVE-6325.1.patch, HIVE-6325.10.patch, HIVE-6325.11.patch, HIVE-6325.2.patch, 
 HIVE-6325.3.patch, HIVE-6325.4.patch, HIVE-6325.5.patch, HIVE-6325.6.patch, 
 HIVE-6325.7.patch, HIVE-6325.8.patch, HIVE-6325.9.patch


 We would like to enable multiple concurrent sessions in tez via hive server 
 2. This will enable users to make efficient use of the cluster when it has 
 been partitioned using yarn queues.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Review Request 18793: Bug fix.

2014-03-05 Thread Ashutosh Chauhan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18793/
---

Review request for hive and Thejas Nair.


Bugs: HIVE-6417
https://issues.apache.org/jira/browse/HIVE-6417


Repository: hive-git


Description
---

Bug fix.


Diffs
-

  
itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestAdminUser.java
 PRE-CREATION 
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java 
ba51b96 

Diff: https://reviews.apache.org/r/18793/diff/


Testing
---

Added new test.


Thanks,

Ashutosh Chauhan



[jira] [Updated] (HIVE-6417) sql std auth - new users in admin role config should get added

2014-03-05 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6417:
---

Attachment: HIVE-6417.patch

 sql std auth - new users in admin role config should get added
 --

 Key: HIVE-6417
 URL: https://issues.apache.org/jira/browse/HIVE-6417
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
 Attachments: HIVE-6417.patch


 If the metastore is started with hive.users.in.admin.role=user1, then user1 is 
 added to the admin role in the metastore.
 If the value is changed to hive.users.in.admin.role=user2, then user2 should 
 get added to the role in the metastore. Right now, if the admin role already 
 exists, new users don't get added.
 A work-around is for user1 to add user2 to the admin role using a grant role 
 statement.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
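
To illustrate the work-around mentioned in the description above, a sketch of
user1 granting the admin role to user2 over JDBC. The connection details and
user names are placeholders, and the GRANT form shown assumes SQL standard
authorization; the exact syntax may differ by version:

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class AdminRoleWorkaround {
  public static void main(String[] args) throws Exception {
    // Placeholder URL; user1 is the user already listed in hive.users.in.admin.role.
    Connection conn = DriverManager.getConnection(
        "jdbc:hive2://localhost:10000/default", "user1", "");
    try {
      Statement stmt = conn.createStatement();
      stmt.execute("SET ROLE admin");            // activate the admin role for this session
      stmt.execute("GRANT admin TO USER user2"); // add user2 to the admin role
    } finally {
      conn.close();
    }
  }
}
{code}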


[jira] [Updated] (HIVE-6417) sql std auth - new users in admin role config should get added

2014-03-05 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6417:
---

Assignee: Ashutosh Chauhan
  Status: Patch Available  (was: Open)

[~thejas] Can you take a look?

 sql std auth - new users in admin role config should get added
 --

 Key: HIVE-6417
 URL: https://issues.apache.org/jira/browse/HIVE-6417
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Ashutosh Chauhan
 Attachments: HIVE-6417.patch


 If the metastore is started with hive.users.in.admin.role=user1, then user1 is 
 added to the admin role in the metastore.
 If the value is changed to hive.users.in.admin.role=user2, then user2 should 
 get added to the role in the metastore. Right now, if the admin role already 
 exists, new users don't get added.
 A work-around is for user1 to add user2 to the admin role using a grant role 
 statement.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6417) sql std auth - new users in admin role config should get added

2014-03-05 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921341#comment-13921341
 ] 

Thejas M Nair commented on HIVE-6417:
-

Sure, can you add an RB link?

 sql std auth - new users in admin role config should get added
 --

 Key: HIVE-6417
 URL: https://issues.apache.org/jira/browse/HIVE-6417
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Ashutosh Chauhan
 Attachments: HIVE-6417.patch


 If the metastore is started with hive.users.in.admin.role=user1, then user1 is 
 added to the admin role in the metastore.
 If the value is changed to hive.users.in.admin.role=user2, then user2 should 
 get added to the role in the metastore. Right now, if the admin role already 
 exists, new users don't get added.
 A work-around is for user1 to add user2 to the admin role using a grant role 
 statement.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-5888) group by after join operation product no result when hive.optimize.skewjoin = true

2014-03-05 Thread Jian Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921355#comment-13921355
 ] 

Jian Fang commented on HIVE-5888:
-

Cherry picked HIVE-6041 patch from branch-0.13 to hive 0.11, but got the 
following errors. Are there any other patches I need to pick to make it work?

Total MapReduce jobs = 9
java.io.FileNotFoundException: java.io.FileNotFoundException: File does not exist: /mnt/var/lib/hive_0110/tmp/scratch/hive_2014-03-05_20-12-41_114_4395627082352200171/-mr-10002
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
at org.apache.hadoop.hdfs.DFSClient.getContentSummary(DFSClient.java:1315)
at org.apache.hadoop.hdfs.DistributedFileSystem.getContentSummary(DistributedFileSystem.java:240)
at org.apache.hadoop.hive.ql.plan.ConditionalResolverCommonJoin.resolveMapJoinTask(ConditionalResolverCommonJoin.java:185)
at org.apache.hadoop.hive.ql.plan.ConditionalResolverCommonJoin.getTasks(ConditionalResolverCommonJoin.java:117)
at org.apache.hadoop.hive.ql.exec.ConditionalTask.execute(ConditionalTask.java:81)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:144)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1355)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1139)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:945)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:310)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:231)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:466)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:401)
at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:499)
at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:514)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:773)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:674)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:187)
Caused by: org.apache.hadoop.ipc.RemoteException: java.io.FileNotFoundException: File does not exist: /mnt/var/lib/hive_0110/tmp/scratch/hive_2014-03-05_20-12-41_114_4395627082352200171/-mr-10002
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.getContentSummary(FSDirectory.java:1158)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:2089)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getContentSummary(NameNode.java:948)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:573)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1140)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)

at org.apache.hadoop.ipc.Client.call(Client.java:1067)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at com.sun.proxy.$Proxy6.getContentSummary(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 

[jira] [Commented] (HIVE-6332) HCatConstants Documentation needed

2014-03-05 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921373#comment-13921373
 ] 

Sushanth Sowmyan commented on HIVE-6332:


Before I create a wiki page for this, I want to have the content 
checked/reviewed. [~leftylev], [~ekoifman], could you please go through the 
following and suggest edits/changes? Thanks!

==




HCatalog job properties:


Storage directives:
---


hcat.pig.storer.external.location : An override to specify where HCatStorer 
will write to, defined from pig jobs, either directly by user, or by using 
org.apache.hive.hcatalog.pig.HCatStorerWrapper. HCat will write to this 
specified directory, rather than writing to the table/partition directory 
specified/calculable by the metadata. This will be used in lieu of the table 
directory if this is a table-level write (unpartitioned table write) or in lieu 
of the partition directory if this is a partition-level write. This parameter 
is used only for non-dynamic-partitioning jobs which have multiple write 
destinations.

hcat.dynamic.partitioning.custom.pattern : For dynamic partitioning jobs, 
simply specifying a custom directory is not good enough, since it writes to 
multiple destinations, and thus, instead of a directory specification, it 
requires a pattern specification. That's where this parameter comes in. For 
example, if one had a table that was partitioned by keys country and state, 
with a root directory location of /apps/hive/warehouse/geo/ , then a dynamic 
partition write into it that writes partitions (country=US,state=CA) and 
(country=IN,state=KA) would create two directories: 
/apps/hive/warehouse/geo/country=US/state=CA/ and 
/apps/hive/warehouse/geo/country=IN/state=KA/ . If we wanted a different 
patterned location, and specified 
hcat.dynamic.partitioning.custom.pattern=/ext/geo/${country}-${state}, it 
would create the following two partition dirs: /ext/geo/US-CA and 
/ext/geo/IN-KA . Thus, it allows us to specify a custom dir location pattern 
for all the writes, and will interpolate each variable it sees when attempting 
to create a destination location for the partitions.

Cache behaviour directives:
---

HCatalog maintains a cache of HiveClients to talk to the metastore, managing a 
cache of 1 metastore client per thread, defaulting to an expiry of 120 seconds. 
For people that wish to modify the behaviour of this cache, a few parameters 
are provided:


hcatalog.hive.client.cache.expiry.time : Allows users to override the expiry 
time specified - this is an int, and specifies number of seconds. Default is 
120.
hcatalog.hive.client.cache.disabled : Default is false, allows people to 
disable the cache altogether if they wish to. This is useful in highly 
multithreaded usecases.


Input Split Generation Behaviour:
-

hcat.desired.partition.num.splits : This is a hint/guidance that can be 
provided to HCatalog to pass on to underlying InputFormats, to produce a 
desired number of splits per partition. This is useful when we have a few 
large files and we want to increase parallelism by increasing the number of 
splits generated. It is not yet so useful in cases where we would want to 
reduce the number of splits for a large number of files. It is not at all 
useful, also, in cases where there are a large number of partitions that this 
job will read. Also note that this is merely an optimization hint, and it is 
not guaranteed that the underlying layer will be capable of using this 
optimization. Also, mapreduce parameters mapred.min.split.size and 
mapred.max.split.size can be used in conjunction with this parameter to 
tweak/optimize jobs.


Data Promotion Behaviour:
-

In some cases where a user of HCat (such as some older versions of pig) does 
not support all the datatypes supported by hive, there are a few config 
parameters provided to handle data promotions/conversions to allow them to read 
data through HCatalog. On the write side, it is expected that the user pass in 
valid HCatRecords with data correctly.


hcat.data.convert.boolean.to.integer : promotes boolean to int on read from 
HCatalog, defaults to false.
hcat.data.tiny.small.int.promotion : promotes tinyint/smallint to int on read 
from HCatalog, defaults to false.

HCatRecordReader Error Tolerance Behaviour:
---

While reading, it is understandable that data might contain errors, but we may 
not want to completely abort a task due to a couple of errors. These parameters 
configure how many errors we can accept before we fail the task.

hcat.input.bad.record.threshold : A float parameter, defaults to 0.0001f, which 
means we can deal with 1 error every 10,000 rows, and still not error out. Any 
greater, and we will.
hcat.input.bad.record.min : An int 

[jira] [Commented] (HIVE-6332) HCatConstants Documentation needed

2014-03-05 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921374#comment-13921374
 ] 

Sushanth Sowmyan commented on HIVE-6332:


Ugh, sorry about the formatting above, adding noformat:

{noformat}


HCatalog job properties:


Storage directives:
---


hcat.pig.storer.external.location : An override to specify where HCatStorer 
will write to, defined from pig jobs, either directly by user, or by using 
org.apache.hive.hcatalog.pig.HCatStorerWrapper. HCat will write to this 
specified directory, rather than writing to the table/partition directory 
specified/calculable by the metadata. This will be used in lieu of the table 
directory if this is a table-level write (unpartitioned table write) or in lieu 
of the partition directory if this is a partition-level write. This parameter 
is used only for non-dynamic-partitioning jobs which have multiple write 
destinations.

hcat.dynamic.partitioning.custom.pattern : For dynamic partitioning jobs, 
simply specifying a custom directory is not good enough, since it writes to 
multiple destinations, and thus, instead of a directory specification, it 
requires a pattern specification. That's where this parameter comes in. For 
example, if one had a table that was partitioned by keys country and state, 
with a root directory location of /apps/hive/warehouse/geo/ , then a dynamic 
partition write into it that writes partitions (country=US,state=CA) and 
(country=IN,state=KA) would create two directories: 
/apps/hive/warehouse/geo/country=US/state=CA/ and 
/apps/hive/warehouse/geo/country=IN/state=KA/ . If we wanted a different 
patterned location, and specified 
hcat.dynamic.partitioning.custom.pattern=/ext/geo/${country}-${state}, it 
would create the following two partition dirs: /ext/geo/US-CA and 
/ext/geo/IN-KA . Thus, it allows us to specify a custom dir location pattern 
for all the writes, and will interpolate each variable it sees when attempting 
to create a destination location for the partitions.

Cache behaviour directives:
---

HCatalog maintains a cache of HiveClients to talk to the metastore, managing a 
cache of 1 metastore client per thread, defaulting to an expiry of 120 seconds. 
For people that wish to modify the behaviour of this cache, a few parameters 
are provided:


hcatalog.hive.client.cache.expiry.time : Allows users to override the expiry 
time specified - this is an int, and specifies number of seconds. Default is 
120.
hcatalog.hive.client.cache.disabled : Default is false, allows people to 
disable the cache altogether if they wish to. This is useful in highly 
multithreaded usecases.


Input Split Generation Behaviour:
-

hcat.desired.partition.num.splits : This is a hint/guidance that can be 
provided to HCatalog to pass on to underlying InputFormats, to produce a 
desired number of splits per partition. This is useful when we have a few 
large files and we want to increase parallelism by increasing the number of 
splits generated. It is not yet so useful in cases where we would want to 
reduce the number of splits for a large number of files. It is not at all 
useful, also, in cases where there are a large number of partitions that this 
job will read. Also note that this is merely an optimization hint, and it is 
not guaranteed that the underlying layer will be capable of using this 
optimization. Also, mapreduce parameters mapred.min.split.size and 
mapred.max.split.size can be used in conjunction with this parameter to 
tweak/optimize jobs.


Data Promotion Behaviour:
-

In some cases where a user of HCat (such as some older versions of pig) does 
not support all the datatypes supported by hive, there are a few config 
parameters provided to handle data promotions/conversions to allow them to read 
data through HCatalog. On the write side, it is expected that the user pass in 
valid HCatRecords with data correctly.


hcat.data.convert.boolean.to.integer : promotes boolean to int on read from 
HCatalog, defaults to false.
hcat.data.tiny.small.int.promotion : promotes tinyint/smallint to int on read 
from HCatalog, defaults to false.

HCatRecordReader Error Tolerance Behaviour:
---

While reading, it is understandable that data might contain errors, but we may 
not want to completely abort a task due to a couple of errors. These parameters 
configure how many errors we can accept before we fail the task.

hcat.input.bad.record.threshold : A float parameter, defaults to 0.0001f, which 
means we can deal with 1 error every 10,000 rows, and still not error out. Any 
greater, and we will.
hcat.input.bad.record.min : An int parameter, defaults to 2, which is the 
minimum number of bad records we encounter before applying 
hcat.input.bad.record.threshold 
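
To make the knobs above concrete, a rough sketch of setting a few of them on a
plain MapReduce job driver. The values are arbitrary examples, and the
HCatInputFormat/HCatOutputFormat wiring is elided:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class HCatPropsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Keep cached metastore clients for 300 seconds instead of the 120s default.
    conf.set("hcatalog.hive.client.cache.expiry.time", "300");
    // Hint to underlying InputFormats: aim for ~4 splits per partition.
    conf.set("hcat.desired.partition.num.splits", "4");
    // Tolerate up to 1 bad record per 1,000 rows before failing the task,
    // but only once at least 2 bad records have been seen.
    conf.set("hcat.input.bad.record.threshold", "0.001");
    conf.set("hcat.input.bad.record.min", "2");
    Job job = Job.getInstance(conf, "hcat-props-example");
    // ... set up HCatInputFormat/HCatOutputFormat, mapper, etc., and submit.
  }
}
{code}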

[jira] [Commented] (HIVE-6325) Enable using multiple concurrent sessions in tez

2014-03-05 Thread Vikram Dixit K (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921375#comment-13921375
 ] 

Vikram Dixit K commented on HIVE-6325:
--

Committed to trunk and branch-0.13. Thanks Gunther and Harish.

 Enable using multiple concurrent sessions in tez
 

 Key: HIVE-6325
 URL: https://issues.apache.org/jira/browse/HIVE-6325
 Project: Hive
  Issue Type: Improvement
  Components: Tez
Affects Versions: 0.13.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Fix For: 0.13.0, 0.14.0

 Attachments: HIVE-6325-branch-0.13.patch, HIVE-6325-trunk.patch, 
 HIVE-6325.1.patch, HIVE-6325.10.patch, HIVE-6325.11.patch, HIVE-6325.2.patch, 
 HIVE-6325.3.patch, HIVE-6325.4.patch, HIVE-6325.5.patch, HIVE-6325.6.patch, 
 HIVE-6325.7.patch, HIVE-6325.8.patch, HIVE-6325.9.patch


 We would like to enable multiple concurrent sessions in tez via hive server 
 2. This will enable users to make efficient use of the cluster when it has 
 been partitioned using yarn queues.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6455) Scalable dynamic partitioning and bucketing optimization

2014-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921377#comment-13921377
 ] 

Hive QA commented on HIVE-6455:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12632762/HIVE-6455.11.patch

{color:red}ERROR:{color} -1 due to 18 failed/errored test(s), 5356 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dynpart_sort_optimization
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_merge3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_merge4
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_auto_sortmerge_join_16
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_infer_bucket_sort_dyn_part
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_infer_bucket_sort_map_operators
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_dyn_part_max_per_node
org.apache.hadoop.hive.ql.parse.TestParse.testParse_input1
org.apache.hadoop.hive.ql.parse.TestParse.testParse_input2
org.apache.hadoop.hive.ql.parse.TestParse.testParse_input3
org.apache.hadoop.hive.ql.parse.TestParse.testParse_input6
org.apache.hadoop.hive.ql.parse.TestParse.testParse_input7
org.apache.hadoop.hive.ql.parse.TestParse.testParse_input9
org.apache.hadoop.hive.ql.parse.TestParse.testParse_sample2
org.apache.hadoop.hive.ql.parse.TestParse.testParse_union
org.apache.hive.beeline.TestSchemaTool.testSchemaInit
org.apache.hive.beeline.TestSchemaTool.testSchemaUpgrade
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1632/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1632/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 18 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12632762

 Scalable dynamic partitioning and bucketing optimization
 

 Key: HIVE-6455
 URL: https://issues.apache.org/jira/browse/HIVE-6455
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor
Affects Versions: 0.13.0
Reporter: Prasanth J
Assignee: Prasanth J
  Labels: optimization
 Attachments: HIVE-6455.1.patch, HIVE-6455.1.patch, 
 HIVE-6455.10.patch, HIVE-6455.10.patch, HIVE-6455.11.patch, 
 HIVE-6455.2.patch, HIVE-6455.3.patch, HIVE-6455.4.patch, HIVE-6455.4.patch, 
 HIVE-6455.5.patch, HIVE-6455.6.patch, HIVE-6455.7.patch, HIVE-6455.8.patch, 
 HIVE-6455.9.patch, HIVE-6455.9.patch


 The current implementation of dynamic partitioning works by keeping at least 
 one record writer open per dynamic partition directory. In case of bucketing 
 there can be multispray file writers, which further adds to the number of 
 open record writers. The record writers of column-oriented file formats (like 
 ORC, RCFile etc.) keep in-memory buffers (value buffers or compression 
 buffers) open all the time to buffer up the rows and compress them before 
 flushing to disk. Since these buffers are maintained on a per-column basis, 
 the amount of constant memory required at runtime increases with the number 
 of partitions and the number of columns per partition. This often leads to an 
 OutOfMemory (OOM) exception in mappers or reducers, depending on the number 
 of open record writers. Users often tune the JVM heap size (runtime memory) 
 to get over such OOM issues. 
 With this optimization, the dynamic partition columns and bucketing columns 
 (in case of bucketed tables) are sorted before being fed to the reducers. 
 Since the partitioning and bucketing columns are sorted, each reducer can 
 keep only one record writer open at any time, thereby reducing the memory 
 pressure on the reducers. This optimization scales well as the number of 
 partitions and the number of columns per partition increases, at the cost of 
 sorting the columns.
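
A rough sketch of the reducer-side idea (the types and writer factory here are
invented for illustration; Hive's actual operators differ):

{code}
import java.io.IOException;
import java.util.List;

// With rows arriving sorted by partition key, a reducer needs only one
// open writer at a time instead of one per partition.
public class SortedPartitionWriterSketch {
  interface Writer {
    void write(String row) throws IOException;
    void close() throws IOException;
  }

  // Hypothetical factory; in reality this would open a record writer
  // for the partition's output directory.
  static Writer openWriter(String partitionKey) {
    return new Writer() {
      public void write(String row) { /* buffer, compress, flush */ }
      public void close() { /* release the per-column buffers */ }
    };
  }

  static void reduce(List<String[]> sortedRows) throws IOException {
    String currentKey = null;
    Writer writer = null;
    for (String[] row : sortedRows) { // row[0] = partition key, row[1] = payload
      if (!row[0].equals(currentKey)) {
        if (writer != null) {
          writer.close(); // previous partition is complete; free its buffers
        }
        writer = openWriter(row[0]);
        currentKey = row[0];
      }
      writer.write(row[1]);
    }
    if (writer != null) {
      writer.close();
    }
  }
}
{code}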



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6325) Enable using multiple concurrent sessions in tez

2014-03-05 Thread Vikram Dixit K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Dixit K updated HIVE-6325:
-

   Resolution: Fixed
Fix Version/s: 0.14.0
   0.13.0
   Status: Resolved  (was: Patch Available)

 Enable using multiple concurrent sessions in tez
 

 Key: HIVE-6325
 URL: https://issues.apache.org/jira/browse/HIVE-6325
 Project: Hive
  Issue Type: Improvement
  Components: Tez
Affects Versions: 0.13.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Fix For: 0.13.0, 0.14.0

 Attachments: HIVE-6325-branch-0.13.patch, HIVE-6325-trunk.patch, 
 HIVE-6325.1.patch, HIVE-6325.10.patch, HIVE-6325.11.patch, HIVE-6325.2.patch, 
 HIVE-6325.3.patch, HIVE-6325.4.patch, HIVE-6325.5.patch, HIVE-6325.6.patch, 
 HIVE-6325.7.patch, HIVE-6325.8.patch, HIVE-6325.9.patch


 We would like to enable multiple concurrent sessions in tez via hive server 
 2. This will enable users to make efficient use of the cluster when it has 
 been partitioned using yarn queues.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6507) OrcFile table property names are specified as strings

2014-03-05 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921381#comment-13921381
 ] 

Sushanth Sowmyan commented on HIVE-6507:


Noting that 2 of the test failures reported are due to the upgrade to 0.14, and 
one is disconnected from this issue. Also, the new patch does not change 
behaviour from the previous patch except for adding back in the string 
constants with deprecation notices, and thus, should not change test behaviour.

 OrcFile table property names are specified as strings
 -

 Key: HIVE-6507
 URL: https://issues.apache.org/jira/browse/HIVE-6507
 Project: Hive
  Issue Type: Bug
  Components: HCatalog, Serializers/Deserializers
Affects Versions: 0.13.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-6507.2.patch, HIVE-6507.patch


 In HIVE-5504, we had to do some special casing in HCatalog to add a 
 particular set of orc table properties from table properties to job 
 properties.
 In doing so, it became obvious that this is a bit cumbersome, and that 
 ideally, the list of all orc file table properties should really be an enum, 
 rather than individual loosely tied constant strings. If we were to clean 
 this up, we could clean up other code that references this to reference the 
 entire enum, and avoid future errors when new table properties are introduced 
 but other referencing code is not updated.
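
As a sketch of the shape such a cleanup might take (the enum name, members,
and property strings below are illustrative, not the actual patch):

{code}
// Illustrative only: one enum instead of loosely tied String constants.
public enum OrcTableProperties {
  COMPRESSION("orc.compress"),
  STRIPE_SIZE("orc.stripe.size"),
  ROW_INDEX_STRIDE("orc.row.index.stride");

  private final String propName;

  OrcTableProperties(String propName) {
    this.propName = propName;
  }

  public String getPropName() {
    return propName;
  }
}

// Callers can then copy every known property generically instead of
// special-casing each string:
//   for (OrcTableProperties p : OrcTableProperties.values()) { ... }
{code}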



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6434) Restrict function create/drop to admin roles

2014-03-05 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-6434:
-

Attachment: HIVE-6434.5.patch

Rebased patch with trunk.

 Restrict function create/drop to admin roles
 

 Key: HIVE-6434
 URL: https://issues.apache.org/jira/browse/HIVE-6434
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization, UDF
Reporter: Jason Dere
Assignee: Jason Dere
 Attachments: HIVE-6434.1.patch, HIVE-6434.2.patch, HIVE-6434.3.patch, 
 HIVE-6434.4.patch, HIVE-6434.5.patch


 Restrict function create/drop to admin roles, if sql std auth is enabled. 
 This would include temp/permanent functions, as well as macros.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6555) TestSchemaTool is failing on trunk after branching

2014-03-05 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921422#comment-13921422
 ] 

Ashutosh Chauhan commented on HIVE-6555:


Patch forthcoming.

 TestSchemaTool is failing on trunk after branching
 --

 Key: HIVE-6555
 URL: https://issues.apache.org/jira/browse/HIVE-6555
 Project: Hive
  Issue Type: Bug
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan

 This is because the version was bumped to 0.14 in the pom file and there are 
 no metastore scripts for 0.14 yet.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HIVE-6555) TestSchemaTool is failing on trunk after branching

2014-03-05 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan reassigned HIVE-6555:
--

Assignee: Ashutosh Chauhan

 TestSchemaTool is failing on trunk after branching
 --

 Key: HIVE-6555
 URL: https://issues.apache.org/jira/browse/HIVE-6555
 Project: Hive
  Issue Type: Bug
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan

 This is because the version was bumped to 0.14 in the pom file and there are 
 no metastore scripts for 0.14 yet.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6511) casting from decimal to tinyint,smallint, int and bigint generates different result when vectorization is on

2014-03-05 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HIVE-6511:
---

Attachment: HIVE-6511.2.patch

 casting from decimal to tinyint,smallint, int and bigint generates different 
 result when vectorization is on
 

 Key: HIVE-6511
 URL: https://issues.apache.org/jira/browse/HIVE-6511
 Project: Hive
  Issue Type: Bug
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Attachments: HIVE-6511.1.patch, HIVE-6511.2.patch


 select dc,cast(dc as int), cast(dc as smallint),cast(dc as tinyint) from 
 vectortab10korc limit 20 generates the following result when vectorization is 
 enabled:
 {code}
 4619756289662.078125  -1628520834 -16770  126
 1553532646710.316406  -1245514442 -2762   54
 3367942487288.360352  688127224   -776  -8
 4386447830839.337891  1286221623  12087   55
 -3234165331139.458008 -54957251   27453   61
 -488378613475.326172  1247658269  -16099  29
 -493942492598.691406  -21253559   -19895  73
 3101852523586.039062  886135874   23618   66
 2544105595941.381836  1484956709  -23515  37
 -3997512403067.0625   1102149509  30597   -123
 -1183754978977.589355 1655994718  31070   94
 1408783849655.676758  34576568  -26440  -72
 -2993175106993.426758 417098319   27215   79
 3004723551798.100586  -1753555402 -8650   54
 1103792083527.786133  -14511544   -28088  72
 469767055288.485352   1615620024  26552   -72
 -1263700791098.294434 -980406074  12486   -58
 -4244889766496.484375 -1462078048 30112   -96
 -3962729491139.782715 1525323068  -27332  60
 NULL  NULL  NULL  NULL
 {code}
 When vectorization is disabled, result looks like this:
 {code}
 4619756289662.078125  -1628520834 -16770  126
 1553532646710.316406  -1245514442 -2762   54
 3367942487288.360352  688127224   -776  -8
 4386447830839.337891  1286221623  12087   55
 -3234165331139.458008 -54957251   27453   61
 -488378613475.326172  1247658269  -16099  29
 -493942492598.691406  -21253558   -19894  74
 3101852523586.039062  886135874   23618   66
 2544105595941.381836  1484956709  -23515  37
 -3997512403067.0625   1102149509  30597   -123
 -1183754978977.589355 1655994719  31071   95
 1408783849655.676758  34576567  -26441  -73
 -2993175106993.426758 417098319   27215   79
 3004723551798.100586  -1753555402 -8650   54
 1103792083527.786133  -14511545   -28089  71
 469767055288.485352   1615620024  26552   -72
 -1263700791098.294434 -980406074  12486   -58
 -4244889766496.484375 -1462078048 30112   -96
 -3962729491139.782715 1525323069  -27331  61
 NULL  NULL  NULL  NULL
 {code}
 This issue is visible only for certain decimal values. In the above example, 
 rows 7, 11, 12, and 15 generate different results.
 vectortab10korc table schema:
 {code}
 t   tinyint         from deserializer   
 si  smallint        from deserializer   
 i   int             from deserializer   
 b   bigint          from deserializer   
 f   float           from deserializer   
 d   double          from deserializer   
 dc  decimal(38,18)  from deserializer   
 bo  boolean         from deserializer   
 s   string          from deserializer   
 s2  string          from deserializer   
 ts  timestamp       from deserializer   

 # Detailed Table Information   
 Database: default  
 Owner:xyz  
 CreateTime:   Tue Feb 25 21:54:28 UTC 2014 
 LastAccessTime:   UNKNOWN  
 Protect Mode: None 
 Retention:0
 Location: 
 hdfs://host1.domain.com:8020/apps/hive/warehouse/vectortab10korc 
 Table Type:   MANAGED_TABLE
 Table Parameters:  
   COLUMN_STATS_ACCURATE   true
   numFiles  1   
   numRows 1   
   rawDataSize 0   
   totalSize   344748  
   transient_lastDdlTime   1393365281  

 # Storage Information  
 SerDe Library:org.apache.hadoop.hive.ql.io.orc.OrcSerde
 InputFormat:  org.apache.hadoop.hive.ql.io.orc.OrcInputFormat  
 OutputFormat: 

[jira] [Updated] (HIVE-6511) casting from decimal to tinyint,smallint, int and bigint generates different result when vectorization is on

2014-03-05 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HIVE-6511:
---

Status: Patch Available  (was: Open)

 casting from decimal to tinyint,smallint, int and bigint generates different 
 result when vectorization is on
 

 Key: HIVE-6511
 URL: https://issues.apache.org/jira/browse/HIVE-6511
 Project: Hive
  Issue Type: Bug
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Attachments: HIVE-6511.1.patch, HIVE-6511.2.patch


 select dc,cast(dc as int), cast(dc as smallint),cast(dc as tinyint) from 
 vectortab10korc limit 20 generates the following result when vectorization is 
 enabled:
 {code}
 4619756289662.078125  -1628520834 -16770  126
 1553532646710.316406  -1245514442 -2762   54
 3367942487288.360352  688127224   -776  -8
 4386447830839.337891  1286221623  12087   55
 -3234165331139.458008 -54957251   27453   61
 -488378613475.326172  1247658269  -16099  29
 -493942492598.691406  -21253559   -19895  73
 3101852523586.039062  886135874   23618   66
 2544105595941.381836  1484956709  -23515  37
 -3997512403067.0625   1102149509  30597   -123
 -1183754978977.589355 1655994718  31070   94
 1408783849655.676758  34576568  -26440  -72
 -2993175106993.426758 417098319   27215   79
 3004723551798.100586  -1753555402 -8650   54
 1103792083527.786133  -14511544   -28088  72
 469767055288.485352   1615620024  26552   -72
 -1263700791098.294434 -980406074  12486   -58
 -4244889766496.484375 -1462078048 30112   -96
 -3962729491139.782715 1525323068  -27332  60
 NULL  NULL  NULL  NULL
 {code}
 When vectorization is disabled, result looks like this:
 {code}
 4619756289662.078125  -1628520834 -16770  126
 1553532646710.316406  -1245514442 -2762   54
 3367942487288.360352  688127224   -776  -8
 4386447830839.337891  1286221623  12087   55
 -3234165331139.458008 -54957251   27453   61
 -488378613475.326172  1247658269  -16099  29
 -493942492598.691406  -21253558   -19894  74
 3101852523586.039062  886135874   23618   66
 2544105595941.381836  1484956709  -23515  37
 -3997512403067.0625   1102149509  30597   -123
 -1183754978977.589355 1655994719  31071   95
 1408783849655.676758  34576567  -26441  -73
 -2993175106993.426758 417098319   27215   79
 3004723551798.100586  -1753555402 -8650   54
 1103792083527.786133  -14511545   -28089  71
 469767055288.485352   1615620024  26552   -72
 -1263700791098.294434 -980406074  12486   -58
 -4244889766496.484375 -1462078048 30112   -96
 -3962729491139.782715 1525323069  -27331  61
 NULL  NULL  NULL  NULL
 {code}
 This issue is visible only for certain decimal values. In the above example, 
 rows 7, 11, 12, and 15 generate different results.
 vectortab10korc table schema:
 {code}
 t   tinyint         from deserializer   
 si  smallint        from deserializer   
 i   int             from deserializer   
 b   bigint          from deserializer   
 f   float           from deserializer   
 d   double          from deserializer   
 dc  decimal(38,18)  from deserializer   
 bo  boolean         from deserializer   
 s   string          from deserializer   
 s2  string          from deserializer   
 ts  timestamp       from deserializer   

 # Detailed Table Information   
 Database: default  
 Owner:xyz  
 CreateTime:   Tue Feb 25 21:54:28 UTC 2014 
 LastAccessTime:   UNKNOWN  
 Protect Mode: None 
 Retention:0
 Location: 
 hdfs://host1.domain.com:8020/apps/hive/warehouse/vectortab10korc 
 Table Type:   MANAGED_TABLE
 Table Parameters:  
   COLUMN_STATS_ACCURATE   true
   numFiles  1   
   numRows 1   
   rawDataSize 0   
   totalSize   344748  
   transient_lastDdlTime   1393365281  

 # Storage Information  
 SerDe Library:org.apache.hadoop.hive.ql.io.orc.OrcSerde
 InputFormat:  org.apache.hadoop.hive.ql.io.orc.OrcInputFormat  
 OutputFormat: 

Review Request 18808: Casting from decimal to tinyint, smallint, int and bigint generates different result when vectorization is on

2014-03-05 Thread Jitendra Pandey

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18808/
---

Review request for hive and Eric Hanson.


Bugs: HIVE-6511
https://issues.apache.org/jira/browse/HIVE-6511


Repository: hive-git


Description
---

Casting from decimal to tinyint,smallint, int and bigint generates different 
result when vectorization is on.


Diffs
-

  common/src/java/org/apache/hadoop/hive/common/type/Decimal128.java a5d7399 
  common/src/test/org/apache/hadoop/hive/common/type/TestDecimal128.java 
426c03d 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/CastDecimalToLong.java
 d5f34d5 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/CastDecimalToTimestamp.java
 df7e1ee 

Diff: https://reviews.apache.org/r/18808/diff/


Testing
---


Thanks,

Jitendra Pandey
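
A side note that helps when reading the result tables in the bug report above:
the smallint and tinyint columns follow mechanically from the wider cast value
via Java narrowing (the low 16 and low 8 bits), so a one-unit difference in
the cast-to-long value, as in rows 7, 11, 12, and 15, ripples through every
narrower column. A standalone check of that arithmetic (plain Java semantics,
independent of the patch):

{code}
public class NarrowingDemo {
  public static void main(String[] args) {
    // Row 1 of the report: cast(dc as int) gave -1628520834.
    long v = -1628520834L;
    System.out.println((int) v);   // -1628520834
    System.out.println((short) v); // -16770 (low 16 bits, matches the report)
    System.out.println((byte) v);  // 126    (low 8 bits, matches the report)

    // Row 7: the vectorized and non-vectorized values differ by exactly 1,
    // so the narrower columns differ by 1 as well.
    System.out.println((short) -21253559L); // -19895 (vectorized)
    System.out.println((short) -21253558L); // -19894 (non-vectorized)
    System.out.println((byte) -21253559L);  // 73
    System.out.println((byte) -21253558L);  // 74
  }
}
{code}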



[jira] [Updated] (HIVE-6511) casting from decimal to tinyint,smallint, int and bigint generates different result when vectorization is on

2014-03-05 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HIVE-6511:
---

Attachment: HIVE-6511.3.patch

Review board: https://reviews.apache.org/r/18808/

 casting from decimal to tinyint,smallint, int and bigint generates different 
 result when vectorization is on
 

 Key: HIVE-6511
 URL: https://issues.apache.org/jira/browse/HIVE-6511
 Project: Hive
  Issue Type: Bug
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Attachments: HIVE-6511.1.patch, HIVE-6511.2.patch, HIVE-6511.3.patch


 select dc,cast(dc as int), cast(dc as smallint),cast(dc as tinyint) from 
 vectortab10korc limit 20 generates the following result when vectorization is 
 enabled:
 {code}
 4619756289662.078125  -1628520834 -16770  126
 1553532646710.316406  -1245514442 -2762   54
 3367942487288.360352  688127224   -776  -8
 4386447830839.337891  1286221623  12087   55
 -3234165331139.458008 -54957251   27453   61
 -488378613475.326172  1247658269  -16099  29
 -493942492598.691406  -21253559   -19895  73
 3101852523586.039062  886135874   23618   66
 2544105595941.381836  1484956709  -23515  37
 -3997512403067.0625   1102149509  30597   -123
 -1183754978977.589355 1655994718  31070   94
 1408783849655.676758  34576568  -26440  -72
 -2993175106993.426758 417098319   27215   79
 3004723551798.100586  -1753555402 -8650   54
 1103792083527.786133  -14511544   -28088  72
 469767055288.485352   1615620024  26552   -72
 -1263700791098.294434 -980406074  12486   -58
 -4244889766496.484375 -1462078048 30112   -96
 -3962729491139.782715 1525323068  -27332  60
 NULL  NULL  NULL  NULL
 {code}
 When vectorization is disabled, result looks like this:
 {code}
 4619756289662.078125  -1628520834 -16770  126
 1553532646710.316406  -1245514442 -2762   54
 3367942487288.360352  688127224   -776  -8
 4386447830839.337891  1286221623  12087   55
 -3234165331139.458008 -54957251   27453   61
 -488378613475.326172  1247658269  -16099  29
 -493942492598.691406  -21253558   -19894  74
 3101852523586.039062  886135874   23618   66
 2544105595941.381836  1484956709  -23515  37
 -3997512403067.0625   1102149509  30597   -123
 -1183754978977.589355 1655994719  31071   95
 1408783849655.676758  34576567  -26441  -73
 -2993175106993.426758 417098319   27215   79
 3004723551798.100586  -1753555402 -8650   54
 1103792083527.786133  -14511545   -28089  71
 469767055288.485352   1615620024  26552   -72
 -1263700791098.294434 -980406074  12486   -58
 -4244889766496.484375 -1462078048 30112   -96
 -3962729491139.782715 1525323069  -27331  61
 NULL  NULL  NULL  NULL
 {code}
 This issue is visible only for certain decimal values. In the above example, 
 rows 7, 11, 12, and 15 generate different results.
 vectortab10korc table schema:
 {code}
 t   tinyint         from deserializer   
 si  smallint        from deserializer   
 i   int             from deserializer   
 b   bigint          from deserializer   
 f   float           from deserializer   
 d   double          from deserializer   
 dc  decimal(38,18)  from deserializer   
 bo  boolean         from deserializer   
 s   string          from deserializer   
 s2  string          from deserializer   
 ts  timestamp       from deserializer   

 # Detailed Table Information   
 Database: default  
 Owner:xyz  
 CreateTime:   Tue Feb 25 21:54:28 UTC 2014 
 LastAccessTime:   UNKNOWN  
 Protect Mode: None 
 Retention:0
 Location: 
 hdfs://host1.domain.com:8020/apps/hive/warehouse/vectortab10korc 
 Table Type:   MANAGED_TABLE
 Table Parameters:  
   COLUMN_STATS_ACCURATE   true
   numFiles  1   
   numRows 1   
   rawDataSize 0   
   totalSize   344748  
   transient_lastDdlTime   1393365281  

 # Storage Information  
 SerDe Library:org.apache.hadoop.hive.ql.io.orc.OrcSerde
 InputFormat:  

[jira] [Updated] (HIVE-6511) casting from decimal to tinyint,smallint, int and bigint generates different result when vectorization is on

2014-03-05 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HIVE-6511:
---

Status: Patch Available  (was: Open)

 casting from decimal to tinyint,smallint, int and bigint generates different 
 result when vectorization is on
 

 Key: HIVE-6511
 URL: https://issues.apache.org/jira/browse/HIVE-6511
 Project: Hive
  Issue Type: Bug
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Attachments: HIVE-6511.1.patch, HIVE-6511.2.patch, HIVE-6511.3.patch


 select dc,cast(dc as int), cast(dc as smallint),cast(dc as tinyint) from 
 vectortab10korc limit 20 generates the following result when vectorization is 
 enabled:
 {code}
 4619756289662.078125  -1628520834 -16770  126
 1553532646710.316406  -1245514442 -2762   54
 3367942487288.360352  688127224   -776  -8
 4386447830839.337891  1286221623  12087   55
 -3234165331139.458008 -54957251   27453   61
 -488378613475.326172  1247658269  -16099  29
 -493942492598.691406  -21253559   -19895  73
 3101852523586.039062  886135874   23618   66
 2544105595941.381836  1484956709  -23515  37
 -3997512403067.0625   1102149509  30597   -123
 -1183754978977.589355 1655994718  31070   94
 1408783849655.676758  34576568  -26440  -72
 -2993175106993.426758 417098319   27215   79
 3004723551798.100586  -1753555402 -8650   54
 1103792083527.786133  -14511544   -28088  72
 469767055288.485352   1615620024  26552   -72
 -1263700791098.294434 -980406074  12486   -58
 -4244889766496.484375 -1462078048 30112   -96
 -3962729491139.782715 1525323068  -27332  60
 NULL  NULL  NULL  NULL
 {code}
 When vectorization is disabled, result looks like this:
 {code}
 4619756289662.078125  -1628520834 -16770  126
 1553532646710.316406  -1245514442 -2762   54
 3367942487288.360352  688127224   -776  -8
 4386447830839.337891  1286221623  12087   55
 -3234165331139.458008 -54957251   27453   61
 -488378613475.326172  1247658269  -16099  29
 -493942492598.691406  -21253558   -19894  74
 3101852523586.039062  886135874   23618   66
 2544105595941.381836  1484956709  -23515  37
 -3997512403067.0625   1102149509  30597   -123
 -1183754978977.589355 1655994719  31071   95
 1408783849655.676758  34576567  -26441  -73
 -2993175106993.426758 417098319   27215   79
 3004723551798.100586  -1753555402 -8650   54
 1103792083527.786133  -14511545   -28089  71
 469767055288.485352   1615620024  26552   -72
 -1263700791098.294434 -980406074  12486   -58
 -4244889766496.484375 -1462078048 30112   -96
 -3962729491139.782715 1525323069  -27331  61
 NULL  NULLNULLNULL
 {code}
 This issue is visible only for certain decimal values. In the above example, 
 rows 7, 11, 12, and 15 generate different results.
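 The delta on those rows matches what you would get if the vectorized path 
 rounded the decimal to the nearest whole number before narrowing, while the 
 row-mode path truncates toward zero. A small, self-contained sketch of that 
 hypothesis (plain Java, not Hive's actual code path; RoundingMode.HALF_UP is 
 an assumption) reproduces the differing values for the rows above:
 {code}
 import java.math.BigDecimal;
 import java.math.RoundingMode;

 public class DecimalCastDemo {
   static void compare(String dc) {
     BigDecimal d = new BigDecimal(dc);
     long truncated = d.longValue();                                   // toward zero
     long rounded = d.setScale(0, RoundingMode.HALF_UP).longValue();   // to nearest
     // Narrowing long -> int -> smallint -> tinyint simply discards high bits.
     System.out.printf("%s  trunc: %d %d %d  round: %d %d %d%n", dc,
         (int) truncated, (short) truncated, (byte) truncated,
         (int) rounded, (short) rounded, (byte) rounded);
   }

   public static void main(String[] args) {
     compare("-493942492598.691406"); // row 7: the two conversions differ by 1
     compare("3101852523586.039062"); // row 8: fraction below .5, both agree
   }
 }
 {code}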
 vectortab10korc table schema:
 {code}
 t tinyint from deserializer   
 sismallintfrom deserializer   
 i int from deserializer   
 b bigint  from deserializer   
 f float   from deserializer   
 d double  from deserializer   
 dcdecimal(38,18)  from deserializer   
 boboolean from deserializer   
 s string  from deserializer   
 s2string  from deserializer   
 tstimestamp   from deserializer   

 # Detailed Table Information   
 Database: default  
 Owner:xyz  
 CreateTime:   Tue Feb 25 21:54:28 UTC 2014 
 LastAccessTime:   UNKNOWN  
 Protect Mode: None 
 Retention:0
 Location: 
 hdfs://host1.domain.com:8020/apps/hive/warehouse/vectortab10korc 
 Table Type:   MANAGED_TABLE
 Table Parameters:  
   COLUMN_STATS_ACCURATE   true
   numFiles1   
   numRows 1   
   rawDataSize 0   
   totalSize   344748  
   transient_lastDdlTime   1393365281  

 # Storage Information  
 SerDe Library:org.apache.hadoop.hive.ql.io.orc.OrcSerde
 InputFormat:  org.apache.hadoop.hive.ql.io.orc.OrcInputFormat  
 OutputFormat: 

[jira] [Updated] (HIVE-6511) casting from decimal to tinyint,smallint, int and bigint generates different result when vectorization is on

2014-03-05 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HIVE-6511:
---

Status: Open  (was: Patch Available)

 casting from decimal to tinyint,smallint, int and bigint generates different 
 result when vectorization is on
 

 Key: HIVE-6511
 URL: https://issues.apache.org/jira/browse/HIVE-6511
 Project: Hive
  Issue Type: Bug
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Attachments: HIVE-6511.1.patch, HIVE-6511.2.patch, HIVE-6511.3.patch


 select dc,cast(dc as int), cast(dc as smallint),cast(dc as tinyint) from 
 vectortab10korc limit 20 generates the following result when vectorization is 
 enabled:
 {code}
 4619756289662.078125  -1628520834 -16770  126
 1553532646710.316406  -1245514442 -2762   54
 3367942487288.360352  688127224   -776-8
 4386447830839.337891  1286221623  12087   55
 -3234165331139.458008 -54957251   27453   61
 -488378613475.326172  1247658269  -16099  29
 -493942492598.691406  -21253559   -19895  73
 3101852523586.039062  886135874   23618   66
 2544105595941.381836  1484956709  -23515  37
 -3997512403067.0625   1102149509  30597   -123
 -1183754978977.589355 1655994718  31070   94
 1408783849655.676758  34576568-26440  -72
 -2993175106993.426758 417098319   27215   79
 3004723551798.100586  -1753555402 -8650   54
 1103792083527.786133  -14511544   -28088  72
 469767055288.485352   1615620024  26552   -72
 -1263700791098.294434 -980406074  12486   -58
 -4244889766496.484375 -1462078048 30112   -96
 -3962729491139.782715 1525323068  -27332  60
 NULL  NULLNULLNULL
 {code}
 When vectorization is disabled, the result looks like this:
 {code}
 4619756289662.078125  -1628520834 -16770  126
 1553532646710.316406  -1245514442 -2762   54
 3367942487288.360352  688127224   -776-8
 4386447830839.337891  1286221623  12087   55
 -3234165331139.458008 -54957251   27453   61
 -488378613475.326172  1247658269  -16099  29
 -493942492598.691406  -21253558   -19894  74
 3101852523586.039062  886135874   23618   66
 2544105595941.381836  1484956709  -23515  37
 -3997512403067.0625   1102149509  30597   -123
 -1183754978977.589355 1655994719  31071   95
 1408783849655.676758  34576567-26441  -73
 -2993175106993.426758 417098319   27215   79
 3004723551798.100586  -1753555402 -8650   54
 1103792083527.786133  -14511545   -28089  71
 469767055288.485352   1615620024  26552   -72
 -1263700791098.294434 -980406074  12486   -58
 -4244889766496.484375 -1462078048 30112   -96
 -3962729491139.782715 1525323069  -27331  61
 NULL  NULLNULLNULL
 {code}
 This issue is visible only for certain decimal values. In the above example, 
 rows 7, 11, 12, and 15 generate different results.
 vectortab10korc table schema:
 {code}
 t tinyint from deserializer   
 sismallintfrom deserializer   
 i int from deserializer   
 b bigint  from deserializer   
 f float   from deserializer   
 d double  from deserializer   
 dcdecimal(38,18)  from deserializer   
 boboolean from deserializer   
 s string  from deserializer   
 s2string  from deserializer   
 tstimestamp   from deserializer   

 # Detailed Table Information   
 Database: default  
 Owner:xyz  
 CreateTime:   Tue Feb 25 21:54:28 UTC 2014 
 LastAccessTime:   UNKNOWN  
 Protect Mode: None 
 Retention:0
 Location: 
 hdfs://host1.domain.com:8020/apps/hive/warehouse/vectortab10korc 
 Table Type:   MANAGED_TABLE
 Table Parameters:  
   COLUMN_STATS_ACCURATE   true
   numFiles1   
   numRows 1   
   rawDataSize 0   
   totalSize   344748  
   transient_lastDdlTime   1393365281  

 # Storage Information  
 SerDe Library:org.apache.hadoop.hive.ql.io.orc.OrcSerde
 InputFormat:  org.apache.hadoop.hive.ql.io.orc.OrcInputFormat  
 OutputFormat: 

[jira] [Commented] (HIVE-6538) yet another annoying exception in test logs

2014-03-05 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13921469#comment-13921469
 ] 

Szehon Ho commented on HIVE-6538:
-

Added a check so that FunctionRegistry does not log NoSuchObjectException; a sketch of the idea follows.
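
A simplified, self-contained sketch of that kind of guard (hypothetical names and logging, not the exact code in the patch): treat a missing function as an ordinary lookup miss and reserve ERROR logging for genuinely unexpected failures.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class QuietFunctionLookup {
  // Stand-in for the metastore's exception type.
  static class NoSuchObjectException extends Exception {
    NoSuchObjectException(String msg) { super(msg); }
  }

  private final Map<String, String> registry = new ConcurrentHashMap<>();

  private String fetchFromMetastore(String name) throws NoSuchObjectException {
    String fn = registry.get(name);
    if (fn == null) {
      throw new NoSuchObjectException("Function " + name + " does not exist");
    }
    return fn;
  }

  public String lookup(String name) {
    try {
      return fetchFromMetastore(name);
    } catch (NoSuchObjectException e) {
      // Expected miss (e.g. a function that was never registered): no ERROR log.
      return null;
    } catch (RuntimeException e) {
      // Anything else is a real problem and should still be logged.
      System.err.println("Unexpected metastore failure for " + name + ": " + e);
      throw e;
    }
  }

  public static void main(String[] args) {
    QuietFunctionLookup lookup = new QuietFunctionLookup();
    // Prints "null" quietly instead of dumping a stack trace into the test logs.
    System.out.println(lookup.lookup("default.qtest_get_java_boolean"));
  }
}
{code}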

 yet another annoying exception in test logs
 ---

 Key: HIVE-6538
 URL: https://issues.apache.org/jira/browse/HIVE-6538
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Szehon Ho
Priority: Trivial
 Attachments: HIVE-6538.2.patch, HIVE-6538.patch


 Whenever you look at failed q tests you have to go through this useless 
 exception.
 {noformat}
 2014-03-03 11:22:54,872 ERROR metastore.RetryingHMSHandler 
 (RetryingHMSHandler.java:invoke(143)) - 
 MetaException(message:NoSuchObjectException(message:Function 
 default.qtest_get_java_boolean does not exist))
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newMetaException(HiveMetaStore.java:4575)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_function(HiveMetaStore.java:4702)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
   at $Proxy8.get_function(Unknown Source)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getFunction(HiveMetaStoreClient.java:1526)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
   at $Proxy9.getFunction(Unknown Source)
   at org.apache.hadoop.hive.ql.metadata.Hive.getFunction(Hive.java:2603)
   at 
 org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfoFromMetastore(FunctionRegistry.java:546)
   at 
 org.apache.hadoop.hive.ql.exec.FunctionRegistry.getQualifiedFunctionInfo(FunctionRegistry.java:578)
   at 
 org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfo(FunctionRegistry.java:599)
   at 
 org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfo(FunctionRegistry.java:606)
   at 
 org.apache.hadoop.hive.ql.parse.FunctionSemanticAnalyzer.analyzeDropFunction(FunctionSemanticAnalyzer.java:94)
   at 
 org.apache.hadoop.hive.ql.parse.FunctionSemanticAnalyzer.analyzeInternal(FunctionSemanticAnalyzer.java:60)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:445)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:345)
   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1078)
   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1121)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1014)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1004)
   at org.apache.hadoop.hive.ql.QTestUtil.runCmd(QTestUtil.java:655)
   at org.apache.hadoop.hive.ql.QTestUtil.createSources(QTestUtil.java:772)
   at 
 org.apache.hadoop.hive.cli.TestCliDriver.&lt;clinit&gt;(TestCliDriver.java:46)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.internal.runners.SuiteMethod.testFromSuiteMethod(SuiteMethod.java:34)
   at org.junit.internal.runners.SuiteMethod.&lt;init&gt;(SuiteMethod.java:23)
   at 
 org.junit.internal.builders.SuiteMethodBuilder.runnerForClass(SuiteMethodBuilder.java:14)
   at 
 org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:57)
   at 
 org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:29)
   at 
 org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:57)
   at 
 org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:24)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
   at 
 

[jira] [Updated] (HIVE-6538) yet another annoying exception in test logs

2014-03-05 Thread Szehon Ho (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-6538:


Attachment: HIVE-6538.2.patch

 yet another annoying exception in test logs
 ---

 Key: HIVE-6538
 URL: https://issues.apache.org/jira/browse/HIVE-6538
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Szehon Ho
Priority: Trivial
 Attachments: HIVE-6538.2.patch, HIVE-6538.patch


 Whenever you look at failed q tests you have to go through this useless 
 exception.
 {noformat}
 2014-03-03 11:22:54,872 ERROR metastore.RetryingHMSHandler 
 (RetryingHMSHandler.java:invoke(143)) - 
 MetaException(message:NoSuchObjectException(message:Function 
 default.qtest_get_java_boolean does not exist))
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newMetaException(HiveMetaStore.java:4575)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_function(HiveMetaStore.java:4702)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
   at $Proxy8.get_function(Unknown Source)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getFunction(HiveMetaStoreClient.java:1526)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
   at $Proxy9.getFunction(Unknown Source)
   at org.apache.hadoop.hive.ql.metadata.Hive.getFunction(Hive.java:2603)
   at 
 org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfoFromMetastore(FunctionRegistry.java:546)
   at 
 org.apache.hadoop.hive.ql.exec.FunctionRegistry.getQualifiedFunctionInfo(FunctionRegistry.java:578)
   at 
 org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfo(FunctionRegistry.java:599)
   at 
 org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfo(FunctionRegistry.java:606)
   at 
 org.apache.hadoop.hive.ql.parse.FunctionSemanticAnalyzer.analyzeDropFunction(FunctionSemanticAnalyzer.java:94)
   at 
 org.apache.hadoop.hive.ql.parse.FunctionSemanticAnalyzer.analyzeInternal(FunctionSemanticAnalyzer.java:60)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:445)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:345)
   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1078)
   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1121)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1014)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1004)
   at org.apache.hadoop.hive.ql.QTestUtil.runCmd(QTestUtil.java:655)
   at org.apache.hadoop.hive.ql.QTestUtil.createSources(QTestUtil.java:772)
   at 
 org.apache.hadoop.hive.cli.TestCliDriver.&lt;clinit&gt;(TestCliDriver.java:46)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.internal.runners.SuiteMethod.testFromSuiteMethod(SuiteMethod.java:34)
   at org.junit.internal.runners.SuiteMethod.&lt;init&gt;(SuiteMethod.java:23)
   at 
 org.junit.internal.builders.SuiteMethodBuilder.runnerForClass(SuiteMethodBuilder.java:14)
   at 
 org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:57)
   at 
 org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:29)
   at 
 org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:57)
   at 
 org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:24)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
   at 
 

[jira] [Updated] (HIVE-6546) WebHCat job submission for pig with -useHCatalog argument fails on Windows

2014-03-05 Thread Eric Hanson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Hanson updated HIVE-6546:
--

Fix Version/s: 0.13.0
 Assignee: Eric Hanson
   Status: Patch Available  (was: Open)

 WebHCat job submission for pig with -useHCatalog argument fails on Windows
 --

 Key: HIVE-6546
 URL: https://issues.apache.org/jira/browse/HIVE-6546
 Project: Hive
  Issue Type: Bug
  Components: WebHCat
Affects Versions: 0.12.0, 0.11.0, 0.13.0
 Environment: HDInsight deploying HDP 1.3:  
 c:\apps\dist\pig-0.11.0.1.3.2.0-05
 Also on Windows HDP 1.3 one-box configuration.
Reporter: Eric Hanson
Assignee: Eric Hanson
 Fix For: 0.13.0

 Attachments: HIVE-6546.01.patch


 On a one-box Windows setup, do the following from a PowerShell prompt:
 cmd /c curl.exe -s `
   -d user.name=hadoop `
   -d arg=-useHCatalog `
   -d execute="emp = load '/data/emp/emp_0.dat'; dump emp;" `
   -d statusdir=/tmp/webhcat.output01 `
   'http://localhost:50111/templeton/v1/pig' -v
 The job fails with error code 7, but it should run. 
 I traced this down to the following. In the job configuration for the 
 TempletonJobController, we have templeton.args set to
 cmd,/c,call,C:\\hadooppig-0.11.0.1.3.0.0-0846/bin/pig.cmd,-D__WEBHCAT_TOKEN_FILE_LOCATION__=-useHCatalog,-execute,emp
  = load '/data/emp/emp_0.dat'; dump emp;
 Notice the = sign before -useHCatalog. I think this should be a comma.
 The bad string "-D__WEBHCAT_TOKEN_FILE_LOCATION__=-useHCatalog" gets created 
 in org.apache.hadoop.util.GenericOptionsParser.preProcessForWindows().
 It happens at line 434:
 {code}
 } else {
   if (i < args.length - 1) {
     prop += "=" + args[++i];   // RIGHT HERE! at iterations i = 37, 38
   }
 }
 {code}
 Bug is here:
 {code}
 if (prop != null) {
   if (prop.contains("=")) {
     // everything good
   } else {
     // "-D__WEBHCAT_TOKEN_FILE_LOCATION__" does not contain an equals sign,
     // so this branch runs and appends "=-useHCatalog"
     if (i < args.length - 1) {
       prop += "=" + args[++i];
     }
   }
   newArgs.add(prop);
 }
 {code}
 One possible fix is to change the string constant 
 org.apache.hcatalog.templeton.tool.TempletonControllerJob.TOKEN_FILE_ARG_PLACEHOLDER
  to have an = sign in it. Or, preProcessForWindows() itself could be 
 changed.
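 For anyone reproducing this, here is a self-contained approximation of the 
 pre-processing loop (simplified; the real 
 GenericOptionsParser.preProcessForWindows() differs in detail) showing both 
 the failure and why a placeholder carrying its own = sign avoids it. The 
 placeholder value in the second call is illustrative only; any value 
 containing = would do.
 {code}
 import java.util.ArrayList;
 import java.util.List;

 public class PlaceholderDemo {
   static List<String> preProcess(String[] args) {
     List<String> newArgs = new ArrayList<>();
     for (int i = 0; i < args.length; i++) {
       String prop = null;
       if (args[i].equals("-D")) {
         if (i < args.length - 1) prop = args[++i];
       } else if (args[i].startsWith("-D")) {
         prop = args[i];
       } else {
         newArgs.add(args[i]);
       }
       if (prop != null) {
         if (!prop.contains("=") && i < args.length - 1) {
           prop += "=" + args[++i];   // swallows the next argument
         }
         newArgs.add(prop);
       }
     }
     return newArgs;
   }

   public static void main(String[] args) {
     // Placeholder without '=': -useHCatalog gets glued onto it.
     System.out.println(preProcess(new String[] {
         "-D__WEBHCAT_TOKEN_FILE_LOCATION__", "-useHCatalog" }));
     // Placeholder carrying its own '=': -useHCatalog survives as its own arg.
     System.out.println(preProcess(new String[] {
         "-D__WEBHCAT_TOKEN_FILE_LOCATION__=placeholder", "-useHCatalog" }));
   }
 }
 {code}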



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6546) WebHCat job submission for pig with -useHCatalog argument fails on Windows

2014-03-05 Thread Eric Hanson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Hanson updated HIVE-6546:
--

Attachment: HIVE-6546.01.patch

Changed the constant placeholder to include an = sign.

 WebHCat job submission for pig with -useHCatalog argument fails on Windows
 --

 Key: HIVE-6546
 URL: https://issues.apache.org/jira/browse/HIVE-6546
 Project: Hive
  Issue Type: Bug
  Components: WebHCat
Affects Versions: 0.11.0, 0.12.0, 0.13.0
 Environment: HDInsight deploying HDP 1.3:  
 c:\apps\dist\pig-0.11.0.1.3.2.0-05
 Also on Windows HDP 1.3 one-box configuration.
Reporter: Eric Hanson
 Fix For: 0.13.0

 Attachments: HIVE-6546.01.patch


 On a one-box Windows setup, do the following from a PowerShell prompt:
 cmd /c curl.exe -s `
   -d user.name=hadoop `
   -d arg=-useHCatalog `
   -d execute="emp = load '/data/emp/emp_0.dat'; dump emp;" `
   -d statusdir=/tmp/webhcat.output01 `
   'http://localhost:50111/templeton/v1/pig' -v
 The job fails with error code 7, but it should run. 
 I traced this down to the following. In the job configuration for the 
 TempletonJobController, we have templeton.args set to
 cmd,/c,call,C:\\hadooppig-0.11.0.1.3.0.0-0846/bin/pig.cmd,-D__WEBHCAT_TOKEN_FILE_LOCATION__=-useHCatalog,-execute,emp
  = load '/data/emp/emp_0.dat'; dump emp;
 Notice the = sign before -useHCatalog. I think this should be a comma.
 The bad string "-D__WEBHCAT_TOKEN_FILE_LOCATION__=-useHCatalog" gets created 
 in org.apache.hadoop.util.GenericOptionsParser.preProcessForWindows().
 It happens at line 434:
 {code}
 } else {
   if (i < args.length - 1) {
     prop += "=" + args[++i];   // RIGHT HERE! at iterations i = 37, 38
   }
 }
 {code}
 Bug is here:
 {code}
 if (prop != null) {
   if (prop.contains("=")) {
     // everything good
   } else {
     // "-D__WEBHCAT_TOKEN_FILE_LOCATION__" does not contain an equals sign,
     // so this branch runs and appends "=-useHCatalog"
     if (i < args.length - 1) {
       prop += "=" + args[++i];
     }
   }
   newArgs.add(prop);
 }
 {code}
 One possible fix is to change the string constant 
 org.apache.hcatalog.templeton.tool.TempletonControllerJob.TOKEN_FILE_ARG_PLACEHOLDER
  to have an = sign in it. Or, preProcessForWindows() itself could be 
 changed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-6558) HiveServer2 Plain SASL authentication broken after hadoop 2.3 upgrade

2014-03-05 Thread Prasad Mujumdar (JIRA)
Prasad Mujumdar created HIVE-6558:
-

 Summary: HiveServer2 Plain SASL authentication broken after hadoop 
2.3 upgrade
 Key: HIVE-6558
 URL: https://issues.apache.org/jira/browse/HIVE-6558
 Project: Hive
  Issue Type: Bug
  Components: Authentication, HiveServer2
Affects Versions: 0.13.0
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar
Priority: Blocker


Java only includes a Plain SASL client, not a server, so HiveServer2 ships its 
own Plain SASL server implementation. Hadoop now has its own Plain SASL server 
([HADOOP-9020|https://issues.apache.org/jira/browse/HADOOP-9020]), which is part 
of the Hadoop 2.3 
[release|http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-common/releasenotes.html].
The two servers use different SASL callbacks, and both are registered with 
java.security.Provider via static code. As a result, a HiveServer2 instance 
could end up using Hadoop's Plain SASL server, which breaks authentication.
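
To make the clash concrete, the sketch below (demo names only; neither Hive's 
nor Hadoop's actual factory classes) shows how two components can both register 
a PLAIN SaslServerFactory, leaving the effective implementation to provider 
ordering rather than the caller's intent:

{code}
import java.security.Provider;
import java.security.Security;

public class PlainSaslClash {
  // A provider that advertises a SaslServerFactory for the PLAIN mechanism.
  static class DemoProvider extends Provider {
    DemoProvider(String name, String factoryClass) {
      super(name, 1.0, "demo PLAIN SASL server provider");
      put("SaslServerFactory.PLAIN", factoryClass);
    }
  }

  public static void main(String[] args) {
    // Two independent components each register a PLAIN server factory,
    // typically from a static initializer, so ordering is load-order dependent.
    Security.addProvider(new DemoProvider("HiveStylePlain", "demo.HivePlainFactory"));
    Security.addProvider(new DemoProvider("HadoopStylePlain", "demo.HadoopPlainFactory"));

    // Sasl.createSaslServer("PLAIN", ...) walks providers in this order and
    // uses the first matching factory; if that factory expects different
    // callbacks than the caller supplies, the handshake fails.
    for (Provider p : Security.getProviders()) {
      String factory = p.getProperty("SaslServerFactory.PLAIN");
      if (factory != null) {
        System.out.println(p.getName() + " -> " + factory);
      }
    }
  }
}
{code}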



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6460) Need new show functionality for transactions

2014-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13921543#comment-13921543
 ] 

Hive QA commented on HIVE-6460:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12632753/HIVE-6460.patch

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 5368 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.parse.authorization.plugin.sqlstd.TestOperation2Privilege.checkHiveOperationTypeMatch
org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveOperationType.checkHiveOperationTypeMatch
org.apache.hive.beeline.TestSchemaTool.testSchemaInit
org.apache.hive.beeline.TestSchemaTool.testSchemaUpgrade
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1633/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1633/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12632753

 Need new show functionality for transactions
 --

 Key: HIVE-6460
 URL: https://issues.apache.org/jira/browse/HIVE-6460
 Project: Hive
  Issue Type: Sub-task
  Components: SQL
Reporter: Alan Gates
Assignee: Alan Gates
 Fix For: 0.13.0

 Attachments: 6460.wip.patch, HIVE-6460.patch


 With the addition of transactions and compactions for delta files, some new 
 show commands are required.
 * show transactions to show currently open or aborted transactions
 * show compactions to show currently waiting or running compactions
 * show locks needs to work with the new db style of locks as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
