[jira] [Updated] (HIVE-10307) Support to use number literals in partition column

2015-04-30 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-10307:
--
Labels: TODOC1.2  (was: )

 Support to use number literals in partition column
 --

 Key: HIVE-10307
 URL: https://issues.apache.org/jira/browse/HIVE-10307
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 1.0.0
Reporter: Chaoyu Tang
Assignee: Chaoyu Tang
  Labels: TODOC1.2
 Fix For: 1.2.0

 Attachments: HIVE-10307.1.patch, HIVE-10307.2.patch, 
 HIVE-10307.3.patch, HIVE-10307.4.patch, HIVE-10307.5.patch, 
 HIVE-10307.6.patch, HIVE-10307.patch


 Data types like TinyInt, SmallInt, BigInt or Decimal can be expressed as 
 literals with a postfix like Y, S, L, or BD appended to the number. These 
 literals work in most Hive queries, but not when they are used as 
 partition column values. For a partitioned table like:
 {code}
 create table partcoltypenum (key int, value string) partitioned by (tint 
 tinyint, sint smallint, bint bigint);
 insert into partcoltypenum partition (tint=100Y, sint=1S, 
 bint=1000L) select key, value from src limit 30;
 {code}
 Queries like select, describe and drop partition do not work. For example,
 {code}
 select * from partcoltypenum where tint=100Y and sint=1S and 
 bint=1000L;
 {code}
 does not return any rows.
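The lookup gap can be pictured with a small sketch (plain Python, not Hive's actual code path; the {{normalize}} helper and its postfix handling are invented for illustration): partition values live in the metastore as strings, so a typed literal such as 100Y misses unless it is normalized to the column's value first.

```python
# Illustrative only: partition values are stored as strings in the
# metastore, so a postfixed number literal must be normalized before
# comparison, otherwise partition lookups return nothing.
def normalize(literal):
    """Strip a hypothetical number-literal postfix (Y/S/L/BD)."""
    for postfix in ("BD", "Y", "S", "L"):
        if literal.upper().endswith(postfix):
            return literal[: -len(postfix)]
    return literal

stored_partition_value = "100"          # as kept in the metastore
assert "100Y" != stored_partition_value # naive comparison misses
assert normalize("100Y") == stored_partition_value
assert normalize("1000L") == "1000"
```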



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-5672) Insert with custom separator not supported for non-local directory

2015-04-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14521140#comment-14521140
 ] 

Hive QA commented on HIVE-5672:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12729190/HIVE-5672.8.patch

{color:red}ERROR:{color} -1 due to 16 failed/errored test(s), 8827 tests 
executed
*Failed tests:*
{noformat}
TestCustomAuthentication - did not produce a TEST-*.xml file
TestMinimrCliDriver-bucketmapjoin6.q-constprog_partitioner.q-infer_bucket_sort_dyn_part.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-external_table_with_space_in_location_path.q-infer_bucket_sort_merge.q-auto_sortmerge_join_16.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-groupby2.q-import_exported_table.q-bucketizedhiveinputformat.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-index_bitmap3.q-stats_counter_partitioned.q-temp_table_external.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_map_operators.q-join1.q-bucketmapjoin7.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_num_buckets.q-disable_merge_for_bucketing.q-uber_reduce.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_reducers_power_two.q-scriptfile1.q-scriptfile1_win.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-leftsemijoin_mr.q-load_hdfs_file_with_space_in_the_name.q-root_dir_external_table.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-list_bucket_dml_10.q-bucket_num_reducers.q-bucket6.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-load_fs2.q-file_with_header_footer.q-ql_rewrite_gbtoidx_cbo_1.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-parallel_orderby.q-reduce_deduplicate.q-ql_rewrite_gbtoidx_cbo_2.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-ql_rewrite_gbtoidx.q-smb_mapjoin_8.q - did not produce a 
TEST-*.xml file
TestMinimrCliDriver-schemeAuthority2.q-bucket4.q-input16_cc.q-and-1-more - did 
not produce a TEST-*.xml file
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchEmptyCommit
org.apache.hive.jdbc.TestSSL.testSSLConnectionWithProperty
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3658/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3658/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3658/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 16 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12729190 - PreCommit-HIVE-TRUNK-Build

 Insert with custom separator not supported for non-local directory
 --

 Key: HIVE-5672
 URL: https://issues.apache.org/jira/browse/HIVE-5672
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0, 1.0.0
Reporter: Romain Rigaux
Assignee: Nemon Lou
 Attachments: HIVE-5672.1.patch, HIVE-5672.2.patch, HIVE-5672.3.patch, 
 HIVE-5672.4.patch, HIVE-5672.5.patch, HIVE-5672.5.patch.tar.gz, 
 HIVE-5672.6.patch, HIVE-5672.6.patch.tar.gz, HIVE-5672.7.patch, 
 HIVE-5672.7.patch.tar.gz, HIVE-5672.8.patch, HIVE-5672.8.patch.tar.gz


 https://issues.apache.org/jira/browse/HIVE-3682 is great, but non-local 
 directories don't seem to be supported:
 {code}
 insert overwrite directory '/tmp/test-02'
 row format delimited
 FIELDS TERMINATED BY ':'
 select description FROM sample_07
 {code}
 {code}
 Error while compiling statement: FAILED: ParseException line 2:0 cannot 
 recognize input near 'row' 'format' 'delimited' in select clause
 {code}
 This works (with 'local'):
 {code}
 insert overwrite local directory '/tmp/test-02'
 row format delimited
 FIELDS TERMINATED BY ':'
 select code, description FROM sample_07
 {code}
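The asymmetry above can be sketched as a toy acceptance test (a regex stand-in, not Hive's real grammar): the pre-patch parser only admits a ROW FORMAT clause after a *local* directory insert, so the HDFS form fails to parse.

```python
import re

# Toy illustration of the reported asymmetry; Hive's actual grammar is an
# ANTLR parser, not a regex. Only "local directory" inserts accept a
# row-format clause here.
PRE_PATCH = re.compile(
    r"insert overwrite local directory '[^']+'\s+row format delimited",
    re.IGNORECASE)

local_q = "insert overwrite local directory '/tmp/test-02' row format delimited"
hdfs_q  = "insert overwrite directory '/tmp/test-02' row format delimited"

assert PRE_PATCH.match(local_q)      # accepted, as the report shows
assert not PRE_PATCH.match(hdfs_q)   # ParseException in the report
```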



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9004) Reset doesn't work for the default empty value entry

2015-04-30 Thread Nemon Lou (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14521160#comment-14521160
 ] 

Nemon Lou commented on HIVE-9004:
-

Maybe you can unset it like this:
{code}
 ss.getConf().unset(key);
{code}
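A minimal sketch (plain Python dicts, not Hive's SessionState/HiveConf) of why an explicit unset step matters: restoring a snapshot by overwrite alone cannot remove keys that were set after the snapshot but absent from it.

```python
# Illustrative only: reset-by-overwrite vs reset-with-unset.
def reset_wrong(conf, snapshot):
    conf.update(snapshot)              # overwrites saved keys only

def reset_right(conf, snapshot):
    for k in list(conf):
        if k not in snapshot:
            del conf[k]                # the unset() step suggested above
    conf.update(snapshot)

snapshot = {}                          # hive.table.parameters.default undefined
conf = dict(snapshot)
conf["hive.table.parameters.default"] = "key1=value1"
reset_wrong(conf, snapshot)
assert "hive.table.parameters.default" in conf       # bug: still defined

conf2 = dict(snapshot)
conf2["hive.table.parameters.default"] = "key1=value1"
reset_right(conf2, snapshot)
assert "hive.table.parameters.default" not in conf2  # expected behavior
```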

 Reset doesn't work for the default empty value entry
 

 Key: HIVE-9004
 URL: https://issues.apache.org/jira/browse/HIVE-9004
 Project: Hive
  Issue Type: Bug
  Components: Configuration
Reporter: Cheng Hao
Assignee: Cheng Hao
 Fix For: 1.2.0

 Attachments: HIVE-9004.patch


 To illustrate, in the Hive CLI:
 {code}
 hive> set hive.table.parameters.default;
 hive.table.parameters.default is undefined
 hive> set hive.table.parameters.default=key1=value1;
 hive> reset;
 hive> set hive.table.parameters.default;
 hive.table.parameters.default=key1=value1
 {code}
 I think we expect the last output to be {{hive.table.parameters.default is 
 undefined}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10307) Support to use number literals in partition column

2015-04-30 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14521083#comment-14521083
 ] 

Lefty Leverenz commented on HIVE-10307:
---

Doc note:  This extends the use of *hive.typecheck.on.insert* and adds a 
parameter description.  The parameter was introduced in 0.12.0 but isn't 
documented in the wiki yet.

The use of number literals in partition columns also needs to be documented.  
Does it belong in the create and alter sections of the DDL doc or in the insert 
section(s) of the DML doc?  Or how about the Select doc?  Or Data Types?

* [Configuration Properties -- Query and DDL Execution | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-QueryandDDLExecution]
* [DDL -- Partitioned Tables | 
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-PartitionedTables]
* [DDL -- Alter Partition | 
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-AlterPartition]
* [DML -- Inserting data into Hive Tables from queries | 
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-InsertingdataintoHiveTablesfromqueries]
* [DML -- Inserting values into tables from SQL | 
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-InsertingvaluesintotablesfromSQL]
* [Select -- Partition Based Queries | 
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Select#LanguageManualSelect-PartitionBasedQueries]
* [Data Types -- Column Types | 
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-ColumnTypes]
* [Data Types -- Literals | 
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-Literals]

 Support to use number literals in partition column
 --

 Key: HIVE-10307
 URL: https://issues.apache.org/jira/browse/HIVE-10307
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 1.0.0
Reporter: Chaoyu Tang
Assignee: Chaoyu Tang
  Labels: TODOC1.2
 Fix For: 1.2.0

 Attachments: HIVE-10307.1.patch, HIVE-10307.2.patch, 
 HIVE-10307.3.patch, HIVE-10307.4.patch, HIVE-10307.5.patch, 
 HIVE-10307.6.patch, HIVE-10307.patch


 Data types like TinyInt, SmallInt, BigInt or Decimal can be expressed as 
 literals with a postfix like Y, S, L, or BD appended to the number. These 
 literals work in most Hive queries, but not when they are used as 
 partition column values. For a partitioned table like:
 {code}
 create table partcoltypenum (key int, value string) partitioned by (tint 
 tinyint, sint smallint, bint bigint);
 insert into partcoltypenum partition (tint=100Y, sint=1S, 
 bint=1000L) select key, value from src limit 30;
 {code}
 Queries like select, describe and drop partition do not work. For example,
 {code}
 select * from partcoltypenum where tint=100Y and sint=1S and 
 bint=1000L;
 {code}
 does not return any rows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10488) cast DATE as TIMESTAMP returns incorrect values

2015-04-30 Thread Alexander Pivovarov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14521089#comment-14521089
 ] 

Alexander Pivovarov commented on HIVE-10488:


Looks like the ORC table contains int values instead of date values,
e.g. 1996-01-01 = 1994.

I got very similar results to those in the description when I removed the 
quotation marks wrapping the date values:
{code}
--  1996-01-01 = 1994
hive> select cast(1996-01-01 as timestamp);
OK
1969-12-31 16:00:01.994

--  2000-01-01 = 1998
hive> select cast(2000-01-01 as timestamp);
OK
1969-12-31 16:00:01.998

--  2000-12-31 = 1957
hive> select cast(2000-12-31 as timestamp);
OK
1969-12-31 16:00:01.957
{code}
My TimeZone is US/Pacific, which is why the time is -8hr from 1970 (1969, 4pm).

The description shows 1969 7pm (a -5 hour offset from 1970), so their timezone 
is US/Eastern.
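The arithmetic above can be checked directly. From the outputs, Hive appears to treat the unquoted result as milliseconds since the Unix epoch when casting to TIMESTAMP and then renders it in the session timezone; that millisecond interpretation is an inference from the values shown, not a documented guarantee.

```python
from datetime import datetime, timedelta, timezone

# Without quotes, 1996-01-01 is integer subtraction, not a date literal:
assert 1996 - 1 - 1 == 1994
assert 2000 - 12 - 31 == 1957

# Inferred behavior: integer -> milliseconds past the epoch, shown in the
# session timezone (fixed offsets used here for simplicity).
def cast_int_to_timestamp(n_millis, utc_offset_hours):
    utc = datetime(1970, 1, 1, tzinfo=timezone.utc) + timedelta(milliseconds=n_millis)
    local = utc.astimezone(timezone(timedelta(hours=utc_offset_hours)))
    return local.strftime("%Y-%m-%d %H:%M:%S.%f")[:-3]

# US/Pacific (UTC-8 in winter) reproduces the commenter's output:
assert cast_int_to_timestamp(1996 - 1 - 1, -8) == "1969-12-31 16:00:01.994"
# US/Eastern (UTC-5) places the same kind of instant at 7pm, as in the description:
assert cast_int_to_timestamp(2000 - 1 - 1, -5) == "1969-12-31 19:00:01.998"
```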

 cast DATE as TIMESTAMP returns incorrect values
 ---

 Key: HIVE-10488
 URL: https://issues.apache.org/jira/browse/HIVE-10488
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 0.13.1
Reporter: N Campbell
Assignee: Chaoyu Tang

 same data in textfile works
 same data loaded into an ORC table does not
 connection property of tez/mr makes no difference.
 select rnum, cdt, cast (cdt as timestamp) from tdt
 0 null  null
 1 1996-01-01  1969-12-31 19:00:09.496
 2 2000-01-01  1969-12-31 19:00:10.957
 3 2000-12-31  1969-12-31 19:00:11.322
 vs
 0 null  null
 1 1996-01-01  1996-01-01 00:00:00.0
 2 2000-01-01  2000-01-01 00:00:00.0
 3 2000-12-31  2000-12-31 00:00:00.0
 create table  if not exists TDT ( RNUM int , CDT date   )
  STORED AS orc  ;
 insert overwrite table TDT select * from  text.TDT;
 0|\N
 1|1996-01-01
 2|2000-01-01
 3|2000-12-31



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10520) LLAP: Must reset small table result columns for Native Vectorization of Map Join

2015-04-30 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-10520:

Attachment: HIVE-10520.03.patch

Rebase to increase odds of tests passing due to long queue.

 LLAP: Must reset small table result columns for Native Vectorization of Map 
 Join
 

 Key: HIVE-10520
 URL: https://issues.apache.org/jira/browse/HIVE-10520
 Project: Hive
  Issue Type: Sub-task
  Components: Vectorization
Affects Versions: 1.2.0
Reporter: Matt McCline
Assignee: Matt McCline
Priority: Blocker
 Fix For: 1.2.0, 1.3.0

 Attachments: HIVE-10520.01.patch, HIVE-10520.02.patch, 
 HIVE-10520.03.patch


 Scratch columns are not getting reset by the input source, so native vector map 
 join operators must manually reset small table result columns.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9681) Extend HiveAuthorizationProvider to support partition-sets.

2015-04-30 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14521203#comment-14521203
 ] 

Sushanth Sowmyan commented on HIVE-9681:


+1, test failures are unrelated.

 Extend HiveAuthorizationProvider to support partition-sets.
 ---

 Key: HIVE-9681
 URL: https://issues.apache.org/jira/browse/HIVE-9681
 Project: Hive
  Issue Type: Bug
  Components: Security
Affects Versions: 0.14.0
Reporter: Mithun Radhakrishnan
Assignee: Mithun Radhakrishnan
 Attachments: HIVE-9681.1.patch, HIVE-9681.2.patch


 {{HiveAuthorizationProvider}} allows only for the authorization of a single 
 partition at a time. For instance, when the {{StorageBasedAuthProvider}} must 
 authorize an operation on a set of partitions (say from a 
 PreDropPartitionEvent), each partition's data-directory needs to be checked 
 individually. For N partitions, this results in N namenode calls.
 I'd like to add {{authorize()}} overloads that accept multiple partitions. 
 This will allow StorageBasedAuthProvider to make batched namenode calls. 
 P.S. There are 2 further optimizations possible:
 1. In the ideal case, we'd have a single call in 
 {{org.apache.hadoop.fs.FileSystem}} to check access for an array of Paths, 
 something like:
 {code:title=FileSystem.java|borderStyle=solid}
 @InterfaceAudience.LimitedPrivate({"HDFS", "Hive"})
 public void access(Path[] paths, FsAction mode) throws
     AccessControlException, FileNotFoundException, IOException
 {...}
 {code}
 2. We can go one better if we could retrieve partition-locations in DirectSQL 
 and use those for authorization. The EventListener-abstraction behind which 
 the AuthProviders operate makes this difficult. I can attempt to solve this 
 using a PartitionSpec and a call-back into the ObjectStore from 
 StorageBasedAuthProvider. I'll save this rigmarole for later.
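The N-calls-vs-one-call trade-off the description makes can be sketched abstractly (invented names, plain Python; not the HDFS or Hive APIs): batching turns N round trips into one, regardless of how many paths are authorized.

```python
# Toy model of the optimization: count "namenode" round trips.
class FakeNameNode:
    def __init__(self):
        self.calls = 0

    def access(self, paths, mode):
        self.calls += 1                      # one RPC regardless of len(paths)
        return {p: True for p in paths}

def authorize_each(nn, paths, mode):
    # Pre-patch shape: one namenode call per partition directory.
    return {p: nn.access([p], mode)[p] for p in paths}

def authorize_batched(nn, paths, mode):
    # Proposed shape: a single batched call.
    return nn.access(paths, mode)

nn = FakeNameNode()
paths = ["/warehouse/t/part=%d" % i for i in range(5)]

authorize_each(nn, paths, "READ")
assert nn.calls == 5                          # N calls for N partitions
nn.calls = 0
authorize_batched(nn, paths, "READ")
assert nn.calls == 1                          # one call total
```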



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10530) Aggregate stats cache: bug fixes for RDBMS path

2015-04-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14521050#comment-14521050
 ] 

Hive QA commented on HIVE-10530:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12729079/HIVE-10530.1.patch

{color:red}ERROR:{color} -1 due to 15 failed/errored test(s), 8827 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver-bucketmapjoin6.q-constprog_partitioner.q-infer_bucket_sort_dyn_part.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-external_table_with_space_in_location_path.q-infer_bucket_sort_merge.q-auto_sortmerge_join_16.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-groupby2.q-import_exported_table.q-bucketizedhiveinputformat.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-index_bitmap3.q-stats_counter_partitioned.q-temp_table_external.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_map_operators.q-join1.q-bucketmapjoin7.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_num_buckets.q-disable_merge_for_bucketing.q-uber_reduce.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_reducers_power_two.q-scriptfile1.q-scriptfile1_win.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-leftsemijoin_mr.q-load_hdfs_file_with_space_in_the_name.q-root_dir_external_table.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-list_bucket_dml_10.q-bucket_num_reducers.q-bucket6.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-load_fs2.q-file_with_header_footer.q-ql_rewrite_gbtoidx_cbo_1.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-parallel_orderby.q-reduce_deduplicate.q-ql_rewrite_gbtoidx_cbo_2.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-ql_rewrite_gbtoidx.q-smb_mapjoin_8.q - did not produce a 
TEST-*.xml file
TestMinimrCliDriver-schemeAuthority2.q-bucket4.q-input16_cc.q-and-1-more - did 
not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_extrapolate_part_stats_partial
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_extrapolate_part_stats_partial_ndv
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3657/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3657/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3657/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 15 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12729079 - PreCommit-HIVE-TRUNK-Build

 Aggregate stats cache: bug fixes for RDBMS path
 ---

 Key: HIVE-10530
 URL: https://issues.apache.org/jira/browse/HIVE-10530
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 1.2.0
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 1.2.0

 Attachments: HIVE-10530.1.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10071) CBO (Calcite Return Path): Join to MultiJoin rule

2015-04-30 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-10071:

Affects Version/s: 1.2.0

 CBO (Calcite Return Path): Join to MultiJoin rule
 -

 Key: HIVE-10071
 URL: https://issues.apache.org/jira/browse/HIVE-10071
 Project: Hive
  Issue Type: Sub-task
  Components: CBO
Affects Versions: 1.2.0
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez
 Fix For: 1.2.0

 Attachments: HIVE-10071.patch


 CBO return path: auto_join3.q can be used to reproduce the problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-6852) JDBC client connections hang at TSaslTransport

2015-04-30 Thread Vladimir Kovalchuk (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14521165#comment-14521165
 ] 

Vladimir Kovalchuk commented on HIVE-6852:
--

The problem was that the HiveServer2 conf file had authentication = NOSASL, but the 
client was not prepared for this situation. The workaround is to add ;auth=noSasl 
to the JDBC URL.
I would say it's definitely a bug (at the protocol-specification level, I'm afraid; 
it needs some re-design), and at the very least it needs to be documented.
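The hang mechanism can be modeled abstractly (plain Python queues, not Thrift code): a NOSASL server never answers the SASL negotiation the client starts, so the client's blocking read after "receiving last message" waits forever. Here the wait is bounded with a timeout so the sketch terminates.

```python
from queue import Queue, Empty

def sasl_client_handshake(server_replies, timeout=0.05):
    """Toy SASL client: after sending START/COMPLETE it blocks reading the
    server's final message. A real client blocks forever; we time out."""
    try:
        return server_replies.get(timeout=timeout)
    except Empty:
        return None                       # models the observed hang

nosasl_server = Queue()                   # NOSASL server writes no SASL reply
assert sasl_client_handshake(nosasl_server) is None   # client would hang

sasl_server = Queue()                     # SASL-enabled server replies
sasl_server.put("SASL COMPLETE")
assert sasl_client_handshake(sasl_server) == "SASL COMPLETE"
```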

 JDBC client connections hang at TSaslTransport
 --

 Key: HIVE-6852
 URL: https://issues.apache.org/jira/browse/HIVE-6852
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Reporter: jay vyas

 I've noticed that when there is an underlying issue in connecting a client to 
 the JDBC interface of the HiveServer2 to run queries, you get a hang after 
 the thrift portion, at least in certain scenarios: 
 Turning log4j to DEBUG, you can see the following when trying to get a 
 connection using:
 {noformat}
 Connection jdbc = 
 DriverManager.getConnection(this.con,hive,password);
 jdbc:hive2://localhost:1/default,
 {noformat}
 The logs get to here before the hang:
 {noformat}
 0[main] DEBUG org.apache.thrift.transport.TSaslTransport  - opening 
 transport org.apache.thrift.transport.TSaslClientTransport@219ba640
 0 [main] DEBUG org.apache.thrift.transport.TSaslTransport  - opening 
 transport org.apache.thrift.transport.TSaslClientTransport@219ba640
 3[main] DEBUG org.apache.thrift.transport.TSaslClientTransport  - Sending 
 mechanism name PLAIN and initial response of length 14
 3 [main] DEBUG org.apache.thrift.transport.TSaslClientTransport  - Sending 
 mechanism name PLAIN and initial response of length 14
 5[main] DEBUG org.apache.thrift.transport.TSaslTransport  - CLIENT: 
 Writing message with status START and payload length 5
 5 [main] DEBUG org.apache.thrift.transport.TSaslTransport  - CLIENT: Writing 
 message with status START and payload length 5
 5[main] DEBUG org.apache.thrift.transport.TSaslTransport  - CLIENT: 
 Writing message with status COMPLETE and payload length 14
 5 [main] DEBUG org.apache.thrift.transport.TSaslTransport  - CLIENT: Writing 
 message with status COMPLETE and payload length 14
 5[main] DEBUG org.apache.thrift.transport.TSaslTransport  - CLIENT: Start 
 message handled
 5 [main] DEBUG org.apache.thrift.transport.TSaslTransport  - CLIENT: Start 
 message handled
 5[main] DEBUG org.apache.thrift.transport.TSaslTransport  - CLIENT: Main 
 negotiation loop complete
 5 [main] DEBUG org.apache.thrift.transport.TSaslTransport  - CLIENT: Main 
 negotiation loop complete
 6[main] DEBUG org.apache.thrift.transport.TSaslTransport  - CLIENT: SASL 
 Client receiving last message
 6 [main] DEBUG org.apache.thrift.transport.TSaslTransport  - CLIENT: SASL 
 Client receiving last message
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10551) OOM when running query_89 with vectorization on hybridgrace=false

2015-04-30 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HIVE-10551:

Attachment: hive_10551.png

Snapshot shows 680 MB of data already present; it ends up throwing OOM when 
it tries to allocate 1800+ MB for the sorter.

 OOM when running query_89 with vectorization on hybridgrace=false
 ---

 Key: HIVE-10551
 URL: https://issues.apache.org/jira/browse/HIVE-10551
 Project: Hive
  Issue Type: Bug
Reporter: Rajesh Balamohan
 Attachments: hive_10551.png


 - TPC-DS Query_89 @ 10 TB scale
 - Trunk version of Hive + Tez 0.7.0-SNAPSHOT
 - Additional settings ( hive.vectorized.groupby.maxentries=1024 , 
 tez.runtime.io.sort.factor=200  tez.runtime.io.sort.mb=1800 
 hive.tez.container.size=4096 ,hive.mapjoin.hybridgrace.hashtable=false )
 Will attach the profiler snapshot asap.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8339) Job status not found after 100% succeeded mapreduce

2015-04-30 Thread Radim Kubacki (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14521279#comment-14521279
 ] 

Radim Kubacki commented on HIVE-8339:
-

This one is basically a duplicate of HIVE-2708 and is also reported against Hadoop 
as https://issues.apache.org/jira/browse/MAPREDUCE-6312. If you look at the 
latter you will see that we can reproduce this problem on our cluster with Hive 
0.14.0 and 1.1.0 using the YARN scheduler.

There is a patch against 0.14.0 - 
https://github.com/radimk/hive/commit/3e5c7c47af69b27dfae4c1280a89346e843bd5f3
and against 1.1.0 - 
https://github.com/radimk/hive/commit/bf4d047274fb3fddd9bcfe8432154cda222e6582

What do we need to do to get it accepted, or to help find an alternate way to 
fix this issue?
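One generic shape for tolerating the window in which a finished job has left the ResourceManager but is not yet visible in the history server is to retry the status lookup. The sketch below is illustrative only (invented names, plain Python; it is not the code in the linked patches):

```python
import time

def get_job_status(fetch, retries=3, delay=0.0):
    """Retry a status lookup that can transiently fail while a finished job
    migrates to the job history server. 'fetch' is any callable that raises
    IOError on a miss."""
    last_err = None
    for _ in range(retries):
        try:
            return fetch()
        except IOError as err:
            last_err = err
            time.sleep(delay)
    raise last_err

attempts = {"n": 0}
def flaky_fetch():
    # Fails twice (history server not yet serving the job), then succeeds.
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise IOError("Could not find status of job:job_1412263771568_0002")
    return "SUCCEEDED"

assert get_job_status(flaky_fetch) == "SUCCEEDED"
assert attempts["n"] == 3
```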

 Job status not found after 100% succeeded mapreduce
 ---

 Key: HIVE-8339
 URL: https://issues.apache.org/jira/browse/HIVE-8339
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.1
 Environment: Hadoop 2.4.0, Hive 0.13.1.
 Amazon EMR cluster of 9 i2.4xlarge nodes.
 800+GB of data in HDFS.
Reporter: Valera Chevtaev

 According to the logs it seems that the job succeeded 100% for both map and 
 reduce, but Hive then wasn't able to get the status of the job from the job 
 history server.
 Hive logs:
 2014-10-03 07:57:26,593 INFO  [main]: exec.Task 
 (SessionState.java:printInfo(536)) - 2014-10-03 07:57:26,593 Stage-1 map = 
 100%, reduce = 99%, Cumulative CPU 872541.02 sec
 2014-10-03 07:57:47,447 INFO  [main]: exec.Task 
 (SessionState.java:printInfo(536)) - 2014-10-03 07:57:47,446 Stage-1 map = 
 100%, reduce = 100%, Cumulative CPU 872566.55 sec
 2014-10-03 07:57:48,710 INFO  [main]: mapred.ClientServiceDelegate 
 (ClientServiceDelegate.java:getProxy(273)) - Application state is completed. 
 FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
 2014-10-03 07:57:48,716 ERROR [main]: exec.Task 
 (SessionState.java:printError(545)) - Ended Job = job_1412263771568_0002 with 
 exception 'java.io.IOException(Could not find status of 
 job:job_1412263771568_0002)'
 java.io.IOException: Could not find status of job:job_1412263771568_0002
at 
 org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:294)
at 
 org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:547)
at 
 org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:426)
at 
 org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:136)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
at 
 org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1503)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1270)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1088)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:911)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:901)
at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:275)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:227)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:430)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:366)
at 
 org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:463)
at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:479)
at 
 org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:759)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:697)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:636)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
 2014-10-03 07:57:48,763 ERROR [main]: ql.Driver 
 (SessionState.java:printError(545)) - FAILED: Execution Error, return code 1 
 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9365) The Metastore should take port configuration from hive-site.xml

2015-04-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14521336#comment-14521336
 ] 

Hive QA commented on HIVE-9365:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12729250/HIVE-9365.03.patch

{color:red}ERROR:{color} -1 due to 13 failed/errored test(s), 8831 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver-bucketmapjoin6.q-constprog_partitioner.q-infer_bucket_sort_dyn_part.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-external_table_with_space_in_location_path.q-infer_bucket_sort_merge.q-auto_sortmerge_join_16.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-groupby2.q-import_exported_table.q-bucketizedhiveinputformat.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-index_bitmap3.q-stats_counter_partitioned.q-temp_table_external.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_map_operators.q-join1.q-bucketmapjoin7.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_num_buckets.q-disable_merge_for_bucketing.q-uber_reduce.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_reducers_power_two.q-scriptfile1.q-scriptfile1_win.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-leftsemijoin_mr.q-load_hdfs_file_with_space_in_the_name.q-root_dir_external_table.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-list_bucket_dml_10.q-bucket_num_reducers.q-bucket6.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-load_fs2.q-file_with_header_footer.q-ql_rewrite_gbtoidx_cbo_1.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-parallel_orderby.q-reduce_deduplicate.q-ql_rewrite_gbtoidx_cbo_2.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-ql_rewrite_gbtoidx.q-smb_mapjoin_8.q - did not produce a 
TEST-*.xml file
TestMinimrCliDriver-schemeAuthority2.q-bucket4.q-input16_cc.q-and-1-more - did 
not produce a TEST-*.xml file
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3660/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3660/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3660/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 13 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12729250 - PreCommit-HIVE-TRUNK-Build

 The Metastore should take port configuration from hive-site.xml
 ---

 Key: HIVE-9365
 URL: https://issues.apache.org/jira/browse/HIVE-9365
 Project: Hive
  Issue Type: Improvement
Reporter: Nicolas Thiébaud
Assignee: Reuben Kuhnert
Priority: Minor
  Labels: metastore
 Attachments: HIVE-9365.01.patch, HIVE-9365.02.patch, 
 HIVE-9365.03.patch

   Original Estimate: 3h
  Remaining Estimate: 3h

 As opposed to the CLI. Having this configuration in the launcher script 
 creates fragmentation and is not consistent with the way the Hive stack 
 is configured.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10530) Aggregate stats cache: bug fixes for RDBMS path

2015-04-30 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14521410#comment-14521410
 ] 

Vaibhav Gumashta commented on HIVE-10530:
-

The last 2 test failures look related. Looking into them.

 Aggregate stats cache: bug fixes for RDBMS path
 ---

 Key: HIVE-10530
 URL: https://issues.apache.org/jira/browse/HIVE-10530
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 1.2.0
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 1.2.0

 Attachments: HIVE-10530.1.patch








[jira] [Commented] (HIVE-10529) Remove references to tez task context before storing operator plan in object cache

2015-04-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14521239#comment-14521239
 ] 

Hive QA commented on HIVE-10529:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12729137/HIVE-10529.2.patch

{color:red}ERROR:{color} -1 due to 15 failed/errored test(s), 8826 tests 
executed
*Failed tests:*
{noformat}
TestCustomAuthentication - did not produce a TEST-*.xml file
TestMinimrCliDriver-bucketmapjoin6.q-constprog_partitioner.q-infer_bucket_sort_dyn_part.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-external_table_with_space_in_location_path.q-infer_bucket_sort_merge.q-auto_sortmerge_join_16.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-groupby2.q-import_exported_table.q-bucketizedhiveinputformat.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-index_bitmap3.q-stats_counter_partitioned.q-temp_table_external.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_map_operators.q-join1.q-bucketmapjoin7.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_num_buckets.q-disable_merge_for_bucketing.q-uber_reduce.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_reducers_power_two.q-scriptfile1.q-scriptfile1_win.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-leftsemijoin_mr.q-load_hdfs_file_with_space_in_the_name.q-root_dir_external_table.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-list_bucket_dml_10.q-bucket_num_reducers.q-bucket6.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-load_fs2.q-file_with_header_footer.q-ql_rewrite_gbtoidx_cbo_1.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-parallel_orderby.q-reduce_deduplicate.q-ql_rewrite_gbtoidx_cbo_2.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-ql_rewrite_gbtoidx.q-smb_mapjoin_8.q - did not produce a 
TEST-*.xml file
TestMinimrCliDriver-schemeAuthority2.q-bucket4.q-input16_cc.q-and-1-more - did 
not produce a TEST-*.xml file
org.apache.hive.jdbc.TestSSL.testSSLConnectionWithProperty
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3659/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3659/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3659/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 15 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12729137 - PreCommit-HIVE-TRUNK-Build

 Remove references to tez task context before storing operator plan in object 
 cache
 --

 Key: HIVE-10529
 URL: https://issues.apache.org/jira/browse/HIVE-10529
 Project: Hive
  Issue Type: Bug
Reporter: Rajesh Balamohan
Assignee: Rajesh Balamohan
 Attachments: HIVE-10529.1.patch, HIVE-10529.2.patch, 
 hive_hashtable_loader.png








[jira] [Commented] (HIVE-10552) hive 1.1.0 rename column fails: Invalid method name: 'alter_table_with_cascade'

2015-04-30 Thread Chaoyu Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14521512#comment-14521512
 ] 

Chaoyu Tang commented on HIVE-10552:


As far as I know, Cloudera 5.3.3 ships Hive 0.13.1, which does not support 
alter_table_with_cascade. ALTER TABLE ... CASCADE (HIVE-8839) is in Hive 1.1, 
and alter_table_with_cascade is a new HMS API. I wonder whether the problem 
comes from mixing Cloudera 5.3.3 with Hive 1.1. Could you give more detail 
about what you did?

 hive 1.1.0 rename column fails: Invalid method name: 
 'alter_table_with_cascade'
 ---

 Key: HIVE-10552
 URL: https://issues.apache.org/jira/browse/HIVE-10552
 Project: Hive
  Issue Type: Bug
  Components: Database/Schema
Affects Versions: 1.1.0
 Environment: centos 6.6, cloudera 5.3.3
Reporter: David Watzke
Priority: Blocker

 Hi,
 we're trying out hive 1.1.0 with cloudera 5.3.3 and since hive 1.0.0 there's 
 (what appears to be) a regression.
 This ALTER command that renames a table column used to work fine in older 
 versions, but in hive 1.1.0 it throws this error:
 hive> CREATE TABLE test_change (a int, b int, c int);
 OK
 Time taken: 2.303 seconds
 hive> ALTER TABLE test_change CHANGE a a1 INT;
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. Invalid method 
 name: 'alter_table_with_cascade'





[jira] [Commented] (HIVE-10453) HS2 leaking open file descriptors when using UDFs

2015-04-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14521545#comment-14521545
 ] 

Hive QA commented on HIVE-10453:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12729275/HIVE-10453.1.patch

{color:red}ERROR:{color} -1 due to 74 failed/errored test(s), 7959 tests 
executed
*Failed tests:*
{noformat}
TestMiniSparkOnYarnCliDriver - did not produce a TEST-*.xml file
TestMiniTezCliDriver-auto_join30.q-vector_data_types.q-filter_join_breaktask.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-auto_sortmerge_join_13.q-alter_merge_2_orc.q-insert_values_dynamic_partitioned.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-dynpart_sort_optimization2.q-tez_bmj_schema_evolution.q-orc_merge5.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-enforce_order.q-auto_join1.q-vector_decimal_2.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-orc_vectorization_ppd.q-custom_input_output_format.q-vector_groupby_reduce.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-script_pipe.q-insert_values_non_partitioned.q-subquery_in.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-tez_union2.q-unionDistinct_1.q-auto_sortmerge_join_8.q-and-8-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-update_orig_table.q-union2.q-vectorized_bucketmapjoin1.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-vector_coalesce.q-auto_sortmerge_join_7.q-dynamic_partition_pruning.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-vector_decimal_10_0.q-vector_decimal_trailing.q-lvj_mapjoin.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-vector_decimal_round.q-cbo_windowing.q-tez_schema_evolution.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-vector_distinct_2.q-load_dyn_part2.q-join1.q-and-12-more - 
did not produce a TEST-*.xml file
TestMiniTezCliDriver-vector_interval_2.q-orc_merge6.q-mapreduce1.q-and-12-more 
- did not produce a TEST-*.xml file
TestMiniTezCliDriver-vector_partition_diff_num_cols.q-tez_joins_explain.q-vector_decimal_aggregate.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-vectorization_10.q-vector_partitioned_date_time.q-vector_non_string_partition.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-vectorization_13.q-update_after_multiple_inserts.q-mapreduce2.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-vectorization_16.q-mapjoin_mapjoin.q-groupby2.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-vectorized_parquet.q-vector_char_mapjoin1.q-tez_insert_overwrite_local_directory_1.q-and-12-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-bucketmapjoin6.q-constprog_partitioner.q-infer_bucket_sort_dyn_part.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-external_table_with_space_in_location_path.q-infer_bucket_sort_merge.q-auto_sortmerge_join_16.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-groupby2.q-import_exported_table.q-bucketizedhiveinputformat.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-index_bitmap3.q-stats_counter_partitioned.q-temp_table_external.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_map_operators.q-join1.q-bucketmapjoin7.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_num_buckets.q-disable_merge_for_bucketing.q-uber_reduce.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_reducers_power_two.q-scriptfile1.q-scriptfile1_win.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-leftsemijoin_mr.q-load_hdfs_file_with_space_in_the_name.q-root_dir_external_table.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-list_bucket_dml_10.q-bucket_num_reducers.q-bucket6.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-load_fs2.q-file_with_header_footer.q-ql_rewrite_gbtoidx_cbo_1.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-parallel_orderby.q-reduce_deduplicate.q-ql_rewrite_gbtoidx_cbo_2.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-ql_rewrite_gbtoidx.q-smb_mapjoin_8.q - did not produce a 
TEST-*.xml file
TestMinimrCliDriver-schemeAuthority2.q-bucket4.q-input16_cc.q-and-1-more - did 
not produce a TEST-*.xml file
TestSparkCliDriver-auto_join24.q-vector_decimal_aggregate.q-bucketmapjoin_negative.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-auto_join30.q-vector_data_types.q-filter_join_breaktask.q-and-12-more
 - did not produce a TEST-*.xml file

[jira] [Commented] (HIVE-10061) HiveConf Should not be used as part of the HS2 client side code

2015-04-30 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14521569#comment-14521569
 ] 

Vaibhav Gumashta commented on HIVE-10061:
-

+1

 HiveConf Should not be used as part of the HS2 client side code
 ---

 Key: HIVE-10061
 URL: https://issues.apache.org/jira/browse/HIVE-10061
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2, JDBC
Affects Versions: 1.3.0
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Fix For: 1.3.0

 Attachments: HIVE-10061.1.patch


 HiveConf crept into the JDBC driver via the embedded mode check. 
 if (isEmbeddedMode) {
   EmbeddedThriftBinaryCLIService embeddedClient = new 
 EmbeddedThriftBinaryCLIService();
   embeddedClient.init(new HiveConf());
   client = embeddedClient;
 } else {
 
 Ideally we'd like to keep driver code free of these dependencies. 





[jira] [Updated] (HIVE-10061) HiveConf Should not be used as part of the HS2 client side code

2015-04-30 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-10061:

Component/s: JDBC
 HiveServer2

 HiveConf Should not be used as part of the HS2 client side code
 ---

 Key: HIVE-10061
 URL: https://issues.apache.org/jira/browse/HIVE-10061
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2, JDBC
Affects Versions: 1.3.0
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Fix For: 1.3.0

 Attachments: HIVE-10061.1.patch


 HiveConf crept into the JDBC driver via the embedded mode check. 
 if (isEmbeddedMode) {
   EmbeddedThriftBinaryCLIService embeddedClient = new 
 EmbeddedThriftBinaryCLIService();
   embeddedClient.init(new HiveConf());
   client = embeddedClient;
 } else {
 
 Ideally we'd like to keep driver code free of these dependencies. 





[jira] [Commented] (HIVE-10140) Window boundary is not compared correctly

2015-04-30 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14521531#comment-14521531
 ] 

Aihua Xu commented on HIVE-10140:
-

Yes, there are some issues there. It should work like the Oracle analytic 
window functions, as described in:

http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions004.htm#SQLRF06174
http://www.orafaq.com/node/55

 Window boundary is not compared correctly
 -

 Key: HIVE-10140
 URL: https://issues.apache.org/jira/browse/HIVE-10140
 Project: Hive
  Issue Type: Bug
  Components: PTF-Windowing
Affects Versions: 1.0.0
Reporter: Yi Zhang
Assignee: Aihua Xu
Priority: Minor

 "ROWS BETWEEN 10 PRECEDING AND 2 PRECEDING" is not handled correctly.
 Underlying error: Window range invalid, start boundary is greater than end 
 boundary: window(start=range(10 PRECEDING), end=range(2 PRECEDING))
 If I change it to "2 PRECEDING AND 10 PRECEDING", the syntax is accepted but 
 the results are 0, of course.
 Use case: during analysis it is sometimes desirable to design the window to 
 filter out the most recent events, e.g. when the events' responses are not 
 available yet. There is a workaround, but it is better to fix the bug.
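To make the intended semantics concrete, here is a minimal, self-contained 
sketch (hypothetical names, not Hive code) of what a frame of ROWS BETWEEN 10 
PRECEDING AND 2 PRECEDING should compute for each row:

```java
// Hypothetical illustration of the frame "ROWS BETWEEN 10 PRECEDING AND
// 2 PRECEDING": for row i, aggregate over rows [i-10, i-2]. This is the
// boundary ordering the report says Hive rejects even though it is valid.
public class FrameDemo {
    // Sums rows[i - startPreceding .. i - endPreceding], clamped at index 0.
    static long frameSum(long[] rows, int i, int startPreceding, int endPreceding) {
        int lo = Math.max(0, i - startPreceding);
        int hi = i - endPreceding; // inclusive upper bound of the frame
        long sum = 0;
        for (int j = lo; j <= hi; j++) {
            sum += rows[j];
        }
        return sum;
    }

    public static void main(String[] args) {
        long[] rows = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12};
        // At row index 11 the frame covers indices 1..9, i.e. 2 + 3 + ... + 10.
        System.out.println(frameSum(rows, 11, 10, 2)); // prints 54
    }
}
```

The frame excludes the current row and its immediate predecessor, which is 
exactly the "filter the most recent events" use case described above.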





[jira] [Updated] (HIVE-10061) HiveConf Should not be used as part of the HS2 client side code

2015-04-30 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-10061:

Fix Version/s: 1.3.0

 HiveConf Should not be used as part of the HS2 client side code
 ---

 Key: HIVE-10061
 URL: https://issues.apache.org/jira/browse/HIVE-10061
 Project: Hive
  Issue Type: Bug
Affects Versions: 1.3.0
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Fix For: 1.3.0

 Attachments: HIVE-10061.1.patch


 HiveConf crept into the JDBC driver via the embedded mode check. 
 if (isEmbeddedMode) {
   EmbeddedThriftBinaryCLIService embeddedClient = new 
 EmbeddedThriftBinaryCLIService();
   embeddedClient.init(new HiveConf());
   client = embeddedClient;
 } else {
 
 Ideally we'd like to keep driver code free of these dependencies. 





[jira] [Commented] (HIVE-9392) JoinStatsRule miscalculates join cardinality as incorrect NDV is used due to column names having duplicated fqColumnName

2015-04-30 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14521792#comment-14521792
 ] 

Pengcheng Xiong commented on HIVE-9392:
---

[~prasanth_j] and [~mohammedmostafa], could you please let [~jpullokkaran] and 
me know whether this has already been resolved or needs further work? Thanks.

 JoinStatsRule miscalculates join cardinality as incorrect NDV is used due to 
 column names having duplicated fqColumnName
 

 Key: HIVE-9392
 URL: https://issues.apache.org/jira/browse/HIVE-9392
 Project: Hive
  Issue Type: Bug
  Components: Physical Optimizer
Affects Versions: 0.14.0
Reporter: Mostafa Mokhtar
Assignee: Prasanth Jayachandran
Priority: Critical
 Attachments: HIVE-9392.1.patch, HIVE-9392.2.patch


 In JoinStatsRule.process the join column statistics are stored in the HashMap 
 joinedColStats. The key used, ColStatistics.fqColName, is duplicated between 
 join columns in the same vertex; as a result distinctVals ends up with 
 duplicated values, which negatively affects the join cardinality estimation.
 The duplicate keys are usually named KEY.reducesinkkey0.
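The failure mode described above can be illustrated with a minimal sketch 
(hypothetical, not Hive's actual code): when two different join columns share 
one fully-qualified key, the second HashMap.put silently replaces the first 
column's entry.

```java
import java.util.HashMap;
import java.util.Map;

// Two join columns in the same vertex both keyed as "KEY.reducesinkkey0":
// the second put overwrites the first, so one column's NDV is lost and the
// cardinality estimate is computed from the wrong statistics.
public class DuplicateKeyDemo {
    // Returns the NDV that survives after both columns are stored under
    // the same duplicated key.
    static long survivingNdv() {
        Map<String, Long> joinedColStats = new HashMap<>();
        joinedColStats.put("KEY.reducesinkkey0", 1000L); // NDV of join column A
        joinedColStats.put("KEY.reducesinkkey0", 5L);    // column B clobbers A
        return joinedColStats.get("KEY.reducesinkkey0"); // only B's NDV remains
    }

    public static void main(String[] args) {
        System.out.println(survivingNdv()); // prints 5
    }
}
```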





[jira] [Updated] (HIVE-10403) Add n-way join support for Hybrid Grace Hash Join

2015-04-30 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-10403:
-
Attachment: HIVE-10403.08.patch

Replace patch 08 with fix

 Add n-way join support for Hybrid Grace Hash Join
 -

 Key: HIVE-10403
 URL: https://issues.apache.org/jira/browse/HIVE-10403
 Project: Hive
  Issue Type: Improvement
Affects Versions: 1.2.0
Reporter: Wei Zheng
Assignee: Wei Zheng
 Attachments: HIVE-10403.01.patch, HIVE-10403.02.patch, 
 HIVE-10403.03.patch, HIVE-10403.04.patch, HIVE-10403.06.patch, 
 HIVE-10403.07.patch, HIVE-10403.08.patch


 Currently Hybrid Grace Hash Join only supports 2-way join (one big table and 
 one small table). This task will enable n-way join (one big table and 
 multiple small tables).





[jira] [Commented] (HIVE-10502) Cannot specify log4j.properties file location in Beeline

2015-04-30 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14521970#comment-14521970
 ] 

Szehon Ho commented on HIVE-10502:
--

Thanks for the confirmation, Chaoyu. The first part is quite unfortunate; it 
seems it will affect all the Hive scripts and lead to 'hadoop version' printing 
a debug message to a random log4j location (whatever it picks up).

As for the second part, I tracked it down to HIVE-8772. [~thejas], that JIRA 
hard-codes the log4j properties file name as 'beeline-log4j.properties' in the 
conf directory, which I don't feel is well-known. At this point, would it be 
valuable to make it configurable by initializing Log4j in the Beeline Java 
code, giving the same user experience as other Hive components (--hiveconf 
hive.log4j.file)?

 Cannot specify log4j.properties file location in Beeline
 

 Key: HIVE-10502
 URL: https://issues.apache.org/jira/browse/HIVE-10502
 Project: Hive
  Issue Type: Bug
  Components: Beeline
Affects Versions: 1.1.0
Reporter: Szehon Ho
Assignee: Chaoyu Tang

 In HiveCLI, HiveServer2, HMS, etc, the following is called early in the 
 startup to initialize log4j logging: LogUtils.initHiveLog4j().
 However, seems like this is not the case in Beeline, which also needs log4j 
 like as follows:
 {noformat}
   at org.apache.log4j.LogManager.clinit(LogManager.java:127)
   at org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:66)
   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:270)
   at 
 org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:156)
   at 
 org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:132)
   at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:657)
   at org.apache.hadoop.util.VersionInfo.clinit(VersionInfo.java:37)
 {noformat}
 It would be good to be able to specify it, so it doesn't pick up the first one 
 on the classpath.





[jira] [Updated] (HIVE-10403) Add n-way join support for Hybrid Grace Hash Join

2015-04-30 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-10403:
-
Attachment: (was: HIVE-10403.08.patch)

 Add n-way join support for Hybrid Grace Hash Join
 -

 Key: HIVE-10403
 URL: https://issues.apache.org/jira/browse/HIVE-10403
 Project: Hive
  Issue Type: Improvement
Affects Versions: 1.2.0
Reporter: Wei Zheng
Assignee: Wei Zheng
 Attachments: HIVE-10403.01.patch, HIVE-10403.02.patch, 
 HIVE-10403.03.patch, HIVE-10403.04.patch, HIVE-10403.06.patch, 
 HIVE-10403.07.patch, HIVE-10403.08.patch


 Currently Hybrid Grace Hash Join only supports 2-way join (one big table and 
 one small table). This task will enable n-way join (one big table and 
 multiple small tables).





[jira] [Updated] (HIVE-10140) Window boundary is not compared correctly

2015-04-30 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-10140:

Attachment: HIVE-10140.patch

 Window boundary is not compared correctly
 -

 Key: HIVE-10140
 URL: https://issues.apache.org/jira/browse/HIVE-10140
 Project: Hive
  Issue Type: Bug
  Components: PTF-Windowing
Affects Versions: 1.0.0
Reporter: Yi Zhang
Assignee: Aihua Xu
Priority: Minor
 Attachments: HIVE-10140.patch


 "ROWS BETWEEN 10 PRECEDING AND 2 PRECEDING" is not handled correctly.
 Underlying error: Window range invalid, start boundary is greater than end 
 boundary: window(start=range(10 PRECEDING), end=range(2 PRECEDING))
 If I change it to "2 PRECEDING AND 10 PRECEDING", the syntax is accepted but 
 the results are 0, of course.
 Use case: during analysis it is sometimes desirable to design the window to 
 filter out the most recent events, e.g. when the events' responses are not 
 available yet. There is a workaround, but it is better to fix the bug.





[jira] [Commented] (HIVE-9392) JoinStatsRule miscalculates join cardinality as incorrect NDV is used due to column names having duplicated fqColumnName

2015-04-30 Thread Mostafa Mokhtar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14521841#comment-14521841
 ] 

Mostafa Mokhtar commented on HIVE-9392:
---

[~pxiong]
No, this is not fixed.

 JoinStatsRule miscalculates join cardinality as incorrect NDV is used due to 
 column names having duplicated fqColumnName
 

 Key: HIVE-9392
 URL: https://issues.apache.org/jira/browse/HIVE-9392
 Project: Hive
  Issue Type: Bug
  Components: Physical Optimizer
Affects Versions: 0.14.0
Reporter: Mostafa Mokhtar
Assignee: Prasanth Jayachandran
Priority: Critical
 Attachments: HIVE-9392.1.patch, HIVE-9392.2.patch


 In JoinStatsRule.process the join column statistics are stored in the HashMap 
 joinedColStats. The key used, ColStatistics.fqColName, is duplicated between 
 join columns in the same vertex; as a result distinctVals ends up with 
 duplicated values, which negatively affects the join cardinality estimation.
 The duplicate keys are usually named KEY.reducesinkkey0.





[jira] [Commented] (HIVE-9508) MetaStore client socket connection should have a lifetime

2015-04-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14521808#comment-14521808
 ] 

Hive QA commented on HIVE-9508:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12729294/HIVE-9508.4.patch

{color:red}ERROR:{color} -1 due to 13 failed/errored test(s), 8832 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver-bucketmapjoin6.q-constprog_partitioner.q-infer_bucket_sort_dyn_part.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-external_table_with_space_in_location_path.q-infer_bucket_sort_merge.q-auto_sortmerge_join_16.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-groupby2.q-import_exported_table.q-bucketizedhiveinputformat.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-index_bitmap3.q-stats_counter_partitioned.q-temp_table_external.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_map_operators.q-join1.q-bucketmapjoin7.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_num_buckets.q-disable_merge_for_bucketing.q-uber_reduce.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_reducers_power_two.q-scriptfile1.q-scriptfile1_win.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-leftsemijoin_mr.q-load_hdfs_file_with_space_in_the_name.q-root_dir_external_table.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-list_bucket_dml_10.q-bucket_num_reducers.q-bucket6.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-load_fs2.q-file_with_header_footer.q-ql_rewrite_gbtoidx_cbo_1.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-parallel_orderby.q-reduce_deduplicate.q-ql_rewrite_gbtoidx_cbo_2.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-ql_rewrite_gbtoidx.q-smb_mapjoin_8.q - did not produce a 
TEST-*.xml file
TestMinimrCliDriver-schemeAuthority2.q-bucket4.q-input16_cc.q-and-1-more - did 
not produce a TEST-*.xml file
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3663/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3663/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3663/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 13 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12729294 - PreCommit-HIVE-TRUNK-Build

 MetaStore client socket connection should have a lifetime
 -

 Key: HIVE-9508
 URL: https://issues.apache.org/jira/browse/HIVE-9508
 Project: Hive
  Issue Type: Sub-task
  Components: CLI, Metastore
Reporter: Thiruvel Thirumoolan
Assignee: Thiruvel Thirumoolan
  Labels: metastore, rolling_upgrade
 Fix For: 1.2.0

 Attachments: HIVE-9508.1.patch, HIVE-9508.2.patch, HIVE-9508.3.patch, 
 HIVE-9508.4.patch


 Currently HiveMetaStoreClient (or SessionHMSC) stays connected to one 
 Metastore server until the connection is closed or there is a problem. I would 
 like to introduce the concept of a MetaStore client socket lifetime: the MS 
 client will reconnect once the socket lifetime is reached. This will help 
 during rolling upgrades of the Metastore.
 When there are multiple Metastore servers behind a VIP (load balancer), it is 
 easy to take one server out of rotation, wait 10+ minutes until all existing 
 connections die down (if the lifetime is, say, 5 minutes), and then update the 
 server.
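The lifetime idea can be sketched as follows. This is a hypothetical 
illustration, not the attached patch: before each call, the client checks how 
long the socket has been open and reconnects once the configured lifetime is 
exceeded.

```java
// Hypothetical sketch of a metastore client with a socket lifetime: once a
// connection has been open longer than lifetimeMillis, the next call drops
// it and opens a fresh one, so drained servers eventually lose all clients.
public class LifetimeClient {
    private final long lifetimeMillis;
    private long connectedAt;
    private int reconnects = 0;

    LifetimeClient(long lifetimeMillis, long now) {
        this.lifetimeMillis = lifetimeMillis;
        this.connectedAt = now; // initial connection timestamp
    }

    // Called before each RPC: reconnect if the socket lifetime is exceeded.
    void ensureFresh(long now) {
        if (now - connectedAt >= lifetimeMillis) {
            reconnects++;        // stand-in for close() + reopen()
            connectedAt = now;   // lifetime restarts on the new socket
        }
    }

    int getReconnects() { return reconnects; }

    public static void main(String[] args) {
        // 5-minute lifetime, connected at t=0.
        LifetimeClient c = new LifetimeClient(5 * 60_000L, 0L);
        c.ensureFresh(60_000L);     // t=1min: still fresh, no reconnect
        c.ensureFresh(6 * 60_000L); // t=6min: lifetime exceeded, reconnect
        System.out.println(c.getReconnects()); // prints 1
    }
}
```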





[jira] [Updated] (HIVE-10120) Disallow create table with dot/colon in column name

2015-04-30 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-10120:

Component/s: Parser

 Disallow create table with dot/colon in column name
 ---

 Key: HIVE-10120
 URL: https://issues.apache.org/jira/browse/HIVE-10120
 Project: Hive
  Issue Type: Improvement
  Components: Parser
Reporter: Pengcheng Xiong
Assignee: Pengcheng Xiong
 Fix For: 1.2.0

 Attachments: HIVE-10120.01.patch, HIVE-10120.02.patch


 Since we don't allow users to query column names with a dot in the middle, 
 such as emp.no, we should not allow users to create tables with such columns, 
 which cannot be queried. Fix the documentation to reflect this change.
 Here is an example. Consider this table:
 {code}
 CREATE TABLE a (`emp.no` string);
 select `emp.no` from a; fails with this message:
 FAILED: RuntimeException java.lang.RuntimeException: cannot find field emp 
 from [0:emp.no]
 {code}
 The hive documentation needs to be fixed:
 {code}
  (https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL) seems 
 to  indicate that any Unicode character can go between the backticks in the 
 select statement, but it doesn’t like the dot/colon or even select * when 
 there is a column that has a dot/colon. 
 {code}





[jira] [Updated] (HIVE-9582) HCatalog should use IMetaStoreClient interface

2015-04-30 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-9582:
---
Attachment: HIVE-9582.6.patch

Attaching rebased version of patch with corrections that I asked for in my 
review earlier. [~thejas], could you please do a backup-review?

 HCatalog should use IMetaStoreClient interface
 --

 Key: HIVE-9582
 URL: https://issues.apache.org/jira/browse/HIVE-9582
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog, Metastore
Affects Versions: 0.14.0, 0.13.1
Reporter: Thiruvel Thirumoolan
Assignee: Thiruvel Thirumoolan
  Labels: hcatalog, metastore, rolling_upgrade
 Attachments: HIVE-9582.1.patch, HIVE-9582.2.patch, HIVE-9582.3.patch, 
 HIVE-9582.4.patch, HIVE-9582.5.patch, HIVE-9582.6.patch, HIVE-9583.1.patch


 Hive uses IMetaStoreClient, which makes using RetryingMetaStoreClient easy: 
 during a failure, the client retries and possibly succeeds. But HCatalog has 
 long been using HiveMetaStoreClient directly, so failures are costly, 
 especially if they occur during the commit stage of a job. It is also not 
 possible to do a rolling upgrade of the MetaStore Server.





[jira] [Resolved] (HIVE-9712) Row count and data size are set to LONG.MAX when source table has 0 rows

2015-04-30 Thread Mostafa Mokhtar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mostafa Mokhtar resolved HIVE-9712.
---
Resolution: Cannot Reproduce

Metastore mismatch.
{code}
Map 12
Map Operator Tree:
TableScan
  alias: ship_mode
  filterExpr: ((sm_carrier) IN ('DIAMOND', 'AIRBORNE') and 
sm_ship_mode_sk is not null) (type: boolean)
  Statistics: Num rows: 0 Data size: 45 Basic stats: PARTIAL 
Column stats: NONE
  Filter Operator
predicate: ((sm_carrier) IN ('DIAMOND', 'AIRBORNE') and 
sm_ship_mode_sk is not null) (type: boolean)
Statistics: Num rows: 0 Data size: 0 Basic stats: NONE 
Column stats: NONE
Select Operator
  expressions: sm_ship_mode_sk (type: int)
  outputColumnNames: _col0
  Statistics: Num rows: 0 Data size: 0 Basic stats: NONE 
Column stats: NONE
  Reduce Output Operator
key expressions: _col0 (type: int)
sort order: +
Map-reduce partition columns: _col0 (type: int)
Statistics: Num rows: 0 Data size: 0 Basic stats: NONE 
Column stats: NONE
Execution mode: vectorized
{code}

 Row count and data size are set to LONG.MAX when source table has 0 rows
 

 Key: HIVE-9712
 URL: https://issues.apache.org/jira/browse/HIVE-9712
 Project: Hive
  Issue Type: Bug
  Components: Physical Optimizer
Affects Versions: 0.14.0
Reporter: Mostafa Mokhtar
Assignee: Prasanth Jayachandran

 TPC-DS Q66 generates an inefficient plan because the cardinality estimate of a 
 dimension table gets set to 9223372036854775807 (Long.MAX_VALUE).
 {code}
 Map 10 
 Map Operator Tree:
 TableScan
   alias: ship_mode
   filterExpr: ((sm_carrier) IN ('DIAMOND', 'AIRBORNE') and 
 sm_ship_mode_sk is not null) (type: boolean)
   Statistics: Num rows: 0 Data size: 47 Basic stats: PARTIAL 
 Column stats: COMPLETE
   Filter Operator
 predicate: ((sm_carrier) IN ('DIAMOND', 'AIRBORNE') and 
 sm_ship_mode_sk is not null) (type: boolean)
 Statistics: Num rows: 9223372036854775807 Data size: 
 9223372036854775807 Basic stats: COMPLETE Column stats: COMPLETE
 Select Operator
   expressions: sm_ship_mode_sk (type: int)
   outputColumnNames: _col0
   Statistics: Num rows: 9223372036854775807 Data size: 
 9223372036854775807 Basic stats: COMPLETE Column stats: COMPLETE
   Reduce Output Operator
 key expressions: _col0 (type: int)
 sort order: +
 Map-reduce partition columns: _col0 (type: int)
 Statistics: Num rows: 9223372036854775807 Data size: 
 9223372036854775807 Basic stats: COMPLETE Column stats: COMPLETE
 Execution mode: vectorized
 {code}
 Full plan 
 {code}
 explain  
 select   
  w_warehouse_name
   ,w_warehouse_sq_ft
   ,w_city
   ,w_county
   ,w_state
   ,w_country
 ,ship_carriers
 ,year
   ,sum(jan_sales) as jan_sales
   ,sum(feb_sales) as feb_sales
   ,sum(mar_sales) as mar_sales
   ,sum(apr_sales) as apr_sales
   ,sum(may_sales) as may_sales
   ,sum(jun_sales) as jun_sales
   ,sum(jul_sales) as jul_sales
   ,sum(aug_sales) as aug_sales
   ,sum(sep_sales) as sep_sales
   ,sum(oct_sales) as oct_sales
   ,sum(nov_sales) as nov_sales
   ,sum(dec_sales) as dec_sales
   ,sum(jan_sales/w_warehouse_sq_ft) as jan_sales_per_sq_foot
   ,sum(feb_sales/w_warehouse_sq_ft) as feb_sales_per_sq_foot
   ,sum(mar_sales/w_warehouse_sq_ft) as mar_sales_per_sq_foot
   ,sum(apr_sales/w_warehouse_sq_ft) as apr_sales_per_sq_foot
   ,sum(may_sales/w_warehouse_sq_ft) as may_sales_per_sq_foot
   ,sum(jun_sales/w_warehouse_sq_ft) as jun_sales_per_sq_foot
   ,sum(jul_sales/w_warehouse_sq_ft) as jul_sales_per_sq_foot
   ,sum(aug_sales/w_warehouse_sq_ft) as aug_sales_per_sq_foot
   ,sum(sep_sales/w_warehouse_sq_ft) as sep_sales_per_sq_foot
   ,sum(oct_sales/w_warehouse_sq_ft) as oct_sales_per_sq_foot
   ,sum(nov_sales/w_warehouse_sq_ft) as nov_sales_per_sq_foot
   ,sum(dec_sales/w_warehouse_sq_ft) as dec_sales_per_sq_foot
   ,sum(jan_net) as jan_net
   ,sum(feb_net) as feb_net
   ,sum(mar_net) as mar_net
   ,sum(apr_net) as apr_net
   ,sum(may_net) as may_net
   ,sum(jun_net) as jun_net
   

[jira] [Commented] (HIVE-10502) Cannot specify log4j.properties file location in Beeline

2015-04-30 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522210#comment-14522210
 ] 

Thejas M Nair commented on HIVE-10502:
--

The idea was that almost all users would not want that noise from the zookeeper 
code. WARN-level logging in beeline seems appropriate, while INFO level is 
appropriate for HS2, the metastore, etc. So I think a separate beeline-specific 
log file is useful.
I am not against making log file name configurable, I am just trying to 
understand what you expect users to point it to.


 Cannot specify log4j.properties file location in Beeline
 

 Key: HIVE-10502
 URL: https://issues.apache.org/jira/browse/HIVE-10502
 Project: Hive
  Issue Type: Bug
  Components: Beeline
Affects Versions: 1.1.0
Reporter: Szehon Ho
Assignee: Chaoyu Tang

 In HiveCLI, HiveServer2, HMS, etc, the following is called early in the 
 startup to initialize log4j logging: LogUtils.initHiveLog4j().
 However, this does not seem to be the case in Beeline, which also initializes 
 log4j, as the following stack trace shows:
 {noformat}
   at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
   at org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:66)
   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:270)
   at 
 org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:156)
   at 
 org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:132)
   at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:657)
   at org.apache.hadoop.util.VersionInfo.<clinit>(VersionInfo.java:37)
 {noformat}
 It would be good to specify it, so it doesn't pick the first one in the 
 classpath.





[jira] [Commented] (HIVE-10470) LLAP: NPE in IO when returning 0 rows with no projection

2015-04-30 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522247#comment-14522247
 ] 

Prasanth Jayachandran commented on HIVE-10470:
--

Actually it's not an issue; the 0 size in q.out is a result of the newly added 
test case. Anyway, I will put up another test case to verify a simple count(*) 
case with no projection. 

 LLAP: NPE in IO when returning 0 rows with no projection
 

 Key: HIVE-10470
 URL: https://issues.apache.org/jira/browse/HIVE-10470
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Prasanth Jayachandran
 Fix For: llap

 Attachments: HIVE-10470.1.patch


 Looks like a trivial fix, unless I'm missing something. I may do it later if 
 you don't ;)
 {noformat}
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.hive.ql.io.orc.EncodedTreeReaderFactory.createEncodedTreeReader(EncodedTreeReaderFactory.java:1764)
   at 
 org.apache.hadoop.hive.llap.io.decode.OrcEncodedDataConsumer.decodeBatch(OrcEncodedDataConsumer.java:92)
   at 
 org.apache.hadoop.hive.llap.io.decode.OrcEncodedDataConsumer.decodeBatch(OrcEncodedDataConsumer.java:39)
   at 
 org.apache.hadoop.hive.llap.io.decode.EncodedDataConsumer.consumeData(EncodedDataConsumer.java:116)
   at 
 org.apache.hadoop.hive.llap.io.decode.EncodedDataConsumer.consumeData(EncodedDataConsumer.java:36)
   at 
 org.apache.hadoop.hive.ql.io.orc.EncodedReaderImpl.readEncodedColumns(EncodedReaderImpl.java:329)
   at 
 org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:299)
   at 
 org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:55)
   at 
 org.apache.hadoop.hive.common.CallableWithNdc.call(CallableWithNdc.java:37)
   ... 4 more
 {noformat}
 Running q file
 {noformat}
 SET hive.vectorized.execution.enabled=true;
 SET hive.llap.io.enabled=false;
 SET hive.exec.orc.default.row.index.stride=1000;
 SET hive.optimize.index.filter=true;
 DROP TABLE orc_llap;
 CREATE TABLE orc_llap(
 ctinyint TINYINT,
 csmallint SMALLINT,
 cint INT,
 cbigint BIGINT,
 cfloat FLOAT,
 cdouble DOUBLE,
 cstring1 STRING,
 cstring2 STRING,
 ctimestamp1 TIMESTAMP,
 ctimestamp2 TIMESTAMP,
 cboolean1 BOOLEAN,
 cboolean2 BOOLEAN)
 STORED AS ORC tblproperties ("orc.compress"="ZLIB");
 insert into table orc_llap
 select ctinyint, csmallint, cint, cbigint, cfloat, cdouble, cstring1, 
 cstring2, ctimestamp1, ctimestamp2, cboolean1, cboolean2
 from alltypesorc limit 10;
 SET hive.llap.io.enabled=true;
 select count(*) from orc_llap where cint > 6000;
 DROP TABLE orc_llap;
 {noformat}





[jira] [Commented] (HIVE-10502) Cannot specify log4j.properties file location in Beeline

2015-04-30 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522191#comment-14522191
 ] 

Szehon Ho commented on HIVE-10502:
--

I thought that in the original JIRA (HIVE-8772) you had started to see some 
zookeeper logs from beeline in service discovery mode that users might want to 
customize?

 Cannot specify log4j.properties file location in Beeline
 

 Key: HIVE-10502
 URL: https://issues.apache.org/jira/browse/HIVE-10502
 Project: Hive
  Issue Type: Bug
  Components: Beeline
Affects Versions: 1.1.0
Reporter: Szehon Ho
Assignee: Chaoyu Tang

 In HiveCLI, HiveServer2, HMS, etc, the following is called early in the 
 startup to initialize log4j logging: LogUtils.initHiveLog4j().
 However, this does not seem to be the case in Beeline, which also initializes 
 log4j, as the following stack trace shows:
 {noformat}
   at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
   at org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:66)
   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:270)
   at 
 org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:156)
   at 
 org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:132)
   at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:657)
   at org.apache.hadoop.util.VersionInfo.<clinit>(VersionInfo.java:37)
 {noformat}
 It would be good to specify it, so it doesn't pick the first one in the 
 classpath.





[jira] [Updated] (HIVE-10286) SARGs: Type Safety via PredicateLeaf.type

2015-04-30 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-10286:
-
Affects Version/s: 1.3.0
   1.2.0

 SARGs: Type Safety via PredicateLeaf.type
 -

 Key: HIVE-10286
 URL: https://issues.apache.org/jira/browse/HIVE-10286
 Project: Hive
  Issue Type: Bug
  Components: File Formats, Serializers/Deserializers
Affects Versions: 1.2.0, 1.3.0
Reporter: Gopal V
Assignee: Prasanth Jayachandran
 Attachments: HIVE-10286.1.patch, HIVE-10286.2.patch, 
 HIVE-10286.4.patch


 The Sargs impl today converts the statsObj to the type of the predicate 
 object before doing any comparisons.
 To satisfy the PPD requirements, the conversion has to be coerced to the type 
 specified in PredicateLeaf.type.
 The type conversions in Hive are standard and have a fixed promotion order.
 Therefore the PredicateLeaf has to do type changes which match the exact 
 order of type coercions offered by the FilterOperator.
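The fixed promotion order can be illustrated with a small coercion helper. This is a sketch under the assumption that integral types promote to Long and floating-point types to Double before comparison; it mimics, but does not reproduce, the actual PredicateLeaf/FilterOperator coercion code:

```java
public class SargCoerce {
    /** Coerce a literal to a canonical comparison type, mimicking (not
     *  reproducing) a fixed promotion order: integral -> Long,
     *  floating point -> Double, everything else unchanged. */
    static Comparable<?> toCanonical(Object value) {
        if (value instanceof Byte || value instanceof Short
                || value instanceof Integer || value instanceof Long) {
            return ((Number) value).longValue();
        }
        if (value instanceof Float || value instanceof Double) {
            return ((Number) value).doubleValue();
        }
        return (Comparable<?>) value;
    }

    public static void main(String[] args) {
        // 100 (int) and 100L compare equal once both are canonicalized to Long;
        // without coercion, Integer.equals(Long) is false.
        System.out.println(toCanonical(100).equals(toCanonical(100L))); // true
    }
}
```

The point of canonicalizing both sides is that predicate literal and column statistic must land on the same type before any comparison is meaningful.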





[jira] [Updated] (HIVE-9152) Dynamic Partition Pruning [Spark Branch]

2015-04-30 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HIVE-9152:
---
Attachment: HIVE-9152.6-spark.patch

 Dynamic Partition Pruning [Spark Branch]
 

 Key: HIVE-9152
 URL: https://issues.apache.org/jira/browse/HIVE-9152
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Affects Versions: spark-branch
Reporter: Brock Noland
Assignee: Chao Sun
 Attachments: HIVE-9152.1-spark.patch, HIVE-9152.2-spark.patch, 
 HIVE-9152.3-spark.patch, HIVE-9152.4-spark.patch, HIVE-9152.5-spark.patch, 
 HIVE-9152.6-spark.patch


 Tez implemented dynamic partition pruning in HIVE-7826. This is a nice 
 optimization and we should implement the same in HOS.





[jira] [Commented] (HIVE-10454) Query against partitioned table in strict mode failed with No partition predicate found even if partition predicate is specified.

2015-04-30 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522169#comment-14522169
 ] 

Aihua Xu commented on HIVE-10454:
-

The failures seem unrelated to the patch.

 Query against partitioned table in strict mode failed with No partition 
 predicate found even if partition predicate is specified.
 ---

 Key: HIVE-10454
 URL: https://issues.apache.org/jira/browse/HIVE-10454
 Project: Hive
  Issue Type: Bug
Reporter: Aihua Xu
Assignee: Aihua Xu
 Attachments: HIVE-10454.patch


 The following queries fail:
 {noformat}
 create table t1 (c1 int) PARTITIONED BY (c2 string);
 set hive.mapred.mode=strict;
 select * from t1 where t1.c2 > to_date(date_add(from_unixtime(unix_timestamp()), 1));
 {noformat}
 The query failed with No partition predicate found for alias t1.





[jira] [Comment Edited] (HIVE-9582) HCatalog should use IMetaStoreClient interface

2015-04-30 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522183#comment-14522183
 ] 

Thejas M Nair edited comment on HIVE-9582 at 4/30/15 8:20 PM:
--

Looks good. +1 pending tests. 
One minor nit: regarding the deprecation annotation, it would be useful to name 
the alternate function in the deprecation message (it's much quicker to parse 
than the entire function description).

{noformat}
Example -
@deprecated use {@link #new()} instead. 

{noformat}


was (Author: thejas):

Looks good. +1 pending tests. 
One minor nit: regarding the deprecation annotation, it would be useful to name 
the alternate function in the deprecation message (it's much quicker to parse 
than the entire function description).

Example -
@deprecated use {@link #new()} instead. 

 HCatalog should use IMetaStoreClient interface
 --

 Key: HIVE-9582
 URL: https://issues.apache.org/jira/browse/HIVE-9582
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog, Metastore
Affects Versions: 0.14.0, 0.13.1
Reporter: Thiruvel Thirumoolan
Assignee: Thiruvel Thirumoolan
  Labels: hcatalog, metastore, rolling_upgrade
 Attachments: HIVE-9582.1.patch, HIVE-9582.2.patch, HIVE-9582.3.patch, 
 HIVE-9582.4.patch, HIVE-9582.5.patch, HIVE-9582.6.patch, HIVE-9583.1.patch


 Hive uses IMetaStoreClient and it makes using RetryingMetaStoreClient easy. 
 Hence during a failure, the client retries and possibly succeeds. But 
 HCatalog has long been using HiveMetaStoreClient directly and hence failures 
 are costly, especially if they are during the commit stage of a job. Its also 
 not possible to do rolling upgrade of MetaStore Server.





[jira] [Commented] (HIVE-9582) HCatalog should use IMetaStoreClient interface

2015-04-30 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522183#comment-14522183
 ] 

Thejas M Nair commented on HIVE-9582:
-


Looks good. +1 pending tests. 
One minor nit: regarding the deprecation annotation, it would be useful to name 
the alternate function in the deprecation message (it's much quicker to parse 
than the entire function description).

Example -
@deprecated use {@link #new()} instead. 
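A fuller version of the suggested pattern, with hypothetical method names (these are not Hive APIs, just an illustration of pairing @Deprecated with a @deprecated Javadoc tag that names the replacement):

```java
public class MetaClientApi {
    /**
     * @deprecated use {@link #getClient()} instead; this variant bypasses
     *             the retrying wrapper.
     */
    @Deprecated
    public String getClientDirect() {
        return getClient(); // delegate so both paths stay consistent
    }

    public String getClient() {
        return "retrying-client";
    }
}
```

Delegating the deprecated method to its replacement keeps the two code paths from drifting apart while callers migrate.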

 HCatalog should use IMetaStoreClient interface
 --

 Key: HIVE-9582
 URL: https://issues.apache.org/jira/browse/HIVE-9582
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog, Metastore
Affects Versions: 0.14.0, 0.13.1
Reporter: Thiruvel Thirumoolan
Assignee: Thiruvel Thirumoolan
  Labels: hcatalog, metastore, rolling_upgrade
 Attachments: HIVE-9582.1.patch, HIVE-9582.2.patch, HIVE-9582.3.patch, 
 HIVE-9582.4.patch, HIVE-9582.5.patch, HIVE-9582.6.patch, HIVE-9583.1.patch


 Hive uses IMetaStoreClient and it makes using RetryingMetaStoreClient easy. 
 Hence during a failure, the client retries and possibly succeeds. But 
 HCatalog has long been using HiveMetaStoreClient directly and hence failures 
 are costly, especially if they are during the commit stage of a job. Its also 
 not possible to do rolling upgrade of MetaStore Server.





[jira] [Commented] (HIVE-10519) Move TestGenericUDF classes to udf.generic package

2015-04-30 Thread Alexander Pivovarov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522208#comment-14522208
 ] 

Alexander Pivovarov commented on HIVE-10519:


Jason, thank you for your review.

The RBT diff link does not show the changes for 3 files correctly, but the 
raw diff link shows all the changes:
https://reviews.apache.org/r/33637/diff/raw/


 Move TestGenericUDF classes to udf.generic package
 --

 Key: HIVE-10519
 URL: https://issues.apache.org/jira/browse/HIVE-10519
 Project: Hive
  Issue Type: Bug
  Components: UDF
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
Priority: Trivial
 Attachments: HIVE-10519.1.patch, HIVE-10519.2.patch


 The following TestGenericUDF classes are located in udf package instead of 
 udf.generic.
 {code}
 TestGenericUDFDate.java
 TestGenericUDFDateAdd.java
 TestGenericUDFDateDiff.java
 TestGenericUDFDateSub.java
 TestGenericUDFUtils.java
 {code}





[jira] [Commented] (HIVE-10539) set default value of hive.repl.task.factory

2015-04-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1454#comment-1454
 ] 

Hive QA commented on HIVE-10539:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12729297/HIVE-10539.1.patch

{color:red}ERROR:{color} -1 due to 15 failed/errored test(s), 8829 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver-bucketmapjoin6.q-constprog_partitioner.q-infer_bucket_sort_dyn_part.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-external_table_with_space_in_location_path.q-infer_bucket_sort_merge.q-auto_sortmerge_join_16.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-groupby2.q-import_exported_table.q-bucketizedhiveinputformat.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-index_bitmap3.q-stats_counter_partitioned.q-temp_table_external.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_map_operators.q-join1.q-bucketmapjoin7.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_num_buckets.q-disable_merge_for_bucketing.q-uber_reduce.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_reducers_power_two.q-scriptfile1.q-scriptfile1_win.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-leftsemijoin_mr.q-load_hdfs_file_with_space_in_the_name.q-root_dir_external_table.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-list_bucket_dml_10.q-bucket_num_reducers.q-bucket6.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-load_fs2.q-file_with_header_footer.q-ql_rewrite_gbtoidx_cbo_1.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-parallel_orderby.q-reduce_deduplicate.q-ql_rewrite_gbtoidx_cbo_2.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-ql_rewrite_gbtoidx.q-smb_mapjoin_8.q - did not produce a 
TEST-*.xml file
TestMinimrCliDriver-schemeAuthority2.q-bucket4.q-input16_cc.q-and-1-more - did 
not produce a TEST-*.xml file
org.apache.hive.hcatalog.api.repl.TestReplicationTask.testCreate
org.apache.hive.jdbc.TestSSL.testSSLConnectionWithProperty
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3666/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3666/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3666/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 15 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12729297 - PreCommit-HIVE-TRUNK-Build

 set default value of hive.repl.task.factory
 ---

 Key: HIVE-10539
 URL: https://issues.apache.org/jira/browse/HIVE-10539
 Project: Hive
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-10539.1.patch


 hive.repl.task.factory does not have a default value set. It should be set to 
 org.apache.hive.hcatalog.api.repl.exim.EximReplicationTaskFactory.
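Until a default lands, the value can be set explicitly; a sketch of the hive-site.xml override using the factory class named above:

```xml
<property>
  <name>hive.repl.task.factory</name>
  <value>org.apache.hive.hcatalog.api.repl.exim.EximReplicationTaskFactory</value>
</property>
```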





[jira] [Resolved] (HIVE-10470) LLAP: NPE in IO when returning 0 rows with no projection

2015-04-30 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran resolved HIVE-10470.
--
Resolution: Fixed

 LLAP: NPE in IO when returning 0 rows with no projection
 

 Key: HIVE-10470
 URL: https://issues.apache.org/jira/browse/HIVE-10470
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Prasanth Jayachandran
 Fix For: llap

 Attachments: HIVE-10470.1.patch


 Looks like a trivial fix, unless I'm missing something. I may do it later if 
 you don't ;)
 {noformat}
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.hive.ql.io.orc.EncodedTreeReaderFactory.createEncodedTreeReader(EncodedTreeReaderFactory.java:1764)
   at 
 org.apache.hadoop.hive.llap.io.decode.OrcEncodedDataConsumer.decodeBatch(OrcEncodedDataConsumer.java:92)
   at 
 org.apache.hadoop.hive.llap.io.decode.OrcEncodedDataConsumer.decodeBatch(OrcEncodedDataConsumer.java:39)
   at 
 org.apache.hadoop.hive.llap.io.decode.EncodedDataConsumer.consumeData(EncodedDataConsumer.java:116)
   at 
 org.apache.hadoop.hive.llap.io.decode.EncodedDataConsumer.consumeData(EncodedDataConsumer.java:36)
   at 
 org.apache.hadoop.hive.ql.io.orc.EncodedReaderImpl.readEncodedColumns(EncodedReaderImpl.java:329)
   at 
 org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:299)
   at 
 org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:55)
   at 
 org.apache.hadoop.hive.common.CallableWithNdc.call(CallableWithNdc.java:37)
   ... 4 more
 {noformat}
 Running q file
 {noformat}
 SET hive.vectorized.execution.enabled=true;
 SET hive.llap.io.enabled=false;
 SET hive.exec.orc.default.row.index.stride=1000;
 SET hive.optimize.index.filter=true;
 DROP TABLE orc_llap;
 CREATE TABLE orc_llap(
 ctinyint TINYINT,
 csmallint SMALLINT,
 cint INT,
 cbigint BIGINT,
 cfloat FLOAT,
 cdouble DOUBLE,
 cstring1 STRING,
 cstring2 STRING,
 ctimestamp1 TIMESTAMP,
 ctimestamp2 TIMESTAMP,
 cboolean1 BOOLEAN,
 cboolean2 BOOLEAN)
 STORED AS ORC tblproperties ("orc.compress"="ZLIB");
 insert into table orc_llap
 select ctinyint, csmallint, cint, cbigint, cfloat, cdouble, cstring1, 
 cstring2, ctimestamp1, ctimestamp2, cboolean1, cboolean2
 from alltypesorc limit 10;
 SET hive.llap.io.enabled=true;
 select count(*) from orc_llap where cint > 6000;
 DROP TABLE orc_llap;
 {noformat}





[jira] [Updated] (HIVE-7428) OrcSplit fails to account for columnar projections in its size estimates

2015-04-30 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-7428:
--
Attachment: HIVE-7428.2.patch

 OrcSplit fails to account for columnar projections in its size estimates
 

 Key: HIVE-7428
 URL: https://issues.apache.org/jira/browse/HIVE-7428
 Project: Hive
  Issue Type: Bug
Affects Versions: 1.2.0, 1.3.0
Reporter: Gopal V
Assignee: Gopal V
 Attachments: HIVE-7428.1.patch, HIVE-7428.2.patch


 Currently, ORC generates splits based on stripe offset + stripe length.
 This means that the splits for all columnar projections are exactly the same 
 size, despite reading the footer which gives the estimated sizes for each 
 column.
 This is a holdover from FileSplit, which uses getLen() as the I/O cost of 
 reading a file in a map-task.
 RCFile didn't have a footer with column statistics information, but for ORC 
 this would be extremely useful to reduce task overheads when processing 
 extremely wide tables with highly selective column projections.
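The proposed accounting reduces to summing the footer-reported sizes of only the projected columns. A sketch with stand-in numbers — the column-size map here substitutes for ORC's actual footer statistics API:

```java
import java.util.List;
import java.util.Map;

public class ProjectedSplitSize {
    /** Estimate split cost as the bytes of only the projected columns,
     *  falling back to the full stripe length when stats are missing.
     *  Column sizes here are stand-ins for ORC footer statistics. */
    static long estimate(long stripeLength, Map<String, Long> colBytes,
                         List<String> projected) {
        if (colBytes.isEmpty()) {
            return stripeLength; // no stats: charge the whole stripe
        }
        long sum = 0;
        for (String col : projected) {
            sum += colBytes.getOrDefault(col, 0L);
        }
        return Math.min(sum, stripeLength); // never exceed physical length
    }

    public static void main(String[] args) {
        Map<String, Long> sizes = Map.of("a", 10_000L, "b", 90_000L);
        // Selecting only column "a" should cost ~10% of the stripe, not 100%.
        System.out.println(estimate(100_000L, sizes, List.of("a"))); // 10000
    }
}
```

For a highly selective projection over a wide table, this is the difference between scheduling against 10 KB of real I/O and scheduling against the full 100 KB stripe.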





[jira] [Updated] (HIVE-7428) OrcSplit fails to account for columnar projections in its size estimates

2015-04-30 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-7428:
--
Affects Version/s: 1.3.0
   1.2.0

 OrcSplit fails to account for columnar projections in its size estimates
 

 Key: HIVE-7428
 URL: https://issues.apache.org/jira/browse/HIVE-7428
 Project: Hive
  Issue Type: Bug
Affects Versions: 1.2.0, 1.3.0
Reporter: Gopal V
Assignee: Gopal V
 Attachments: HIVE-7428.1.patch, HIVE-7428.2.patch


 Currently, ORC generates splits based on stripe offset + stripe length.
 This means that the splits for all columnar projections are exactly the same 
 size, despite reading the footer which gives the estimated sizes for each 
 column.
 This is a holdover from FileSplit, which uses getLen() as the I/O cost of 
 reading a file in a map-task.
 RCFile didn't have a footer with column statistics information, but for ORC 
 this would be extremely useful to reduce task overheads when processing 
 extremely wide tables with highly selective column projections.





[jira] [Commented] (HIVE-10491) Refactor HBaseStorageHandler::configureJobConf() and configureTableJobProperties

2015-04-30 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522160#comment-14522160
 ] 

Sergey Shelukhin commented on HIVE-10491:
-

Please make sure to test this with Tez too, it's very fragile code :(

 Refactor HBaseStorageHandler::configureJobConf() and 
 configureTableJobProperties
 

 Key: HIVE-10491
 URL: https://issues.apache.org/jira/browse/HIVE-10491
 Project: Hive
  Issue Type: Bug
  Components: HBase Handler
Reporter: Ashutosh Chauhan

 3 tasks as a part of this refactor:
 * Bump hbase version to 1.x
 * Remove HIVE-6356 hack for counter class from configureJobConf()
 * Make use of TableMapReduceUtil.initTableSnapshotMapperJob() instead of 
 manually doing steps done in that method in configureTableJobProperties()





[jira] [Resolved] (HIVE-10558) LLAP: Add test case for all row groups selection with no column projection

2015-04-30 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran resolved HIVE-10558.
--
Resolution: Fixed

Committed to llap branch.

 LLAP: Add test case for all row groups selection with no column projection
 --

 Key: HIVE-10558
 URL: https://issues.apache.org/jira/browse/HIVE-10558
 Project: Hive
  Issue Type: Sub-task
Affects Versions: llap
Reporter: Prasanth Jayachandran
Assignee: Prasanth Jayachandran
 Attachments: HIVE-10558.patch


 There is a test case for no-row-group selection with filter-column-only projection. 
 Add a test case for all-row-group selection with no column projection.





[jira] [Updated] (HIVE-10558) LLAP: Add test case for all row groups selection with no column projection

2015-04-30 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-10558:
-
Attachment: HIVE-10558.patch

 LLAP: Add test case for all row groups selection with no column projection
 --

 Key: HIVE-10558
 URL: https://issues.apache.org/jira/browse/HIVE-10558
 Project: Hive
  Issue Type: Sub-task
Affects Versions: llap
Reporter: Prasanth Jayachandran
Assignee: Prasanth Jayachandran
 Attachments: HIVE-10558.patch


 There is a test case for no-row-group selection with filter-column-only projection. 
 Add a test case for all-row-group selection with no column projection.





[jira] [Commented] (HIVE-10384) RetryingMetaStoreClient does not retry wrapped TTransportExceptions

2015-04-30 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522105#comment-14522105
 ] 

Sushanth Sowmyan commented on HIVE-10384:
-

And also ported to 1.2. Thanks Chaoyu & Szehon

 RetryingMetaStoreClient does not retry wrapped TTransportExceptions
 ---

 Key: HIVE-10384
 URL: https://issues.apache.org/jira/browse/HIVE-10384
 Project: Hive
  Issue Type: Bug
  Components: Clients
Reporter: Eric Liang
Assignee: Chaoyu Tang
 Fix For: 1.3.0

 Attachments: HIVE-10384.1.patch, HIVE-10384.patch


 This bug is very similar to HIVE-9436, in that a TTransportException wrapped 
 in a MetaException will not be retried. RetryingMetaStoreClient has a block 
 of code above the MetaException handler that retries thrift exceptions, but 
 this doesn't work when the exception is wrapped.
 {code}
 if ((e.getCause() instanceof TApplicationException) ||
     (e.getCause() instanceof TProtocolException) ||
     (e.getCause() instanceof TTransportException)) {
   caughtException = (TException) e.getCause();
 } else if ((e.getCause() instanceof MetaException) &&
     e.getCause().getMessage().matches("(?s).*JDO[a-zA-Z]*Exception.*")) {
   caughtException = (MetaException) e.getCause();
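Because the block above inspects only the immediate cause, a transport exception nested one level deeper (wrapped inside a MetaException) is missed. A generic sketch of walking the whole cause chain instead — stand-in exception types here, not Thrift's TTransportException or Hive's actual fix:

```java
public class CauseChain {
    /** Return the first throwable in t's cause chain (including t itself)
     *  assignable to type, or null. Walking the full chain catches
     *  exceptions wrapped more than one level deep. */
    static <T extends Throwable> T findCause(Throwable t, Class<T> type) {
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            if (type.isInstance(cur)) {
                return type.cast(cur);
            }
        }
        return null;
    }

    public static void main(String[] args) {
        Exception transport = new java.net.SocketException("broken pipe");
        Exception wrapped = new RuntimeException("MetaException stand-in",
                new RuntimeException("reflective wrapper", transport));
        // An immediate-cause check would miss it; the chain walk finds it
        // two levels down.
        System.out.println(findCause(wrapped, java.net.SocketException.class) == transport);
    }
}
```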





[jira] [Commented] (HIVE-10444) HIVE-10223 breaks hadoop-1 build

2015-04-30 Thread Alexander Pivovarov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522159#comment-14522159
 ] 

Alexander Pivovarov commented on HIVE-10444:


+1

 HIVE-10223 breaks hadoop-1 build
 

 Key: HIVE-10444
 URL: https://issues.apache.org/jira/browse/HIVE-10444
 Project: Hive
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Prasanth Jayachandran
Assignee: Chris Nauroth
 Attachments: HIVE-10444.1.patch, HIVE-10444.2.patch


 FileStatus.isFile() and FileStatus.isDirectory() methods added in HIVE-10223 
 are not present in hadoop 1.





[jira] [Updated] (HIVE-6684) Beeline does not accept comments that are preceded by spaces

2015-04-30 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-6684:

Assignee: Jeremy Beard

 Beeline does not accept comments that are preceded by spaces
 

 Key: HIVE-6684
 URL: https://issues.apache.org/jira/browse/HIVE-6684
 Project: Hive
  Issue Type: Bug
  Components: CLI
Affects Versions: 0.10.0
Reporter: Jeremy Beard
Assignee: Jeremy Beard
  Labels: TODOC14
 Fix For: 0.14.0

 Attachments: HIVE-6684.1.patch, HIVE-6684.2.patch


 Beeline throws an error if single-line comments are indented with spaces. 
 This works in the embedded Hive CLI.
 For example:
 SELECT
-- this is the field we want
field
 FROM
table;
 Error: Error while processing statement: FAILED: ParseException line 1:71 
 cannot recognize input near 'EOF' 'EOF' 'EOF' in select clause 
 (state=42000,code=4)





[jira] [Commented] (HIVE-10543) improve error message in MetaStoreAuthzAPIAuthorizerEmbedOnly

2015-04-30 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522237#comment-14522237
 ] 

Thejas M Nair commented on HIVE-10543:
--

[~sushanth] Can you please review this ?
This is a minor change, but a useful improvement to the error message. It would 
be good to have it in 1.2.0.


 improve error message in MetaStoreAuthzAPIAuthorizerEmbedOnly
 -

 Key: HIVE-10543
 URL: https://issues.apache.org/jira/browse/HIVE-10543
 Project: Hive
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-10543.1.patch


 From the error message in MetaStoreAuthzAPIAuthorizerEmbedOnly , it is not 
 clear that the command needs to be run via HS2 using embedded metastore.





[jira] [Assigned] (HIVE-10482) LLAP: AsertionError cannot allocate when reading from orc

2015-04-30 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HIVE-10482:
---

Assignee: Sergey Shelukhin

 LLAP: AsertionError cannot allocate when reading from orc
 -

 Key: HIVE-10482
 URL: https://issues.apache.org/jira/browse/HIVE-10482
 Project: Hive
  Issue Type: Sub-task
Reporter: Siddharth Seth
Assignee: Sergey Shelukhin
 Fix For: llap


 This was from a run of tpch query 1. [~sershe] - not sure if you've already 
 seen this. Creating a jira so that it doesn't get lost.
 {code}
 2015-04-24 13:11:54,180 
 [TezTaskRunner_attempt_1429683757595_0326_4_00_000199_0(container_1_0326_01_003216_sseth_20150424131137_8ec6200c-77c8-43ea-a6a3-a0ab1da6e1ac:4_Map
  1_199_0)] ERROR org.apache.hadoop.hive.ql.exec.tez.TezProcessor: 
 org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: 
 java.io.IOException: java.lang.AssertionError: Cannot allocate
 at 
 org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:74)
 at 
 org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:314)
 at 
 org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:148)
 at 
 org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:137)
 at 
 org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:329)
 at 
 org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:180)
 at 
 org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:172)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
 at 
 org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:172)
 at 
 org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:168)
 at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.Thread.run(Thread.java:745)
 Caused by: java.io.IOException: java.io.IOException: 
 java.lang.AssertionError: Cannot allocate
 at 
 org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
 at 
 org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
 at 
 org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:355)
 at 
 org.apache.hadoop.hive.ql.io.HiveRecordReader.doNext(HiveRecordReader.java:79)
 at 
 org.apache.hadoop.hive.ql.io.HiveRecordReader.doNext(HiveRecordReader.java:33)
 at 
 org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:116)
 at 
 org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:137)
 at 
 org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:113)
 at 
 org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:62)
 ... 16 more
 Caused by: java.io.IOException: java.lang.AssertionError: Cannot allocate
 at 
 org.apache.hadoop.hive.llap.io.api.impl.LlapInputFormat$LlapRecordReader.rethrowErrorIfAny(LlapInputFormat.java:257)
 at 
 org.apache.hadoop.hive.llap.io.api.impl.LlapInputFormat$LlapRecordReader.nextCvb(LlapInputFormat.java:209)
 at 
 org.apache.hadoop.hive.llap.io.api.impl.LlapInputFormat$LlapRecordReader.next(LlapInputFormat.java:147)
 at 
 org.apache.hadoop.hive.llap.io.api.impl.LlapInputFormat$LlapRecordReader.next(LlapInputFormat.java:97)
 at 
 org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:350)
 ... 22 more
 Caused by: java.lang.AssertionError: Cannot allocate
 at 
 org.apache.hadoop.hive.ql.io.orc.InStream.readEncodedStream(InStream.java:761)
 at 
 org.apache.hadoop.hive.ql.io.orc.EncodedReaderImpl.readEncodedColumns(EncodedReaderImpl.java:441)
 at 
 org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:294)
 at 
 

[jira] [Commented] (HIVE-10482) LLAP: AsertionError cannot allocate when reading from orc

2015-04-30 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522282#comment-14522282
 ] 

Siddharth Seth commented on HIVE-10482:
---

The default. 1 GB I believe.

 LLAP: AsertionError cannot allocate when reading from orc
 -

 Key: HIVE-10482
 URL: https://issues.apache.org/jira/browse/HIVE-10482
 Project: Hive
  Issue Type: Sub-task
Reporter: Siddharth Seth
Assignee: Sergey Shelukhin
 Fix For: llap



[jira] [Updated] (HIVE-10559) IndexOutOfBoundsException with RemoveDynamicPruningBySize

2015-04-30 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-10559:
-
Attachment: q85.q

 IndexOutOfBoundsException with RemoveDynamicPruningBySize
 -

 Key: HIVE-10559
 URL: https://issues.apache.org/jira/browse/HIVE-10559
 Project: Hive
  Issue Type: Bug
  Components: Tez
Affects Versions: 1.2.0
Reporter: Wei Zheng
Assignee: Wei Zheng
 Attachments: q85.q


 The problem can be reproduced by running the script attached.
 Backtrace
 {code}
 2015-04-29 10:34:36,390 ERROR [main]: ql.Driver 
 (SessionState.java:printError(956)) - FAILED: IndexOutOfBoundsException 
 Index: 0, Size: 0
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
   at java.util.ArrayList.rangeCheck(ArrayList.java:635)
   at java.util.ArrayList.get(ArrayList.java:411)
   at 
 org.apache.hadoop.hive.ql.optimizer.RemoveDynamicPruningBySize.process(RemoveDynamicPruningBySize.java:61)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:95)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:79)
   at 
 org.apache.hadoop.hive.ql.lib.ForwardWalker.walk(ForwardWalker.java:77)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:110)
   at 
 org.apache.hadoop.hive.ql.parse.TezCompiler.runStatsDependentOptimizations(TezCompiler.java:281)
   at 
 org.apache.hadoop.hive.ql.parse.TezCompiler.optimizeOperatorPlan(TezCompiler.java:123)
   at 
 org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:102)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10092)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:9932)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:227)
   at 
 org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:74)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:227)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:424)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:308)
   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1122)
   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1170)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:311)
   at 
 org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1026)
   at 
 org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:1000)
   at 
 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.runTest(TestMiniTezCliDriver.java:139)
   at 
 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_q85(TestMiniTezCliDriver.java:123)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at junit.framework.TestCase.runTest(TestCase.java:176)
   at junit.framework.TestCase.runBare(TestCase.java:141)
   at junit.framework.TestResult$1.protect(TestResult.java:122)
   at junit.framework.TestResult.runProtected(TestResult.java:142)
   at junit.framework.TestResult.run(TestResult.java:125)
   at junit.framework.TestCase.run(TestCase.java:129)
   at junit.framework.TestSuite.runTest(TestSuite.java:255)
   at junit.framework.TestSuite.run(TestSuite.java:250)
   at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
   at 
 

[jira] [Updated] (HIVE-7428) OrcSplit fails to account for columnar projections in its size estimates

2015-04-30 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-7428:

Attachment: HIVE-7428.1.patch

Same trunk patch from HIVE-10397. I might need to rebase it to master. 

 OrcSplit fails to account for columnar projections in its size estimates
 

 Key: HIVE-7428
 URL: https://issues.apache.org/jira/browse/HIVE-7428
 Project: Hive
  Issue Type: Bug
Reporter: Gopal V
Assignee: Gopal V
 Attachments: HIVE-7428.1.patch


 Currently, ORC generates splits based on stripe offset + stripe length.
 This means that the splits for all columnar projections are exactly the same 
 size, despite reading the footer which gives the estimated sizes for each 
 column.
 This is a hold-out from FileSplit which uses getLen() as the I/O cost of 
 reading a file in a map-task.
 RCFile didn't have a footer with column statistics information, but for ORC 
 this would be extremely useful to reduce task overheads when processing 
 extremely wide tables with highly selective column projections.
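 The improvement suggested here amounts to summing the footer's per-column size
 estimates for only the projected columns, rather than charging every split the
 full stripe length. A rough sketch under assumed data structures (the names
 and the fallback behavior are illustrative, not the actual OrcSplit API):
 {code}
def projected_split_length(stripe_len, column_sizes, projected):
    """Estimate the I/O cost of a split under a columnar projection.

    stripe_len:   raw stripe byte length (the old getLen() behavior).
    column_sizes: {column_name: estimated_bytes} from the ORC footer.
    projected:    set of column names the query actually reads.
    Falls back to stripe_len when no column statistics are available.
    """
    if not column_sizes:
        return stripe_len
    return sum(size for col, size in column_sizes.items() if col in projected)
 {code}
 With highly selective projections this shrinks the reported split size, which
 in turn lets the scheduler pack more splits per task.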





[jira] [Commented] (HIVE-10502) Cannot specify log4j.properties file location in Beeline

2015-04-30 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522253#comment-14522253
 ] 

Szehon Ho commented on HIVE-10502:
--

OK, it's fine for now; I was just trying to understand the motivation. Thanks.

 Cannot specify log4j.properties file location in Beeline
 

 Key: HIVE-10502
 URL: https://issues.apache.org/jira/browse/HIVE-10502
 Project: Hive
  Issue Type: Bug
  Components: Beeline
Affects Versions: 1.1.0
Reporter: Szehon Ho
Assignee: Chaoyu Tang

 In HiveCLI, HiveServer2, HMS, etc, the following is called early in the 
 startup to initialize log4j logging: LogUtils.initHiveLog4j().
 However, seems like this is not the case in Beeline, which also needs log4j 
 like as follows:
 {noformat}
   at org.apache.log4j.LogManager.&lt;clinit&gt;(LogManager.java:127)
   at org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:66)
   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:270)
   at 
 org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:156)
   at 
 org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:132)
   at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:657)
   at org.apache.hadoop.util.VersionInfo.&lt;clinit&gt;(VersionInfo.java:37)
 {noformat}
 It would be good to specify it, so it doesn't pick the first one in the 
 classpath.





[jira] [Commented] (HIVE-10541) Beeline requires newline at the end of each query in a file

2015-04-30 Thread Chaoyu Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522262#comment-14522262
 ] 

Chaoyu Tang commented on HIVE-10541:


Thanks [~thejas]. I will look into adding a test case for that.

 Beeline requires newline at the end of each query in a file
 ---

 Key: HIVE-10541
 URL: https://issues.apache.org/jira/browse/HIVE-10541
 Project: Hive
  Issue Type: Bug
  Components: Beeline
Affects Versions: 0.13.1
Reporter: Chaoyu Tang
Assignee: Chaoyu Tang
Priority: Minor
 Attachments: HIVE-10541.patch


 Beeline requires newline at the end of each query in a file.





[jira] [Updated] (HIVE-10497) Upgrade hive branch to latest Tez

2015-04-30 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-10497:
---
Attachment: HIVE-10497.1.patch

Test report has been lost.

 Upgrade hive branch to latest Tez
 -

 Key: HIVE-10497
 URL: https://issues.apache.org/jira/browse/HIVE-10497
 Project: Hive
  Issue Type: Improvement
  Components: Tez
Affects Versions: 1.2.0
Reporter: Gopal V
Assignee: Gopal V
 Attachments: HIVE-10497.1.patch, HIVE-10497.1.patch


 Upgrade hive to the upcoming tez-0.7 release 





[jira] [Commented] (HIVE-7475) Beeline requires newline at the end of each query in a file

2015-04-30 Thread Chaoyu Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522076#comment-14522076
 ] 

Chaoyu Tang commented on HIVE-7475:
---

[~thejas], [~navis] Looks like the issue still exists in Hive 1.2 and is due 
to a jline2 bug (https://github.com/jline/jline/issues/10). I worked around it 
in Beeline; please review the fix in HIVE-10541. Thanks.

 Beeline requires newline at the end of each query in a file
 ---

 Key: HIVE-7475
 URL: https://issues.apache.org/jira/browse/HIVE-7475
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.1
Reporter: thomas norden
Priority: Trivial
 Fix For: 0.14.0


 When using the -f option in Beeline, a newline is required at the end of each 
 query; otherwise the connection is closed before the query is run.
 {code}
 $ cat test.hql
 show databases;%
 $ beeline -u jdbc:hive2://localhost:1 --incremental=true -f test.hql
 scan complete in 3ms
 Connecting to jdbc:hive2://localhost:1
 Connected to: Apache Hive (version 0.13.1)
 Driver: Hive JDBC (version 0.13.1)
 Transaction isolation: TRANSACTION_REPEATABLE_READ
 Beeline version 0.13.1 by Apache Hive
 0: jdbc:hive2://localhost:1 show databases;Closing: 0: 
 jdbc:hive2://localhost:1
 {code}
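 Until the underlying jline issue is fixed, one client-side workaround is to
 make sure the script content ends with a newline before it is handed to the
 reader. A minimal sketch of that normalization (illustrative; the actual
 HIVE-10541 patch is in Beeline's Java code):
 {code}
def ensure_trailing_newline(script_text):
    """Append a newline if the script does not already end with one,
    so the final statement is not swallowed by the line reader."""
    return script_text if script_text.endswith("\n") else script_text + "\n"
 {code}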





[jira] [Updated] (HIVE-9508) MetaStore client socket connection should have a lifetime

2015-04-30 Thread Thiruvel Thirumoolan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiruvel Thirumoolan updated HIVE-9508:
---
Attachment: HIVE-9508.5.patch

Uploading patch with connection lifetime disabled by default.

 MetaStore client socket connection should have a lifetime
 -

 Key: HIVE-9508
 URL: https://issues.apache.org/jira/browse/HIVE-9508
 Project: Hive
  Issue Type: Sub-task
  Components: CLI, Metastore
Reporter: Thiruvel Thirumoolan
Assignee: Thiruvel Thirumoolan
  Labels: metastore, rolling_upgrade
 Fix For: 1.2.0

 Attachments: HIVE-9508.1.patch, HIVE-9508.2.patch, HIVE-9508.3.patch, 
 HIVE-9508.4.patch, HIVE-9508.5.patch


 Currently HiveMetaStoreClient (or SessionHMSC) stays connected to one Metastore 
 server until the connection is closed or there is a problem. I would like to 
 introduce the concept of a MetaStore client socket lifetime. The MS client 
 will reconnect once the socket lifetime is reached. This will help during 
 rolling upgrades of the Metastore.
 When there are multiple Metastore servers behind a VIP (load balancer), a 
 server can be taken out of rotation and updated after waiting 10+ minutes for 
 all existing connections to die down (assuming a lifetime of, say, 5 minutes).
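 The lifetime idea can be sketched as: record the connect time, and before each
 call reconnect if the socket has outlived a configured lifetime. A hedged
 illustration (class and parameter names are invented for this sketch; the
 patch's actual wiring in HiveMetaStoreClient differs):
 {code}
import time

class LifetimeClient:
    """Illustrative metastore-client wrapper with a socket lifetime.
    lifetime_s <= 0 disables the behavior (the patch's default)."""

    def __init__(self, connect_fn, lifetime_s=0):
        self.connect_fn = connect_fn
        self.lifetime_s = lifetime_s
        self._connect()

    def _connect(self):
        self.conn = self.connect_fn()
        self.connected_at = time.monotonic()

    def call(self, fn):
        # Reconnect first if the connection outlived its lifetime, so a
        # server taken out of the VIP drains within ~lifetime seconds.
        if self.lifetime_s > 0 and time.monotonic() - self.connected_at > self.lifetime_s:
            self._connect()
        return fn(self.conn)
 {code}
 Because reconnection goes back through the load balancer, new sockets land on
 the remaining in-rotation servers.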





[jira] [Commented] (HIVE-10470) LLAP: NPE in IO when returning 0 rows with no projection

2015-04-30 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522102#comment-14522102
 ] 

Sergey Shelukhin commented on HIVE-10470:
-

This patch appears to have broken the .out file; the result changed for PPD with 
no projection. Can you please revert?
I think the fix needs to be on the other side.

 LLAP: NPE in IO when returning 0 rows with no projection
 

 Key: HIVE-10470
 URL: https://issues.apache.org/jira/browse/HIVE-10470
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Prasanth Jayachandran
 Fix For: llap

 Attachments: HIVE-10470.1.patch


 Looks like a trivial fix, unless I'm missing something. I may do it later if 
 you don't ;)
 {noformat}
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.hive.ql.io.orc.EncodedTreeReaderFactory.createEncodedTreeReader(EncodedTreeReaderFactory.java:1764)
   at 
 org.apache.hadoop.hive.llap.io.decode.OrcEncodedDataConsumer.decodeBatch(OrcEncodedDataConsumer.java:92)
   at 
 org.apache.hadoop.hive.llap.io.decode.OrcEncodedDataConsumer.decodeBatch(OrcEncodedDataConsumer.java:39)
   at 
 org.apache.hadoop.hive.llap.io.decode.EncodedDataConsumer.consumeData(EncodedDataConsumer.java:116)
   at 
 org.apache.hadoop.hive.llap.io.decode.EncodedDataConsumer.consumeData(EncodedDataConsumer.java:36)
   at 
 org.apache.hadoop.hive.ql.io.orc.EncodedReaderImpl.readEncodedColumns(EncodedReaderImpl.java:329)
   at 
 org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:299)
   at 
 org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:55)
   at 
 org.apache.hadoop.hive.common.CallableWithNdc.call(CallableWithNdc.java:37)
   ... 4 more
 {noformat}
 Running q file
 {noformat}
 SET hive.vectorized.execution.enabled=true;
 SET hive.llap.io.enabled=false;
 SET hive.exec.orc.default.row.index.stride=1000;
 SET hive.optimize.index.filter=true;
 DROP TABLE orc_llap;
 CREATE TABLE orc_llap(
 ctinyint TINYINT,
 csmallint SMALLINT,
 cint INT,
 cbigint BIGINT,
 cfloat FLOAT,
 cdouble DOUBLE,
 cstring1 STRING,
 cstring2 STRING,
 ctimestamp1 TIMESTAMP,
 ctimestamp2 TIMESTAMP,
 cboolean1 BOOLEAN,
 cboolean2 BOOLEAN)
 STORED AS ORC tblproperties ("orc.compress"="ZLIB");
 insert into table orc_llap
 select ctinyint, csmallint, cint, cbigint, cfloat, cdouble, cstring1, 
 cstring2, ctimestamp1, ctimestamp2, cboolean1, cboolean2
 from alltypesorc limit 10;
 SET hive.llap.io.enabled=true;
 select count(*) from orc_llap where cint &lt; 6000;
 DROP TABLE orc_llap;
 {noformat}





[jira] [Commented] (HIVE-5672) Insert with custom separator not supported for non-local directory

2015-04-30 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522099#comment-14522099
 ] 

Sushanth Sowmyan commented on HIVE-5672:


I see that the .8.patch has both changes and has passed all the tests. I'm 
+1 on .8.patch.

 Insert with custom separator not supported for non-local directory
 --

 Key: HIVE-5672
 URL: https://issues.apache.org/jira/browse/HIVE-5672
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0, 1.0.0
Reporter: Romain Rigaux
Assignee: Nemon Lou
 Attachments: HIVE-5672.1.patch, HIVE-5672.2.patch, HIVE-5672.3.patch, 
 HIVE-5672.4.patch, HIVE-5672.5.patch, HIVE-5672.5.patch.tar.gz, 
 HIVE-5672.6.patch, HIVE-5672.6.patch.tar.gz, HIVE-5672.7.patch, 
 HIVE-5672.7.patch.tar.gz, HIVE-5672.8.patch, HIVE-5672.8.patch.tar.gz


 https://issues.apache.org/jira/browse/HIVE-3682 is great, but non-local 
 directories don't seem to be supported:
 {code}
 insert overwrite directory '/tmp/test-02'
 row format delimited
 FIELDS TERMINATED BY ':'
 select description FROM sample_07
 {code}
 {code}
 Error while compiling statement: FAILED: ParseException line 2:0 cannot 
 recognize input near 'row' 'format' 'delimited' in select clause
 {code}
 This works (with 'local'):
 {code}
 insert overwrite local directory '/tmp/test-02'
 row format delimited
 FIELDS TERMINATED BY ':'
 select code, description FROM sample_07
 {code}





[jira] [Reopened] (HIVE-10470) LLAP: NPE in IO when returning 0 rows with no projection

2015-04-30 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reopened HIVE-10470:
-

 LLAP: NPE in IO when returning 0 rows with no projection
 

 Key: HIVE-10470
 URL: https://issues.apache.org/jira/browse/HIVE-10470
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Prasanth Jayachandran
 Fix For: llap

 Attachments: HIVE-10470.1.patch







[jira] [Commented] (HIVE-10541) Beeline requires newline at the end of each query in a file

2015-04-30 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522197#comment-14522197
 ] 

Thejas M Nair commented on HIVE-10541:
--

[~ctang.ma] thanks for the patch!
The fix looks good to me, but I think it could use a simple test case in 
TestBeeLineWithArgs as well.


 Beeline requires newline at the end of each query in a file
 ---

 Key: HIVE-10541
 URL: https://issues.apache.org/jira/browse/HIVE-10541
 Project: Hive
  Issue Type: Bug
  Components: Beeline
Affects Versions: 0.13.1
Reporter: Chaoyu Tang
Assignee: Chaoyu Tang
Priority: Minor
 Attachments: HIVE-10541.patch


 Beeline requires newline at the end of each query in a file.





[jira] [Commented] (HIVE-10519) Move TestGenericUDF classes to udf.generic package

2015-04-30 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522194#comment-14522194
 ] 

Jason Dere commented on HIVE-10519:
---

+1

 Move TestGenericUDF classes to udf.generic package
 --

 Key: HIVE-10519
 URL: https://issues.apache.org/jira/browse/HIVE-10519
 Project: Hive
  Issue Type: Bug
  Components: UDF
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
Priority: Trivial
 Attachments: HIVE-10519.1.patch, HIVE-10519.2.patch


 The following TestGenericUDF classes are located in udf package instead of 
 udf.generic.
 {code}
 TestGenericUDFDate.java
 TestGenericUDFDateAdd.java
 TestGenericUDFDateDiff.java
 TestGenericUDFDateSub.java
 TestGenericUDFUtils.java
 {code}





[jira] [Commented] (HIVE-10544) Beeline/Hive JDBC Driver fails in HTTP mode on Windows with java.lang.NoSuchFieldError: INSTANCE

2015-04-30 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1457#comment-1457
 ] 

Thejas M Nair commented on HIVE-10544:
--

+1 for 2.patch


 Beeline/Hive JDBC Driver fails in HTTP mode on Windows with 
 java.lang.NoSuchFieldError: INSTANCE
 

 Key: HIVE-10544
 URL: https://issues.apache.org/jira/browse/HIVE-10544
 Project: Hive
  Issue Type: Bug
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Attachments: HIVE-10544.1.patch, HIVE-10544.2.patch


 NO PRECOMMIT TESTS
 This appears to be caused by a dependency version mismatch with httpcore on 
 Beeline's classpath.
 We will probably also need to change beeline.cmd to include the equivalent of 
 export HADOOP_USER_CLASSPATH_FIRST=true.





[jira] [Updated] (HIVE-10556) ORC PPD schema on read related changes

2015-04-30 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-10556:
-
Affects Version/s: 1.2.0

 ORC PPD schema on read related changes
 --

 Key: HIVE-10556
 URL: https://issues.apache.org/jira/browse/HIVE-10556
 Project: Hive
  Issue Type: Bug
Affects Versions: 1.2.0, 1.3.0
Reporter: Prasanth Jayachandran
Assignee: Gopal V

 Follow-up for HIVE-10286. Some fixes need to be made for schema on read. 
 For example, a Predicate.STRING with value 15 and integer min/max stats of 10/100 
 should return a YES_NO truth value.





[jira] [Commented] (HIVE-10455) CBO (Calcite Return Path): Different data types at Reducer before JoinOp

2015-04-30 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522081#comment-14522081
 ] 

Pengcheng Xiong commented on HIVE-10455:


As per [~jpullokkaran]'s request, [~ashutoshc], could you plz review it? Thanks.

 CBO (Calcite Return Path): Different data types at Reducer before JoinOp
 

 Key: HIVE-10455
 URL: https://issues.apache.org/jira/browse/HIVE-10455
 Project: Hive
  Issue Type: Sub-task
  Components: CBO
Reporter: Pengcheng Xiong
Assignee: Pengcheng Xiong
 Fix For: 1.2.0

 Attachments: HIVE-10455.01.patch, HIVE-10455.02.patch


 The following error occured for cbo_subq_not_in.q 
 {code}
 java.lang.Exception: java.lang.RuntimeException: 
 org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error: Unable 
 to deserialize reduce input key from \x1\x128\x0\x0\x1 with properties 
 {columns=reducesinkkey0, 
 serialization.lib=org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe,
  serialization.sort.order=+, columns.types=double}
 at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
 at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
 {code}
 An easier way to reproduce it is 
 {code}
 set hive.cbo.enable=true;
 set hive.exec.check.crossproducts=false;
 set hive.stats.fetch.column.stats=true;
 set hive.auto.convert.join=false;
 select p_size, src.key
 from 
 part join src
 on p_size=key;
 {code}
 As you can see, p_size is an integer while src.key is a string. Both of them 
 should be cast to double when they are joined. When the return path is off, this 
 happens before the Join, at the ReduceSink (RS). However, when the return path is 
 on, the cast is treated as an expression inside the Join. Thus, when the reducer 
 collects keys of different types from the different join branches, it throws an exception.
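The implicit cast described above can be sketched outside Hive; `toComparisonDouble` below is a hypothetical helper standing in for the planner's int-vs-string comparison rule (Hive compares such pairs as double), not actual Hive code:

```java
// Sketch (not Hive's real planner code): when an int column is compared
// to a string column, both sides are implicitly cast to double first.
public class CommonTypeSketch {
    // Hypothetical helper mirroring the int-vs-string -> double rule.
    static double toComparisonDouble(Object value) {
        return Double.parseDouble(String.valueOf(value));
    }

    public static void main(String[] args) {
        int pSize = 15;        // part.p_size (int)
        String srcKey = "15";  // src.key (string)
        // Both join keys must be serialized as the SAME type (double) at the
        // ReduceSink, or the reducer cannot deserialize the mixed keys.
        System.out.println(toComparisonDouble(pSize) == toComparisonDouble(srcKey));
    }
}
```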





[jira] [Updated] (HIVE-10384) RetryingMetaStoreClient does not retry wrapped TTransportExceptions

2015-04-30 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-10384:

Fix Version/s: (was: 1.3.0)
   1.2.0

 RetryingMetaStoreClient does not retry wrapped TTransportExceptions
 ---

 Key: HIVE-10384
 URL: https://issues.apache.org/jira/browse/HIVE-10384
 Project: Hive
  Issue Type: Bug
  Components: Clients
Reporter: Eric Liang
Assignee: Chaoyu Tang
 Fix For: 1.2.0

 Attachments: HIVE-10384.1.patch, HIVE-10384.patch


 This bug is very similar to HIVE-9436, in that a TTransportException wrapped 
 in a MetaException will not be retried. RetryingMetaStoreClient has a block 
 of code above the MetaException handler that retries thrift exceptions, but 
 this doesn't work when the exception is wrapped.
 {code}
 if ((e.getCause() instanceof TApplicationException) ||
 (e.getCause() instanceof TProtocolException) ||
 (e.getCause() instanceof TTransportException)) {
   caughtException = (TException) e.getCause();
 } else if ((e.getCause() instanceof MetaException) &&
 e.getCause().getMessage().matches("(?s).*JDO[a-zA-Z]*Exception.*")) {
   caughtException = (MetaException) e.getCause();
 {code}
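The handler above inspects only the immediate cause, so a TTransportException nested one level deeper is missed. A sketch of walking the whole cause chain instead, using stand-in exception classes rather than the real Thrift/metastore types:

```java
// Sketch with stand-in classes; the real code deals with Thrift's
// TTransportException wrapped inside the metastore's MetaException.
public class RetryCauseSketch {
    static class TTransportException extends Exception {}
    static class MetaException extends Exception {
        MetaException(Throwable cause) { super(cause); }
    }

    // Walk the full cause chain looking for a retryable transport error,
    // instead of checking only e.getCause().
    static boolean isRetryable(Throwable e) {
        for (Throwable t = e; t != null; t = t.getCause()) {
            if (t instanceof TTransportException) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Exception wrapped = new MetaException(new TTransportException());
        System.out.println(isRetryable(wrapped)); // found via the cause chain
    }
}
```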





[jira] [Commented] (HIVE-10502) Cannot specify log4j.properties file location in Beeline

2015-04-30 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522214#comment-14522214
 ] 

Thejas M Nair commented on HIVE-10502:
--

In general, I feel the fewer knobs the better. But if you strongly feel a need 
for that knob, feel free to add it.


 Cannot specify log4j.properties file location in Beeline
 

 Key: HIVE-10502
 URL: https://issues.apache.org/jira/browse/HIVE-10502
 Project: Hive
  Issue Type: Bug
  Components: Beeline
Affects Versions: 1.1.0
Reporter: Szehon Ho
Assignee: Chaoyu Tang

 In HiveCLI, HiveServer2, HMS, etc., the following is called early in 
 startup to initialize log4j logging: LogUtils.initHiveLog4j().
 However, it seems this is not the case in Beeline, which also needs log4j, 
 as the following stack trace shows:
 {noformat}
   at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
   at org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:66)
   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:270)
   at 
 org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:156)
   at 
 org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:132)
   at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:657)
  org.apache.hadoop.util.VersionInfo.<clinit>(VersionInfo.java:37)
 {noformat}
 It would be good to be able to specify it, so Beeline doesn't just pick the 
 first one on the classpath.
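For reference, log4j 1.x locates its configuration via the standard `log4j.configuration` system property, which must be set before the first logger is created; the file path below is purely illustrative:

```java
// Minimal sketch: point log4j 1.x at an explicit properties file instead of
// letting it scan the classpath. The path here is a hypothetical example.
public class Log4jLocationSketch {
    public static void main(String[] args) {
        // Must be set before the first LogManager access; log4j 1.x reads
        // this standard system property to locate its configuration file.
        System.setProperty("log4j.configuration",
                "file:/etc/hive/conf/beeline-log4j.properties");
        System.out.println(System.getProperty("log4j.configuration"));
    }
}
```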





[jira] [Updated] (HIVE-10565) LLAP: Native Vector Map Join doesn't handle filtering and matching on LEFT OUTER JOIN repeated key correctly

2015-04-30 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-10565:

Attachment: HIVE-10565.01.patch

 LLAP: Native Vector Map Join doesn't handle filtering and matching on LEFT 
 OUTER JOIN repeated key correctly
 

 Key: HIVE-10565
 URL: https://issues.apache.org/jira/browse/HIVE-10565
 Project: Hive
  Issue Type: Sub-task
  Components: Hive
Affects Versions: 1.2.0
Reporter: Matt McCline
Assignee: Matt McCline
Priority: Critical
 Fix For: 1.2.0, 1.3.0

 Attachments: HIVE-10565.01.patch


 Filtering can knock out some of the rows for a repeated key, but those 
 knocked-out rows need to be included in the LEFT OUTER JOIN result, and 
 currently they are not when only some rows are filtered out.
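The expected semantics can be illustrated outside Hive: with an ON-clause filter, a left row whose matches are all knocked out must still be emitted with null padding. A minimal sketch (all names and the filter are illustrative, not Hive's operator code):

```java
import java.util.ArrayList;
import java.util.List;

public class LeftOuterJoinSketch {
    // Minimal LEFT OUTER JOIN: every left row appears in the result, with
    // null padding when the ON-clause filter removes all of its matches.
    static List<String> leftOuterJoin(List<Integer> left, List<Integer> right,
                                      int filterThreshold) {
        List<String> out = new ArrayList<>();
        for (int l : left) {
            boolean matched = false;
            for (int r : right) {
                // ON l = r AND r < filterThreshold
                if (l == r && r < filterThreshold) {
                    out.add(l + "," + r);
                    matched = true;
                }
            }
            if (!matched) {
                out.add(l + ",null"); // filtered-out rows must still appear
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Repeated key 5 on the left; the filter (r < 5) knocks out its match,
        // but both left rows must still be emitted with null padding.
        System.out.println(leftOuterJoin(List.of(5, 5, 3), List.of(5, 3), 5));
    }
}
```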





[jira] [Resolved] (HIVE-10455) CBO (Calcite Return Path): Different data types at Reducer before JoinOp

2015-04-30 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan resolved HIVE-10455.
-
Resolution: Fixed

Committed to 1.2 & master. Thanks, Pengcheng!

 CBO (Calcite Return Path): Different data types at Reducer before JoinOp
 

 Key: HIVE-10455
 URL: https://issues.apache.org/jira/browse/HIVE-10455
 Project: Hive
  Issue Type: Sub-task
  Components: CBO
Reporter: Pengcheng Xiong
Assignee: Pengcheng Xiong
 Fix For: 1.2.0

 Attachments: HIVE-10455.01.patch, HIVE-10455.02.patch, 
 HIVE-10455.03.patch


 The following error occured for cbo_subq_not_in.q 
 {code}
 java.lang.Exception: java.lang.RuntimeException: 
 org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error: Unable 
 to deserialize reduce input key from \x1\x128\x0\x0\x1 with properties 
 {columns=reducesinkkey0, 
 serialization.lib=org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe,
  serialization.sort.order=+, columns.types=double}
 at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
 at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
 {code}
 An easier way to reproduce it is 
 {code}
 set hive.cbo.enable=true;
 set hive.exec.check.crossproducts=false;
 set hive.stats.fetch.column.stats=true;
 set hive.auto.convert.join=false;
 select p_size, src.key
 from 
 part join src
 on p_size=key;
 {code}
 As you can see, p_size is an integer while src.key is a string. Both of them 
 should be cast to double when they are joined. When the return path is off, this 
 happens before the Join, at the ReduceSink (RS). However, when the return path is 
 on, the cast is treated as an expression inside the Join. Thus, when the reducer 
 collects keys of different types from the different join branches, it throws an exception.





[jira] [Updated] (HIVE-10565) LLAP: Native Vector Map Join doesn't handle filtering and matching on LEFT OUTER JOIN repeated key correctly

2015-04-30 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-10565:

Summary: LLAP: Native Vector Map Join doesn't handle filtering and matching 
on LEFT OUTER JOIN repeated key correctly  (was: LLAP: Native Vector Map Join 
doesn't handle filtering and matching on repeated key correctly)

 LLAP: Native Vector Map Join doesn't handle filtering and matching on LEFT 
 OUTER JOIN repeated key correctly
 

 Key: HIVE-10565
 URL: https://issues.apache.org/jira/browse/HIVE-10565
 Project: Hive
  Issue Type: Sub-task
  Components: Hive
Affects Versions: 1.2.0
Reporter: Matt McCline
Assignee: Matt McCline
Priority: Critical
 Fix For: 1.2.0, 1.3.0


 Filtering can knock out some of the rows for a repeated key, but those 
 knocked-out rows need to be included in the LEFT OUTER JOIN result, and 
 currently they are not when only some rows are filtered out.





[jira] [Commented] (HIVE-10516) Measure Hive CLI's performance difference before and after implementation is switched

2015-04-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522801#comment-14522801
 ] 

Hive QA commented on HIVE-10516:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12729000/HIVE-10516.patch

{color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 8830 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver-bucketmapjoin6.q-constprog_partitioner.q-infer_bucket_sort_dyn_part.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-external_table_with_space_in_location_path.q-infer_bucket_sort_merge.q-auto_sortmerge_join_16.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-groupby2.q-import_exported_table.q-bucketizedhiveinputformat.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-index_bitmap3.q-stats_counter_partitioned.q-temp_table_external.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_map_operators.q-join1.q-bucketmapjoin7.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_num_buckets.q-disable_merge_for_bucketing.q-uber_reduce.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_reducers_power_two.q-scriptfile1.q-scriptfile1_win.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-leftsemijoin_mr.q-load_hdfs_file_with_space_in_the_name.q-root_dir_external_table.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-list_bucket_dml_10.q-bucket_num_reducers.q-bucket6.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-load_fs2.q-file_with_header_footer.q-ql_rewrite_gbtoidx_cbo_1.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-parallel_orderby.q-reduce_deduplicate.q-ql_rewrite_gbtoidx_cbo_2.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-ql_rewrite_gbtoidx.q-smb_mapjoin_8.q - did not produce a 
TEST-*.xml file
TestMinimrCliDriver-schemeAuthority2.q-bucket4.q-input16_cc.q-and-1-more - did 
not produce a TEST-*.xml file
org.apache.hive.jdbc.TestSSL.testSSLConnectionWithProperty
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3675/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3675/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3675/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 14 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12729000 - PreCommit-HIVE-TRUNK-Build

 Measure Hive CLI's performance difference before and after implementation is 
 switched
 -

 Key: HIVE-10516
 URL: https://issues.apache.org/jira/browse/HIVE-10516
 Project: Hive
  Issue Type: Sub-task
  Components: CLI
Affects Versions: 0.10.0
Reporter: Xuefu Zhang
Assignee: Ferdinand Xu
 Attachments: HIVE-10516.patch








[jira] [Commented] (HIVE-10520) LLAP: Must reset small table result columns for Native Vectorization of Map Join

2015-04-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522751#comment-14522751
 ] 

Hive QA commented on HIVE-10520:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12729662/HIVE-10520.04.patch

{color:red}ERROR:{color} -1 due to 16 failed/errored test(s), 8828 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver-bucketmapjoin6.q-constprog_partitioner.q-infer_bucket_sort_dyn_part.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-external_table_with_space_in_location_path.q-infer_bucket_sort_merge.q-auto_sortmerge_join_16.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-groupby2.q-import_exported_table.q-bucketizedhiveinputformat.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-index_bitmap3.q-stats_counter_partitioned.q-temp_table_external.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_map_operators.q-join1.q-bucketmapjoin7.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_num_buckets.q-disable_merge_for_bucketing.q-uber_reduce.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_reducers_power_two.q-scriptfile1.q-scriptfile1_win.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-leftsemijoin_mr.q-load_hdfs_file_with_space_in_the_name.q-root_dir_external_table.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-list_bucket_dml_10.q-bucket_num_reducers.q-bucket6.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-load_fs2.q-file_with_header_footer.q-ql_rewrite_gbtoidx_cbo_1.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-parallel_orderby.q-reduce_deduplicate.q-ql_rewrite_gbtoidx_cbo_2.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-ql_rewrite_gbtoidx.q-smb_mapjoin_8.q - did not produce a 
TEST-*.xml file
TestMinimrCliDriver-schemeAuthority2.q-bucket4.q-input16_cc.q-and-1-more - did 
not produce a TEST-*.xml file
org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler.org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler
org.apache.hive.jdbc.TestSSL.testSSLConnectionWithProperty
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3674/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3674/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3674/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 16 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12729662 - PreCommit-HIVE-TRUNK-Build

 LLAP: Must reset small table result columns for Native Vectorization of Map 
 Join
 

 Key: HIVE-10520
 URL: https://issues.apache.org/jira/browse/HIVE-10520
 Project: Hive
  Issue Type: Sub-task
  Components: Vectorization
Affects Versions: 1.2.0
Reporter: Matt McCline
Assignee: Matt McCline
Priority: Blocker
 Fix For: 1.2.0, 1.3.0

 Attachments: HIVE-10520.01.patch, HIVE-10520.02.patch, 
 HIVE-10520.03.patch, HIVE-10520.04.patch


 Scratch columns are not getting reset by the input source, so native vector map join 
 operators must manually reset the small table result columns.





[jira] [Commented] (HIVE-9152) Dynamic Partition Pruning [Spark Branch]

2015-04-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522316#comment-14522316
 ] 

Hive QA commented on HIVE-9152:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12729595/HIVE-9152.6-spark.patch

{color:red}ERROR:{color} -1 due to 38 failed/errored test(s), 8723 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver-bucket6.q-scriptfile1_win.q-quotedid_smb.q-and-1-more - did 
not produce a TEST-*.xml file
TestMinimrCliDriver-bucketizedhiveinputformat.q-empty_dir_in_table.q - did not 
produce a TEST-*.xml file
TestMinimrCliDriver-groupby2.q-infer_bucket_sort_map_operators.q-load_hdfs_file_with_space_in_the_name.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-import_exported_table.q-truncate_column_buckets.q-bucket_num_reducers2.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-index_bitmap3.q-infer_bucket_sort_num_buckets.q-parallel_orderby.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_reducers_power_two.q-join1.q-infer_bucket_sort_bucketed_table.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-leftsemijoin_mr.q-bucket5.q-infer_bucket_sort_merge.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-list_bucket_dml_10.q-input16_cc.q-temp_table_external.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-ql_rewrite_gbtoidx.q-bucket_num_reducers.q-scriptfile1.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-ql_rewrite_gbtoidx_cbo_2.q-bucketmapjoin6.q-bucket4.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-reduce_deduplicate.q-infer_bucket_sort_dyn_part.q-udf_using.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-schemeAuthority2.q-uber_reduce.q-ql_rewrite_gbtoidx_cbo_1.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-stats_counter_partitioned.q-external_table_with_space_in_location_path.q-disable_merge_for_bucketing.q-and-1-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_spark_dynamic_partition_pruning
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_spark_dynamic_partition_pruning_2
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucket4
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_bucket2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_bucket3
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_bucket4
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_column_access_stats
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_limit_partition_metadataonly
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_list_bucket_dml_2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_optimize_nullscan
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_pcr
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_sample3
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_sample9
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_smb_mapjoin_11
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_temp_table
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_udf_example_add
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_udf_in_file
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_union_view
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vector_elt
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vector_string_concat
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vectorization_decimal_date
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vectorization_div0
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vectorized_case
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vectorized_math_funcs
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vectorized_string_funcs
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/848/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/848/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-848/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 38 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12729595 - PreCommit-HIVE-SPARK-Build

 Dynamic Partition Pruning [Spark Branch]
 

 Key: HIVE-9152
  

[jira] [Commented] (HIVE-10444) HIVE-10223 breaks hadoop-1 build

2015-04-30 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522333#comment-14522333
 ] 

Chris Nauroth commented on HIVE-10444:
--

[~apivovarov], thank you for the review.

I reran the failed tests locally, and they all passed.

I also tried running the same tests with {{-Phadoop-1}}, and they failed due to 
a {{NoSuchMethodError}} in an HDFS class.  Looking at the test classpath, I can 
see it's picking up a 2.x version of the minicluster, even though I set 
{{-Phadoop-1}}.  I don't think this is related to the current patch.

 HIVE-10223 breaks hadoop-1 build
 

 Key: HIVE-10444
 URL: https://issues.apache.org/jira/browse/HIVE-10444
 Project: Hive
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Prasanth Jayachandran
Assignee: Chris Nauroth
 Attachments: HIVE-10444.1.patch, HIVE-10444.2.patch


 FileStatus.isFile() and FileStatus.isDirectory() methods added in HIVE-10223 
 are not present in hadoop 1.





[jira] [Commented] (HIVE-10561) Tez session does not always end when CLI is ctrl-c-ed

2015-04-30 Thread Mostafa Mokhtar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522443#comment-14522443
 ] 

Mostafa Mokhtar commented on HIVE-10561:


Even after exit; this still happens.


 Tez session does not always end when CLI is ctrl-c-ed
 -

 Key: HIVE-10561
 URL: https://issues.apache.org/jira/browse/HIVE-10561
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Gunther Hagleitner

 When I run queries using 
 {noformat}
 hive -f some_file.sql 2> stderr.log
 {noformat}
 and then ctrl-c the shell as it is running the file, the Tez app does not exit 
 and can be seen in RUNNING state on YARN. Not sure if it happens every time, 
 but it's frequent enough, and the sessions need to be killed off manually via 
 yarn commands.
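A common pattern for making teardown survive a ctrl-c (SIGINT) is a JVM shutdown hook; this is a generic sketch, not the actual Tez session-management code, and `Session` is a stand-in class:

```java
public class ShutdownHookSketch {
    // Stand-in for the Tez session that must be closed on exit.
    static class Session {
        volatile boolean open = true;
        void close() { open = false; }
    }

    // SIGINT (ctrl-c) triggers the normal JVM shutdown sequence, which runs
    // registered hooks; a kill -9 does not, so hooks are best-effort only.
    static Thread registerCleanup(Session session) {
        Thread hook = new Thread(session::close, "session-cleanup");
        Runtime.getRuntime().addShutdownHook(hook);
        return hook;
    }

    public static void main(String[] args) {
        Session s = new Session();
        Thread hook = registerCleanup(s);
        // Normally the hook fires at JVM exit; remove it here so the
        // example stays side-effect free.
        Runtime.getRuntime().removeShutdownHook(hook);
    }
}
```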





[jira] [Updated] (HIVE-10564) webhcat should use webhcat-site.xml properties for controller job submission

2015-04-30 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-10564:
-
Attachment: HIVE-10564.1.patch

 webhcat should use webhcat-site.xml properties for controller job submission
 

 Key: HIVE-10564
 URL: https://issues.apache.org/jira/browse/HIVE-10564
 Project: Hive
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-10564.1.patch


 webhcat should use webhcat-site.xml in the configuration for the 
 TempletonController map-only job that it launches. This will allow users to 
 set any MR/HDFS properties they want applied to the controller job.





[jira] [Resolved] (HIVE-10488) cast DATE as TIMESTAMP returns incorrect values

2015-04-30 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang resolved HIVE-10488.

Resolution: Cannot Reproduce

[~the6campbells] We are not able to reproduce the issue, so I am resolving the 
JIRA for now. If you see any further issue, please feel free to reopen it.

 cast DATE as TIMESTAMP returns incorrect values
 ---

 Key: HIVE-10488
 URL: https://issues.apache.org/jira/browse/HIVE-10488
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 0.13.1
Reporter: N Campbell
Assignee: Chaoyu Tang

 The same data in a textfile works; the same data loaded into an ORC table 
 does not. The connection property of tez/mr makes no difference.
 select rnum, cdt, cast (cdt as timestamp) from tdt
 0 null  null
 1 1996-01-01  1969-12-31 19:00:09.496
 2 2000-01-01  1969-12-31 19:00:10.957
 3 2000-12-31  1969-12-31 19:00:11.322
 vs
 0 null  null
 1 1996-01-01  1996-01-01 00:00:00.0
 2 2000-01-01  2000-01-01 00:00:00.0
 3 2000-12-31  2000-12-31 00:00:00.0
 create table if not exists TDT (RNUM int, CDT date)
  STORED AS orc;
 insert overwrite table TDT select * from  text.TDT;
 0|\N
 1|1996-01-01
 2|2000-01-01
 3|2000-12-31
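The corrupted values are consistent with the stored day count being reinterpreted as milliseconds: 1996-01-01 is 9496 days after the epoch, and epoch + 9496 ms renders as 1969-12-31 19:00:09.496 in EST. A sketch of that arithmetic with plain java.time (no Hive/ORC code):

```java
import java.time.Instant;
import java.time.LocalDate;
import java.util.concurrent.TimeUnit;

public class DateAsMillisSketch {
    public static void main(String[] args) {
        long days = LocalDate.of(1996, 1, 1).toEpochDay();   // 9496
        // Wrong: treat the day count as milliseconds -> seconds past epoch,
        // i.e. 1970-01-01T00:00:09.496Z (19:00:09.496 the previous day in EST).
        Instant wrong = Instant.ofEpochMilli(days);
        // Right: scale days to milliseconds before building the timestamp.
        Instant right = Instant.ofEpochMilli(TimeUnit.DAYS.toMillis(days));
        System.out.println(days + " " + wrong + " " + right);
    }
}
```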





[jira] [Assigned] (HIVE-10567) partial scan for rcfile table doesn't work for dynamic partition

2015-04-30 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang reassigned HIVE-10567:
--

Assignee: Chaoyu Tang

 partial scan for rcfile table doesn't work for dynamic partition
 

 Key: HIVE-10567
 URL: https://issues.apache.org/jira/browse/HIVE-10567
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.14.0, 1.0.0
Reporter: Thomas Friedrich
Assignee: Chaoyu Tang
Priority: Minor
  Labels: rcfile

 HIVE-3958 added support for partial scan for RCFile. This works fine for 
 static partitions (for example: analyze table analyze_srcpart_partial_scan 
 PARTITION(ds='2008-04-08',hr=11) compute statistics partialscan).
 For dynamic partitions, the analyze fails with an IOException 
 java.io.IOException: No input paths specified in job:
 hive> ANALYZE TABLE testtable PARTITION(col_varchar) COMPUTE STATISTICS 
 PARTIALSCAN;
 java.io.IOException: No input paths specified in job
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.getInputPaths(HiveInputFormat.java:318)
 at 
 org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:459)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:624)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:616)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)





[jira] [Commented] (HIVE-8950) Add support in ParquetHiveSerde to create table schema from a parquet file

2015-04-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522646#comment-14522646
 ] 

Hive QA commented on HIVE-8950:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12729354/HIVE-8950.8.patch

{color:red}ERROR:{color} -1 due to 13 failed/errored test(s), 8861 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver-bucketmapjoin6.q-constprog_partitioner.q-infer_bucket_sort_dyn_part.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-external_table_with_space_in_location_path.q-infer_bucket_sort_merge.q-auto_sortmerge_join_16.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-groupby2.q-import_exported_table.q-bucketizedhiveinputformat.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-index_bitmap3.q-stats_counter_partitioned.q-temp_table_external.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_map_operators.q-join1.q-bucketmapjoin7.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_num_buckets.q-disable_merge_for_bucketing.q-uber_reduce.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_reducers_power_two.q-scriptfile1.q-scriptfile1_win.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-leftsemijoin_mr.q-load_hdfs_file_with_space_in_the_name.q-root_dir_external_table.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-list_bucket_dml_10.q-bucket_num_reducers.q-bucket6.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-load_fs2.q-file_with_header_footer.q-ql_rewrite_gbtoidx_cbo_1.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-parallel_orderby.q-reduce_deduplicate.q-ql_rewrite_gbtoidx_cbo_2.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-ql_rewrite_gbtoidx.q-smb_mapjoin_8.q - did not produce a 
TEST-*.xml file
TestMinimrCliDriver-schemeAuthority2.q-bucket4.q-input16_cc.q-and-1-more - did 
not produce a TEST-*.xml file
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3672/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3672/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3672/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 13 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12729354 - PreCommit-HIVE-TRUNK-Build

 Add support in ParquetHiveSerde to create table schema from a parquet file
 --

 Key: HIVE-8950
 URL: https://issues.apache.org/jira/browse/HIVE-8950
 Project: Hive
  Issue Type: Improvement
Reporter: Ashish K Singh
Assignee: Ashish K Singh
 Attachments: HIVE-8950.1.patch, HIVE-8950.2.patch, HIVE-8950.3.patch, 
 HIVE-8950.4.patch, HIVE-8950.5.patch, HIVE-8950.6.patch, HIVE-8950.7.patch, 
 HIVE-8950.8.patch, HIVE-8950.patch


 PARQUET-76 and PARQUET-47 ask for creating parquet-backed tables without 
 having to specify the column names and types. As parquet files store their 
 schema in the footer, it is possible to generate the hive schema from a 
 parquet file's metadata. This will improve the usability of parquet-backed tables.
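Assuming the column name and type pairs have already been read from the parquet footer (the actual footer parsing via parquet-mr is omitted here), generating the Hive DDL reduces to string assembly. A hedged sketch; {{createTableDdl}} is a hypothetical helper, not part of the patch:

```java
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical last step of the feature: once column name -> Hive type pairs
// have been recovered from the parquet footer, emit the CREATE TABLE DDL.
class SchemaToDdl {
    static String createTableDdl(String table, Map<String, String> columns) {
        String cols = columns.entrySet().stream()
            .map(e -> e.getKey() + " " + e.getValue())
            .collect(Collectors.joining(", "));
        return "CREATE TABLE " + table + " (" + cols + ") STORED AS PARQUET";
    }
}
```

For example, a footer describing columns (key int, value string) would yield CREATE TABLE t (key int, value string) STORED AS PARQUET.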



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-10567) partial scan for rcfile table doesn't work for dynamic partition

2015-04-30 Thread Thomas Friedrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Friedrich reassigned HIVE-10567:
---

Assignee: Thomas Friedrich  (was: Chaoyu Tang)

 partial scan for rcfile table doesn't work for dynamic partition
 

 Key: HIVE-10567
 URL: https://issues.apache.org/jira/browse/HIVE-10567
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.14.0, 1.0.0
Reporter: Thomas Friedrich
Assignee: Thomas Friedrich
Priority: Minor
  Labels: rcfile
 Attachments: HIVE-10567.1.patch


 HIVE-3958 added support for partial scan for RCFile. This works fine for 
 static partitions (for example: analyze table analyze_srcpart_partial_scan 
 PARTITION(ds='2008-04-08',hr=11) compute statistics partialscan).
 For dynamic partitions, the analyze command fails with an IOException 
 java.io.IOException: No input paths specified in job:
 hive> ANALYZE TABLE testtable PARTITION(col_varchar) COMPUTE STATISTICS 
 PARTIALSCAN;
 java.io.IOException: No input paths specified in job
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.getInputPaths(HiveInputFormat.java:318)
 at 
 org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:459)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:624)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:616)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10061) HiveConf Should not be used as part of the HS2 client side code

2015-04-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522554#comment-14522554
 ] 

Hive QA commented on HIVE-10061:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12729335/HIVE-10061.1.patch

{color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 8830 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver-bucketmapjoin6.q-constprog_partitioner.q-infer_bucket_sort_dyn_part.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-external_table_with_space_in_location_path.q-infer_bucket_sort_merge.q-auto_sortmerge_join_16.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-groupby2.q-import_exported_table.q-bucketizedhiveinputformat.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-index_bitmap3.q-stats_counter_partitioned.q-temp_table_external.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_map_operators.q-join1.q-bucketmapjoin7.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_num_buckets.q-disable_merge_for_bucketing.q-uber_reduce.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_reducers_power_two.q-scriptfile1.q-scriptfile1_win.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-leftsemijoin_mr.q-load_hdfs_file_with_space_in_the_name.q-root_dir_external_table.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-list_bucket_dml_10.q-bucket_num_reducers.q-bucket6.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-load_fs2.q-file_with_header_footer.q-ql_rewrite_gbtoidx_cbo_1.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-parallel_orderby.q-reduce_deduplicate.q-ql_rewrite_gbtoidx_cbo_2.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-ql_rewrite_gbtoidx.q-smb_mapjoin_8.q - did not produce a 
TEST-*.xml file
TestMinimrCliDriver-schemeAuthority2.q-bucket4.q-input16_cc.q-and-1-more - did 
not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udaf_histogram_numeric
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3670/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3670/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3670/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 14 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12729335 - PreCommit-HIVE-TRUNK-Build

 HiveConf Should not be used as part of the HS2 client side code
 ---

 Key: HIVE-10061
 URL: https://issues.apache.org/jira/browse/HIVE-10061
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2, JDBC
Affects Versions: 1.3.0
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Fix For: 1.3.0

 Attachments: HIVE-10061.1.patch


 HiveConf crept into the JDBC driver via the embedded mode check. 
 if (isEmbeddedMode) {
   EmbeddedThriftBinaryCLIService embeddedClient = new 
 EmbeddedThriftBinaryCLIService();
   embeddedClient.init(new HiveConf());
   client = embeddedClient;
 } else {
 
 Ideally we'd like to keep driver code free of these dependencies. 
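One hedged sketch of a direction this could take: instead of constructing the server-side configuration itself, the client could accept a factory for an already-configured service. The {{ClientService}} and {{JdbcClient}} names below are illustrative stand-ins, not Hive's actual classes:

```java
import java.util.function.Supplier;

// Illustrative sketch (not Hive's actual classes): the client depends on a
// small service interface and a factory for it, so the server-side
// configuration type (HiveConf in the real driver) never appears in client code.
interface ClientService {
    String execute(String statement);
}

class JdbcClient {
    private final ClientService service;

    JdbcClient(Supplier<ClientService> serviceFactory) {
        // The embedded-vs-remote decision lives behind the factory.
        this.service = serviceFactory.get();
    }

    String run(String sql) {
        return service.execute(sql);
    }
}
```

The embedded path would then supply its factory from the server module, keeping the driver jar free of the dependency.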



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10569) Hive CLI gets stuck when hive.exec.parallel=true; and some exception happens during SessionState.start

2015-04-30 Thread Rohit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Agarwal updated HIVE-10569:
-
Attachment: HIVE-10569.patch

 Hive CLI gets stuck when hive.exec.parallel=true; and some exception happens 
 during SessionState.start
 --

 Key: HIVE-10569
 URL: https://issues.apache.org/jira/browse/HIVE-10569
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.1, 1.1.0
Reporter: Rohit Agarwal
Priority: Critical
 Attachments: HIVE-10569.patch


 The CLI gets stuck in the loop in [DriverContext.pollFinished | 
 https://github.com/apache/hive/blob/release-1.1.0/ql/src/java/org/apache/hadoop/hive/ql/DriverContext.java#L108]
 when some {{TaskRunner}} that has completed has not been marked as 
 non-running.
 This can happen when there is an exception in [SessionState.start | 
 https://github.com/apache/hive/blob/release-1.1.0/ql/src/java/org/apache/hadoop/hive/ql/exec/TaskRunner.java#L74]
  which is called from {{TaskRunner.run}}.
 This happened with us when we were running with {{hive.exec.parallel=true}}, 
 {{hive.execution.engine=tez}} and Tez wasn't correctly setup.
 In this case the CLI printed the exception and then hung (no prompt).
 A simple fix is to call {{result.setRunning(false);}} in the {{finally}} 
 block of {{TaskRunner.run}}
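The pattern behind the proposed fix can be sketched in isolation. The {{TaskResult}} and {{TaskRunner}} classes below are simplified stand-ins for Hive's, not the real implementations: the point is that a flag consulted by a polling loop must be cleared in a finally block so a thrown exception cannot leave the task marked as running.

```java
// Simplified stand-ins for Hive's TaskResult and TaskRunner, to show the
// pattern of the fix: clear the "running" flag in a finally block.
class TaskResult {
    private volatile boolean running = true;
    public void setRunning(boolean running) { this.running = running; }
    public boolean isRunning() { return running; }
}

class TaskRunner extends Thread {
    private final TaskResult result;
    private final Runnable task;

    TaskRunner(TaskResult result, Runnable task) {
        this.result = result;
        this.task = task;
    }

    @Override
    public void run() {
        try {
            task.run(); // may throw, e.g. when session setup fails
        } finally {
            // The fix: always mark the task as finished, even on an
            // exception, so a pollFinished-style loop can terminate.
            result.setRunning(false);
        }
    }
}
```

With this in place, a loop polling {{result.isRunning()}} terminates even when the task throws.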



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10151) insert into A select from B is broken when both A and B are Acid tables and bucketed the same way

2015-04-30 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-10151:
--
Attachment: HIVE-10151.patch

[~alangates], could you review please

We also have GroupByOptimizer, which can enable use of BucketizedHiveInputFormat, 
but as far as I can tell it only applies if the table is bucketed and sorted, which 
is an invalid combination for ACID tables.

 insert into A select from B is broken when both A and B are Acid tables and 
 bucketed the same way
 -

 Key: HIVE-10151
 URL: https://issues.apache.org/jira/browse/HIVE-10151
 Project: Hive
  Issue Type: Bug
  Components: Query Planning, Transactions
Affects Versions: 1.1.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Attachments: HIVE-10151.patch


 BucketingSortingReduceSinkOptimizer makes 
 insert into AcidTable select * from otherAcidTable
 use BucketizedHiveInputFormat which bypasses ORC merge logic on read and 
 tries to send bucket files (rather than table dir) down to OrcInputFormat.
 (this is true only if both AcidTable and otherAcidTable are bucketed the same 
 way).  Then ORC dies.
 More specifically:
 {noformat}
 create table acidTbl(a int, b int) clustered by (a) into 2 buckets stored as 
 orc TBLPROPERTIES ('transactional'='true')
 create table acidTblPart(a int, b int) partitioned by (p string) clustered by 
 (a) into 2 buckets stored as orc TBLPROPERTIES ('transactional'='true')
 insert into acidTblPart partition(p=1) (a,b) values(1,2)
 insert into acidTbl(a,b) select a,b from acidTblPart where p = 1
 {noformat}
 results in 
 {noformat}
 2015-04-29 13:57:35,807 ERROR [main]: exec.Task 
 (SessionState.java:printError(956)) - Job Submission failed with exception 
 'java.lang.RuntimeException(serious problem)'
 java.lang.RuntimeException: serious problem
 at 
 org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1021)
 at 
 org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1048)
 at 
 org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat.getSplits(BucketizedHiveInputFormat.java:141)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:624)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:616)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)
 at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
 at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
 at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
 at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
 at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
 at 
 org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
 at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
 at 
 org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:430)
 at 
 org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137)
 at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
 at 
 org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
 at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1650)
 at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1409)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1192)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
 at 
 org.apache.hadoop.hive.ql.TestTxnCommands2.runStatementOnDriver(TestTxnCommands2.java:225)
 at 
 org.apache.hadoop.hive.ql.TestTxnCommands2.testDeleteIn2(TestTxnCommands2.java:148)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 at 
 

[jira] [Updated] (HIVE-10453) HS2 leaking open file descriptors when using UDFs

2015-04-30 Thread Yongzhi Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongzhi Chen updated HIVE-10453:

Attachment: HIVE-10453.2.patch

 HS2 leaking open file descriptors when using UDFs
 -

 Key: HIVE-10453
 URL: https://issues.apache.org/jira/browse/HIVE-10453
 Project: Hive
  Issue Type: Bug
Reporter: Yongzhi Chen
Assignee: Yongzhi Chen
 Attachments: HIVE-10453.1.patch, HIVE-10453.2.patch


 1. create a custom function by
 CREATE FUNCTION myfunc AS 'someudfclass' using jar 'hdfs:///tmp/myudf.jar';
 2. Create a simple jdbc client, just do 
 connect, 
 run simple query which using the function such as:
 select myfunc(col1) from sometable
 3. Disconnect.
 Check open file for HiveServer2 by:
 lsof -p HSProcID | grep myudf.jar
 You will see the leak as:
 {noformat}
 java  28718 ychen  txt  REG1,4741 212977666 
 /private/var/folders/6p/7_njf13d6h144wldzbbsfpz8gp/T/1bfe3de0-ac63-4eba-a725-6a9840f1f8d5_resources/myudf.jar
 java  28718 ychen  330r REG1,4741 212977666 
 /private/var/folders/6p/7_njf13d6h144wldzbbsfpz8gp/T/1bfe3de0-ac63-4eba-a725-6a9840f1f8d5_resources/myudf.jar
 {noformat}
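For background, a jar opened through a URLClassLoader generally keeps a file descriptor until the loader itself is closed via {{URLClassLoader.close()}} (Java 7+). The sketch below illustrates that mechanism only; it is not Hive's actual session cleanup code, and {{UdfJarCleanup}} is a hypothetical name:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.jar.Attributes;
import java.util.jar.JarOutputStream;
import java.util.jar.Manifest;

// Illustrative mechanism, not Hive's session cleanup: a jar opened through a
// URLClassLoader keeps a file descriptor until the loader is closed.
class UdfJarCleanup {
    static boolean loadAndRelease() throws Exception {
        // A throwaway jar standing in for myudf.jar.
        File jar = File.createTempFile("myudf", ".jar");
        jar.deleteOnExit();
        Manifest manifest = new Manifest();
        manifest.getMainAttributes().put(Attributes.Name.MANIFEST_VERSION, "1.0");
        new JarOutputStream(new FileOutputStream(jar), manifest).close();

        // Loading from the jar holds an open file descriptor...
        URLClassLoader loader =
            new URLClassLoader(new URL[] { jar.toURI().toURL() });
        try {
            return loader.getURLs().length == 1; // loader holds the jar open
        } finally {
            loader.close(); // Java 7+: releases the jar's file descriptor
        }
    }
}
```

Skipping the close on session teardown is the kind of leak that lsof exposes above.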



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10544) Beeline/Hive JDBC Driver fails in HTTP mode on Windows with java.lang.NoSuchFieldError: INSTANCE

2015-04-30 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522712#comment-14522712
 ] 

Hari Sankar Sivarama Subramaniyan commented on HIVE-10544:
--

[~thejas] Noticed some time back that bin/beeline.cmd in the repo follows the 
Windows convention for text files (i.e., ^M at the end of each line). So in 
order to apply the patch, you will need to do:

$ tr -d '\r' < bin/beeline.cmd > temp; rm bin/beeline.cmd; mv temp 
bin/beeline.cmd
$ patch -p1 < HIVE-10544.2.patch

Thanks
Hari

 Beeline/Hive JDBC Driver fails in HTTP mode on Windows with 
 java.lang.NoSuchFieldError: INSTANCE
 

 Key: HIVE-10544
 URL: https://issues.apache.org/jira/browse/HIVE-10544
 Project: Hive
  Issue Type: Bug
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Attachments: HIVE-10544.1.patch, HIVE-10544.2.patch


 NO PRECOMMIT TESTS
 This appears to be caused by a dependency version mismatch with httpcore on 
 Beeline's classpath.
 We need to change beeline.cmd as well I guess to include the equivalent of 
 export HADOOP_USER_CLASSPATH_FIRST=true.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-5672) Insert with custom separator not supported for non-local directory

2015-04-30 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522584#comment-14522584
 ] 

Lefty Leverenz commented on HIVE-5672:
--

Doc note:  Please review the updated note about custom separators in the DML 
doc.  (If it's okay, remove the TODOC1.2 label.)

* [DML -- Writing data into the filesystem from queries -- Notes | 
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-Notes.2]

 Insert with custom separator not supported for non-local directory
 --

 Key: HIVE-5672
 URL: https://issues.apache.org/jira/browse/HIVE-5672
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0, 1.0.0
Reporter: Romain Rigaux
Assignee: Nemon Lou
  Labels: TODOC1.2
 Fix For: 1.2.0

 Attachments: HIVE-5672.1.patch, HIVE-5672.2.patch, HIVE-5672.3.patch, 
 HIVE-5672.4.patch, HIVE-5672.5.patch, HIVE-5672.5.patch.tar.gz, 
 HIVE-5672.6.patch, HIVE-5672.6.patch.tar.gz, HIVE-5672.7.patch, 
 HIVE-5672.7.patch.tar.gz, HIVE-5672.8.patch, HIVE-5672.8.patch.tar.gz


 https://issues.apache.org/jira/browse/HIVE-3682 is great but non-local 
 directories don't seem to be supported:
 {code}
 insert overwrite directory '/tmp/test-02'
 row format delimited
 FIELDS TERMINATED BY ':'
 select description FROM sample_07
 {code}
 {code}
 Error while compiling statement: FAILED: ParseException line 2:0 cannot 
 recognize input near 'row' 'format' 'delimited' in select clause
 {code}
 This works (with 'local'):
 {code}
 insert overwrite local directory '/tmp/test-02'
 row format delimited
 FIELDS TERMINATED BY ':'
 select code, description FROM sample_07
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10455) CBO (Calcite Return Path): Different data types at Reducer before JoinOp

2015-04-30 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522591#comment-14522591
 ] 

Pengcheng Xiong commented on HIVE-10455:


We need HIVE-10479 so that all the cbo tests will pass. [~jpullokkaran]

 CBO (Calcite Return Path): Different data types at Reducer before JoinOp
 

 Key: HIVE-10455
 URL: https://issues.apache.org/jira/browse/HIVE-10455
 Project: Hive
  Issue Type: Sub-task
  Components: CBO
Reporter: Pengcheng Xiong
Assignee: Pengcheng Xiong
 Fix For: 1.2.0

 Attachments: HIVE-10455.01.patch, HIVE-10455.02.patch, 
 HIVE-10455.03.patch


 The following error occured for cbo_subq_not_in.q 
 {code}
 java.lang.Exception: java.lang.RuntimeException: 
 org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error: Unable 
 to deserialize reduce input key from \x1\x128\x0\x0\x1 with properties 
 {columns=reducesinkkey0, 
 serialization.lib=org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe,
  serialization.sort.order=+, columns.types=double}
 at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
 at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
 {code}
 An easier way to reproduce this is 
 {code}
 set hive.cbo.enable=true;
 set hive.exec.check.crossproducts=false;
 set hive.stats.fetch.column.stats=true;
 set hive.auto.convert.join=false;
 select p_size, src.key
 from 
 part join src
 on p_size=key;
 {code}
 As you can see, p_size is an integer while src.key is a string. Both of them 
 should be cast to double when they join. When the return path is off, this 
 happens before the Join, at the RS. However, when the return path is on, the 
 cast is considered an expression in the Join. Thus, when the reducer collects 
 keys of different types from the different join branches, it throws an exception.
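The coercion described above can be illustrated with a self-contained sketch; {{joinKeyMatches}} is a hypothetical helper, not a Hive method. An int key and a string key only become comparable once both are promoted to a common type, double:

```java
// Illustrative sketch of the coercion: an int join key and a string join key
// are both promoted to double before being compared, mirroring what happens
// before the Join when the return path is off. Not a Hive API.
class KeyCoercion {
    static boolean joinKeyMatches(int pSize, String srcKey) {
        double left = pSize;                        // int -> double
        double right = Double.parseDouble(srcKey);  // string -> double
        return left == right;
    }
}
```

The bug is that with the return path on, the two reducer branches deliver the keys before this promotion has happened, so their serialized types disagree.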



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-5672) Insert with custom separator not supported for non-local directory

2015-04-30 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-5672:
---
Labels:   (was: TODOC1.2)

 Insert with custom separator not supported for non-local directory
 --

 Key: HIVE-5672
 URL: https://issues.apache.org/jira/browse/HIVE-5672
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0, 1.0.0
Reporter: Romain Rigaux
Assignee: Nemon Lou
 Fix For: 1.2.0

 Attachments: HIVE-5672.1.patch, HIVE-5672.2.patch, HIVE-5672.3.patch, 
 HIVE-5672.4.patch, HIVE-5672.5.patch, HIVE-5672.5.patch.tar.gz, 
 HIVE-5672.6.patch, HIVE-5672.6.patch.tar.gz, HIVE-5672.7.patch, 
 HIVE-5672.7.patch.tar.gz, HIVE-5672.8.patch, HIVE-5672.8.patch.tar.gz


 https://issues.apache.org/jira/browse/HIVE-3682 is great but non-local 
 directories don't seem to be supported:
 {code}
 insert overwrite directory '/tmp/test-02'
 row format delimited
 FIELDS TERMINATED BY ':'
 select description FROM sample_07
 {code}
 {code}
 Error while compiling statement: FAILED: ParseException line 2:0 cannot 
 recognize input near 'row' 'format' 'delimited' in select clause
 {code}
 This works (with 'local'):
 {code}
 insert overwrite local directory '/tmp/test-02'
 row format delimited
 FIELDS TERMINATED BY ':'
 select code, description FROM sample_07
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-5672) Insert with custom separator not supported for non-local directory

2015-04-30 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522595#comment-14522595
 ] 

Sushanth Sowmyan commented on HIVE-5672:


Looks good, Lefty. Thanks!

Removing the label.

 Insert with custom separator not supported for non-local directory
 --

 Key: HIVE-5672
 URL: https://issues.apache.org/jira/browse/HIVE-5672
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0, 1.0.0
Reporter: Romain Rigaux
Assignee: Nemon Lou
 Fix For: 1.2.0

 Attachments: HIVE-5672.1.patch, HIVE-5672.2.patch, HIVE-5672.3.patch, 
 HIVE-5672.4.patch, HIVE-5672.5.patch, HIVE-5672.5.patch.tar.gz, 
 HIVE-5672.6.patch, HIVE-5672.6.patch.tar.gz, HIVE-5672.7.patch, 
 HIVE-5672.7.patch.tar.gz, HIVE-5672.8.patch, HIVE-5672.8.patch.tar.gz


 https://issues.apache.org/jira/browse/HIVE-3682 is great but non-local 
 directories don't seem to be supported:
 {code}
 insert overwrite directory '/tmp/test-02'
 row format delimited
 FIELDS TERMINATED BY ':'
 select description FROM sample_07
 {code}
 {code}
 Error while compiling statement: FAILED: ParseException line 2:0 cannot 
 recognize input near 'row' 'format' 'delimited' in select clause
 {code}
 This works (with 'local'):
 {code}
 insert overwrite local directory '/tmp/test-02'
 row format delimited
 FIELDS TERMINATED BY ':'
 select code, description FROM sample_07
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10567) partial scan for rcfile table doesn't work for dynamic partition

2015-04-30 Thread Thomas Friedrich (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522619#comment-14522619
 ] 

Thomas Friedrich commented on HIVE-10567:
-

Chaoyu, I attached a proposed patch. The problem is in method 
getInputPathsForPartialScan in class GenMapRedUtils.java. 
I added the case for DYNAMIC_PARTITION, but wasn't sure about the 
aggregationKey. In the current patch, the aggregationKey is just the table 
name and the PartialScanMapper will join this with the task id which is 
different for each partition (one task per partition):
org.apache.hadoop.hive.ql.stats.fs.FSStatsPublisher: Writing stats in it : 
{default.testtable/00/={numRows=2, rawDataSize=16}}
org.apache.hadoop.hive.ql.stats.fs.FSStatsPublisher: Writing stats in it : 
{default.testtable/01/={numRows=1, rawDataSize=8}}
The output seems ok to me. 
Do you know whether the aggregationKey should be set to a different value, like 
in the STATIC_PARTITION case?

I would like to add a unit test for this case as well, that's why I didn't 
submit the patch yet.
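Under the scheme described above, aggregating the table's stats means summing every published entry whose key starts with the table-level aggregation key, whichever task wrote it. A hedged sketch with a hypothetical helper, not the actual FSStatsPublisher/PartialScanMapper code:

```java
import java.util.Map;

// Hypothetical aggregation step: stats are published under
// <aggregationKey>/<taskId>/, so table-level totals are the sum over every
// entry whose key starts with the table's aggregation key.
class StatsAggregation {
    static long aggregateNumRows(Map<String, Long> published, String aggKey) {
        return published.entrySet().stream()
            .filter(e -> e.getKey().startsWith(aggKey + "/"))
            .mapToLong(Map.Entry::getValue)
            .sum();
    }
}
```

With the two published entries shown earlier (numRows=2 and numRows=1 under default.testtable), this would total 3 rows for the table.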

 partial scan for rcfile table doesn't work for dynamic partition
 

 Key: HIVE-10567
 URL: https://issues.apache.org/jira/browse/HIVE-10567
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.14.0, 1.0.0
Reporter: Thomas Friedrich
Assignee: Chaoyu Tang
Priority: Minor
  Labels: rcfile
 Attachments: HIVE-10567.1.patch


 HIVE-3958 added support for partial scan for RCFile. This works fine for 
 static partitions (for example: analyze table analyze_srcpart_partial_scan 
 PARTITION(ds='2008-04-08',hr=11) compute statistics partialscan).
 For dynamic partitions, the analyze command fails with an IOException 
 java.io.IOException: No input paths specified in job:
 hive> ANALYZE TABLE testtable PARTITION(col_varchar) COMPUTE STATISTICS 
 PARTIALSCAN;
 java.io.IOException: No input paths specified in job
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.getInputPaths(HiveInputFormat.java:318)
 at 
 org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:459)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:624)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:616)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10567) partial scan for rcfile table doesn't work for dynamic partition

2015-04-30 Thread Chaoyu Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522625#comment-14522625
 ] 

Chaoyu Tang commented on HIVE-10567:


Hi [~tfriedr], I have not started to look into this JIRA yet. Since you are 
working on this, please feel free to reassign it to yourself. Thanks

 partial scan for rcfile table doesn't work for dynamic partition
 

 Key: HIVE-10567
 URL: https://issues.apache.org/jira/browse/HIVE-10567
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.14.0, 1.0.0
Reporter: Thomas Friedrich
Assignee: Chaoyu Tang
Priority: Minor
  Labels: rcfile
 Attachments: HIVE-10567.1.patch


 HIVE-3958 added support for partial scan for RCFile. This works fine for 
 static partitions (for example: analyze table analyze_srcpart_partial_scan 
 PARTITION(ds='2008-04-08',hr=11) compute statistics partialscan).
 For dynamic partitions, the analyze command fails with an IOException 
 java.io.IOException: No input paths specified in job:
 hive> ANALYZE TABLE testtable PARTITION(col_varchar) COMPUTE STATISTICS 
 PARTIALSCAN;
 java.io.IOException: No input paths specified in job
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.getInputPaths(HiveInputFormat.java:318)
 at 
 org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:459)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:624)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:616)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10307) Support to use number literals in partition column

2015-04-30 Thread Chaoyu Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522691#comment-14522691
 ] 

Chaoyu Tang commented on HIVE-10307:


[~leftylev] I updated the wiki for the hive.typecheck.on.insert property and the 
use of number literals as partition values in the following related sections 
(sorry, I have not figured out how to link these sections to their wiki pages in 
JIRA as you did):
1. Configuration Properties – Query and DDL Execution
2. DDL – Alter Partition
3. DDL – Describe Partition
4. DML – Inserting data into Hive Tables from queries
Please review them and feel free to revise if necessary. Thanks

 Support to use number literals in partition column
 --

 Key: HIVE-10307
 URL: https://issues.apache.org/jira/browse/HIVE-10307
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 1.0.0
Reporter: Chaoyu Tang
Assignee: Chaoyu Tang
  Labels: TODOC1.2
 Fix For: 1.2.0

 Attachments: HIVE-10307.1.patch, HIVE-10307.2.patch, 
 HIVE-10307.3.patch, HIVE-10307.4.patch, HIVE-10307.5.patch, 
 HIVE-10307.6.patch, HIVE-10307.patch


 Data types like TinyInt, SmallInt, BigInt or Decimal can be expressed as 
 literals with postfix like Y, S, L, or BD appended to the number. These 
 literals work in most Hive queries, but do not when they are used as 
 partition column value. For a partitioned table like:
 create table partcoltypenum (key int, value string) partitioned by (tint 
 tinyint, sint smallint, bint bigint);
 insert into partcoltypenum partition (tint=100Y, sint=1S, 
 bint=1000L) select key, value from src limit 30;
 Queries like select, describe and drop partition do not work. For an example
 select * from partcoltypenum where tint=100Y and sint=1S and 
 bint=1000L;
 does not return any rows.
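 For reference, these literal forms are just a number with a type suffix 
 appended. A toy sketch of stripping the suffix when normalizing a partition 
 value (illustrative only, not Hive's actual parser, and it assumes a 
 well-formed literal):
 {code}
 // Toy normalizer for Hive numeric-literal suffixes: 100Y, 1S, 1000L and
 // 3.14BD all reduce to their bare numbers. Not Hive's actual parser.
 class NumberLiteral {
     static String stripSuffix(String literal) {
         if (literal.endsWith("BD")) {                    // decimal: BD
             return literal.substring(0, literal.length() - 2);
         }
         char last = literal.charAt(literal.length() - 1);
         if (last == 'Y' || last == 'S' || last == 'L') { // tinyint/smallint/bigint
             return literal.substring(0, literal.length() - 1);
         }
         return literal;                                  // plain int
     }
 }
 {code}
 With such a normalization, tint=100Y and tint=100 would resolve to the same 
 partition value.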



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-5297) Hive does not honor type for partition columns

2015-04-30 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522699#comment-14522699
 ] 

Lefty Leverenz commented on HIVE-5297:
--

Doc note:  This added *hive.typecheck.on.insert* to HiveConf.java, so it needs 
to be documented in the wiki.  (HIVE-10307 extends the parameter in release 
1.2.0.)

* [Configuration Properties -- Query and DDL Execution | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-QueryandDDLExecution]

 Hive does not honor type for partition columns
 --

 Key: HIVE-5297
 URL: https://issues.apache.org/jira/browse/HIVE-5297
 Project: Hive
  Issue Type: Bug
  Components: CLI
Affects Versions: 0.11.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
  Labels: TODOC12
 Fix For: 0.12.0

 Attachments: HIVE-5297.1.patch, HIVE-5297.2.patch, HIVE-5297.3.patch, 
 HIVE-5297.4.patch, HIVE-5297.5.patch, HIVE-5297.6.patch, HIVE-5297.7.patch, 
 HIVE-5297.8.patch


 Hive does not consider the type of the partition column while writing 
 partitions. Consider for example the query:
 {noformat}
 create table tab1 (id1 int, id2 string) PARTITIONED BY(month string,day int) 
 row format delimited fields terminated by ',';
 alter table tab1 add partition (month='June', day='second');
 {noformat}
 Hive accepts this query. However, if you try to select from this table and 
 insert into another table expecting the schemas to match, it will insert nulls 
 instead. We should throw an exception on such user error at the time the 
 partition addition/load happens.
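 A minimal sketch of the requested check, validating the literal against the 
 declared partition column type before the partition is accepted. The class and 
 method names are hypothetical, and only the two types from the example above 
 are handled:
 {code}
 // Hypothetical validator for the requested behavior: reject a partition
 // literal that cannot be interpreted as the declared column type.
 class PartitionTypeCheck {
     static boolean isValid(String declaredType, String literal) {
         switch (declaredType) {
             case "int":
                 try {
                     Integer.parseInt(literal);
                     return true;
                 } catch (NumberFormatException e) {
                     return false; // e.g. day='second' should be rejected
                 }
             case "string":
                 return true; // any literal is a valid string
             default:
                 throw new IllegalArgumentException("unhandled type: " + declaredType);
         }
     }
 }
 {code}
 Under this check, the alter table above would fail fast on day='second' 
 instead of silently producing nulls later.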



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10514) Fix MiniCliDriver tests failure

2015-04-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14522711#comment-14522711
 ] 

Hive QA commented on HIVE-10514:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12729026/HIVE-10514.3.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 8881 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropTable
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3673/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3673/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3673/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12729026 - PreCommit-HIVE-TRUNK-Build

 Fix MiniCliDriver tests failure
 ---

 Key: HIVE-10514
 URL: https://issues.apache.org/jira/browse/HIVE-10514
 Project: Hive
  Issue Type: Bug
  Components: Testing Infrastructure
Reporter: Szehon Ho
Assignee: Hari Sankar Sivarama Subramaniyan
 Attachments: HIVE-10514.1.patch, HIVE-10514.2.patch, 
 HIVE-10514.3.patch


 The MinimrCliDriver tests always fail to run.
 This can be reproduced by running the following command:
 {noformat}
 mvn -B test -Phadoop-2 -Dtest=TestMinimrCliDriver 
 -Dminimr.query.files=infer_bucket_sort_map_operators.q,join1.q,bucketmapjoin7.q,udf_using.q
 {noformat}
 The following exception occurs:
 {noformat}
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
 (default-testCompile) on project hive-it-qfile: Compilation failure
 [ERROR] 
 /Users/szehon/repos/apache-hive-git/hive/itests/qtest/target/generated-test-sources/java/org/apache/hadoop/hive/cli/TestCliDriver.java:[100,22]
  code too large
 {noformat}
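A possible workaround sketch (an assumption on my part, not taken from any attached patch): split the long {{minimr.query.files}} list into smaller chunks so that each generated test class stays under the JVM's 64 KB per-method bytecode limit that produces the "code too large" compile error. The chunk size of 2 below is arbitrary.

```python
# Hypothetical workaround sketch (not from the patch): break the
# comma-separated query-file list into smaller chunks and run one
# Maven invocation per chunk, keeping each generated test method small.
qfiles = "infer_bucket_sort_map_operators.q,join1.q,bucketmapjoin7.q,udf_using.q"

def chunk_query_files(csv_list, chunk_size=2):
    """Split a comma-separated list of .q files into chunks of chunk_size."""
    names = csv_list.split(",")
    return [",".join(names[i:i + chunk_size])
            for i in range(0, len(names), chunk_size)]

for chunk in chunk_query_files(qfiles):
    # Each chunk would be run as a separate, smaller test invocation.
    print("mvn -B test -Phadoop-2 -Dtest=TestMinimrCliDriver "
          "-Dminimr.query.files=" + chunk)
```

Each printed command mirrors the reproduction command above, only with a shorter query list.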



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10403) Add n-way join support for Hybrid Grace Hash Join

2015-04-30 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-10403:
-
Attachment: HIVE-10403.08.patch

Replaced patch 08 with an updated try/catch for KeyValueContainer and 
ObjectContainer.

 Add n-way join support for Hybrid Grace Hash Join
 -

 Key: HIVE-10403
 URL: https://issues.apache.org/jira/browse/HIVE-10403
 Project: Hive
  Issue Type: Improvement
Affects Versions: 1.2.0
Reporter: Wei Zheng
Assignee: Wei Zheng
 Attachments: HIVE-10403.01.patch, HIVE-10403.02.patch, 
 HIVE-10403.03.patch, HIVE-10403.04.patch, HIVE-10403.06.patch, 
 HIVE-10403.07.patch, HIVE-10403.08.patch


 Currently Hybrid Grace Hash Join only supports 2-way join (one big table and 
 one small table). This task will enable n-way join (one big table and 
 multiple small tables).
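An illustrative sketch of the n-way idea (this is not Hive's implementation, and ignores the grace/spilling aspect): build one in-memory hash table per small table, then stream the big table once, probing every hash table for each row.

```python
# Illustrative n-way hash join sketch: one build-side hash table per
# small relation, a single probe pass over the big relation.
from itertools import product

def n_way_hash_join(big_rows, small_tables, key):
    # Build phase: one hash table per small relation.
    hash_tables = []
    for small in small_tables:
        table = {}
        for row in small:
            table.setdefault(key(row), []).append(row)
        hash_tables.append(table)
    # Probe phase: a single pass over the big table.
    for row in big_rows:
        matches = [t.get(key(row), []) for t in hash_tables]
        if all(matches):  # the row must match in every small table
            for combo in product(*matches):
                yield (row,) + combo
```

The point of supporting n small tables in one operator is exactly this shape: the big table is scanned once regardless of how many small tables participate.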





[jira] [Updated] (HIVE-10403) Add n-way join support for Hybrid Grace Hash Join

2015-04-30 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-10403:
-
Attachment: (was: HIVE-10403.08.patch)

 Add n-way join support for Hybrid Grace Hash Join
 -

 Key: HIVE-10403
 URL: https://issues.apache.org/jira/browse/HIVE-10403
 Project: Hive
  Issue Type: Improvement
Affects Versions: 1.2.0
Reporter: Wei Zheng
Assignee: Wei Zheng
 Attachments: HIVE-10403.01.patch, HIVE-10403.02.patch, 
 HIVE-10403.03.patch, HIVE-10403.04.patch, HIVE-10403.06.patch, 
 HIVE-10403.07.patch


 Currently Hybrid Grace Hash Join only supports 2-way join (one big table and 
 one small table). This task will enable n-way join (one big table and 
 multiple small tables).





[jira] [Updated] (HIVE-10541) Beeline requires newline at the end of each query in a file

2015-04-30 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-10541:
---
Attachment: HIVE-10541.1.patch

Added a unit test, testLastLineCmdInScriptFile, in TestBeeLineWithArgs. Thanks 
[~thejas] for the review.

 Beeline requires newline at the end of each query in a file
 ---

 Key: HIVE-10541
 URL: https://issues.apache.org/jira/browse/HIVE-10541
 Project: Hive
  Issue Type: Bug
  Components: Beeline
Affects Versions: 0.13.1
Reporter: Chaoyu Tang
Assignee: Chaoyu Tang
Priority: Minor
 Attachments: HIVE-10541.1.patch, HIVE-10541.patch


 Beeline requires a newline at the end of each query in a script file; a final query without a trailing newline is silently skipped.
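A minimal sketch of the failure mode (assumed, not Beeline's actual reader): a newline-driven command reader only dispatches a query once it sees the line terminator, so a final query with no trailing newline is never executed.

```python
# Illustrative reproduction of the bug class: a naive reader that only
# dispatches a command when the line ends with a newline drops the
# final query if the file has no trailing newline.
import io

def read_commands(script_text):
    commands = []
    buf = ""
    for line in io.StringIO(script_text):
        if not line.endswith("\n"):
            break  # last line lacks a newline: dropped, like the bug
        buf += line
        if buf.strip().endswith(";"):
            commands.append(buf.strip())
            buf = ""
    return commands

# "select 2;" has no trailing newline and is lost.
print(read_commands("select 1;\nselect 2;"))  # → ['select 1;']
```

The fix is to treat end-of-file as an implicit terminator for whatever is left in the buffer.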




