[jira] [Commented] (HIVE-13726) Improve dynamic partition loading VI

2016-05-11 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281228#comment-15281228
 ] 

Rui Li commented on HIVE-13726:
---

Thanks [~ashutoshc], the patch looks good to me. I only left some minor comments 
on RB.
Meanwhile, do you have any numbers on how much the patch improves performance?

> Improve dynamic partition loading VI
> 
>
> Key: HIVE-13726
> URL: https://issues.apache.org/jira/browse/HIVE-13726
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 1.2.0, 2.0.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-13726.patch
>
>
> Parallelize deletes and other refactoring.
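
For readers of this thread, a minimal sketch of the "parallelize deletes" idea from 
the description above. This is only an illustration under assumed names (the class, 
the thread count, the list of directories are all hypothetical), not the actual 
HIVE-13726 patch:

{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical sketch, not the HIVE-13726 patch: delete old partition
// directories concurrently instead of one at a time.
public final class ParallelDelete {
  public static void deleteAll(final FileSystem fs, List<Path> dirs, int maxThreads)
      throws InterruptedException, ExecutionException {
    ExecutorService pool =
        Executors.newFixedThreadPool(Math.max(1, Math.min(maxThreads, dirs.size())));
    try {
      List<Future<Void>> futures = new ArrayList<>();
      for (final Path dir : dirs) {
        futures.add(pool.submit(new Callable<Void>() {
          @Override
          public Void call() throws IOException {
            fs.delete(dir, true);   // recursive delete of the stale partition data
            return null;
          }
        }));
      }
      for (Future<Void> f : futures) {
        f.get();                    // surface the first failed delete, if any
      }
    } finally {
      pool.shutdown();
    }
  }
}
{code}

Waiting on every future keeps the failure semantics of the sequential code: one 
failed delete still fails the load.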



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13363) Add hive.metastore.token.signature property to HiveConf

2016-05-11 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281218#comment-15281218
 ] 

Lefty Leverenz commented on HIVE-13363:
---

Doc note:  *hive.metastore.token.signature* will need to be documented in the 
wiki for release 2.1.0.

* [Configuration Properties -- MetaStore | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-MetaStore]

Added a TODOC2.1 label.

> Add hive.metastore.token.signature property to HiveConf
> ---
>
> Key: HIVE-13363
> URL: https://issues.apache.org/jira/browse/HIVE-13363
> Project: Hive
>  Issue Type: Improvement
>Reporter: Anthony Hsu
>Assignee: Anthony Hsu
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-13363.1.patch, HIVE-13363.2.patch
>
>
> I noticed that the {{hive.metastore.token.signature}} property is not defined 
> in HiveConf.java, but hardcoded everywhere it's used in the Hive codebase.
> [HIVE-2963] fixes this but was never committed due to being resolved as a 
> duplicate ticket.
> We should add {{hive.metastore.token.signature}} to HiveConf.java to 
> centralize its definition and make the property more discoverable (it's 
> useful to set it when talking to multiple metastores).
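
For illustration, a hedged sketch of what a centralized definition in HiveConf.java 
could look like. The constant name, default value, and description text below are 
assumptions, not the committed patch:

{code}
// Hypothetical ConfVars entry; the actual name, default, and description may differ.
METASTORE_TOKEN_SIGNATURE("hive.metastore.token.signature", "",
    "Delegation token service name to match when selecting a token from the current user's tokens."),
{code}

Callers could then read the value with 
{{conf.getVar(HiveConf.ConfVars.METASTORE_TOKEN_SIGNATURE)}} instead of repeating 
the string literal at each call site.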



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13363) Add hive.metastore.token.signature property to HiveConf

2016-05-11 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-13363:
--
Labels: TODOC2.1  (was: )

> Add hive.metastore.token.signature property to HiveConf
> ---
>
> Key: HIVE-13363
> URL: https://issues.apache.org/jira/browse/HIVE-13363
> Project: Hive
>  Issue Type: Improvement
>Reporter: Anthony Hsu
>Assignee: Anthony Hsu
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-13363.1.patch, HIVE-13363.2.patch
>
>
> I noticed that the {{hive.metastore.token.signature}} property is not defined 
> in HiveConf.java, but hardcoded everywhere it's used in the Hive codebase.
> [HIVE-2963] fixes this but was never committed due to being resolved as a 
> duplicate ticket.
> We should add {{hive.metastore.token.signature}} to HiveConf.java to 
> centralize its definition and make the property more discoverable (it's 
> useful to set it when talking to multiple metastores).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13458) Heartbeater doesn't fail query when heartbeat fails

2016-05-11 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281215#comment-15281215
 ] 

Lefty Leverenz commented on HIVE-13458:
---

No-doc note:  This adds *hive.test.fail.heartbeater* to HiveConf.java.  It 
doesn't need to be documented in the wiki because it's just for testing.

By the way, Configuration Properties does have a section for test properties 
but only lists a few of them, saying "For other test properties, search for 
'hive.test.' in hive-default.xml.template or HiveConf.java."

* [Configuration Properties -- Test Properties | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-TestProperties]

> Heartbeater doesn't fail query when heartbeat fails
> ---
>
> Key: HIVE-13458
> URL: https://issues.apache.org/jira/browse/HIVE-13458
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.1.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
> Fix For: 1.3.0, 2.1.0
>
> Attachments: HIVE-13458.1.patch, HIVE-13458.2.patch, 
> HIVE-13458.3.patch, HIVE-13458.4.patch, HIVE-13458.5.patch, 
> HIVE-13458.6.patch, HIVE-13458.7.patch, HIVE-13458.8.patch, 
> HIVE-13458.9.patch, HIVE-13458.branch-1.patch
>
>
> When a heartbeat fails to locate a lock, it should fail the current query. 
> That doesn't happen, which is a bug.
> Another thing is, we need to make sure stopHeartbeat really stops the 
> heartbeat, i.e. no additional heartbeat will be sent, since that will break 
> the assumption and cause the query to fail.
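
As a rough illustration of the desired behavior (not the committed patch; the class 
shape and method names below are assumptions), the heartbeater can remember its 
first failure and the query thread can rethrow it:

{code}
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch: a heartbeat failure is recorded and later surfaced to the
// query instead of only being logged; stopHeartbeat() really stops further beats.
class HeartbeaterSketch implements Runnable {
  private final AtomicReference<Exception> failure = new AtomicReference<>();
  private volatile boolean stopped = false;

  @Override
  public void run() {
    if (stopped) {
      return;                      // no heartbeat after stopHeartbeat()
    }
    try {
      sendHeartbeat();             // placeholder for the lock/txn renewal call
    } catch (Exception e) {
      failure.compareAndSet(null, e);
    }
  }

  void stopHeartbeat() {
    stopped = true;
  }

  // Called from the query thread; rethrows the first heartbeat failure.
  void checkFailure() throws Exception {
    Exception e = failure.get();
    if (e != null) {
      throw e;
    }
  }

  private void sendHeartbeat() throws Exception {
    // lock manager heartbeat would go here
  }
}
{code}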



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13308) HiveOnSpark sumbit query very slow when hundred of beeline exectue at same time

2016-05-11 Thread wangwenli (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281186#comment-15281186
 ] 

wangwenli commented on HIVE-13308:
--

[~lirui] Generating the secret has no concurrency issue, nothing is shared, and 
registering the RemoteDriver also seems thread-safe, though I'm not 100% sure. 
Maybe [~vanzin] can give some input? Thank you all very much.

> HiveOnSpark sumbit query very slow when hundred of beeline exectue at same 
> time
> ---
>
> Key: HIVE-13308
> URL: https://issues.apache.org/jira/browse/HIVE-13308
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 1.2.1
>Reporter: wangwenli
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> background: hive on spark, yarn cluster mode
> details:
> when hundreds of beeline clients submit queries at the same time, we found that 
> yarn gets the application very slowly, and hiveserver is blocked in the 
> SparkClientFactory.createClient method
> after analysis, we think the synchronization on SparkClientFactory.createClient 
> can be removed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13608) We should provide better error message while constraints with duplicate names are created

2016-05-11 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-13608:
-
Attachment: HIVE-13608.3.patch

After trying out some test cases, it seems the regex in patch #2 was wrong. I 
tried to rewrite the regex with negative lookahead/lookbehind. However, given the 
number of variable characters between JDOException and 
java.sql.SQLIntegrityConstraintViolationException, it didn't look trivial (or 
rather, it did not seem possible to incorporate the negative condition on the 
java.sql.SQLIntegrityConstraintViolationException string into the existing regex). 
I have uploaded patch #3 with a basic string-match condition, which should serve 
the purpose for now.
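
For illustration only, a hedged sketch of that basic string-match idea (not the 
actual patch #3 code; the class name, method name, and traversal below are 
assumptions):

{code}
// Hypothetical sketch: walk the cause chain and look for the constraint-violation
// marker instead of matching it with a negative-lookahead regex.
final class ConstraintViolationCheckSketch {
  static boolean isConstraintViolation(Throwable t) {
    for (Throwable cause = t; cause != null; cause = cause.getCause()) {
      String msg = String.valueOf(cause.getMessage());
      if (cause instanceof java.sql.SQLIntegrityConstraintViolationException
          || msg.contains("SQLIntegrityConstraintViolationException")) {
        return true;
      }
    }
    return false;
  }
}
{code}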

> We should provide better error message while constraints with duplicate names 
> are created
> -
>
> Key: HIVE-13608
> URL: https://issues.apache.org/jira/browse/HIVE-13608
> Project: Hive
>  Issue Type: Bug
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-13608.1.patch, HIVE-13608.2.patch, 
> HIVE-13608.3.patch
>
>
> {code}
> PREHOOK: query: create table t1(x int, constraint pk1 primary key (x) disable 
> novalidate)
> PREHOOK: type: CREATETABLE
> PREHOOK: Output: database:default
> PREHOOK: Output: default@t1
> POSTHOOK: query: create table t1(x int, constraint pk1 primary key (x) 
> disable novalidate)
> POSTHOOK: type: CREATETABLE
> POSTHOOK: Output: database:default
> POSTHOOK: Output: default@t1
> PREHOOK: query: create table t2(x int, constraint pk1 primary key (x) disable 
> novalidate)
> PREHOOK: type: CREATETABLE
> PREHOOK: Output: database:default
> PREHOOK: Output: default@t2
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:For direct 
> MetaStore DB connections, we don't support retries at the client level.)
> {code}
> In the above case, it seems like useful error message is lost. It looks like 
> a  generic problem with metastore server/client exception handling and 
> message propagation. Seems like exception parsing logic of 
> RetryingMetaStoreClient::invoke() needs to be updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13308) HiveOnSpark sumbit query very slow when hundred of beeline exectue at same time

2016-05-11 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281147#comment-15281147
 ] 

Rui Li commented on HIVE-13308:
---

Thanks very much [~vanzin] for your inputs. Like [~wenli] said, 
SparkClientFactory is stopped only when HiveServer2 shuts down, so we don't 
have to worry about {{server}} being null while we create the client.
What I'm not sure about is that {{server}} is used to generate the secret and 
register the RemoteDriver while creating the SparkClientImpl. Without 
synchronization, we need {{server}} to be thread-safe.
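
To make the question concrete, a hedged sketch of the publication pattern being 
discussed. The stand-in types below are hypothetical and deliberately simplified; 
the real SparkClientFactory and RpcServer code differs:

{code}
// Hypothetical sketch of write-once publication plus a thread-safe server object.
final class RpcServerStub {
  // must itself be thread-safe if many createClient() callers use it concurrently
  synchronized String generateSecret() {
    return java.util.UUID.randomUUID().toString();
  }
}

final class ClientFactorySketch {
  // written once at HiveServer2 startup, read by many threads afterwards;
  // volatile makes that single write visible to every reader
  private static volatile RpcServerStub server;

  static synchronized void initialize() {
    if (server == null) {
      server = new RpcServerStub();
    }
  }

  // no factory-wide lock here: callers only read "server" and rely on the
  // server object being thread-safe for secret generation / driver registration
  static String createClient() {
    RpcServerStub s = server;
    if (s == null) {
      throw new IllegalStateException("factory not initialized");
    }
    return s.generateSecret();
  }
}
{code}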

> HiveOnSpark sumbit query very slow when hundred of beeline exectue at same 
> time
> ---
>
> Key: HIVE-13308
> URL: https://issues.apache.org/jira/browse/HIVE-13308
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 1.2.1
>Reporter: wangwenli
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> background: hive on spark, yarn cluster mode
> details:
> when hundreds of beeline clients submit queries at the same time, we found that 
> yarn gets the application very slowly, and hiveserver is blocked in the 
> SparkClientFactory.createClient method
> after analysis, we think the synchronization on SparkClientFactory.createClient 
> can be removed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13308) HiveOnSpark sumbit query very slow when hundred of beeline exectue at same time

2016-05-11 Thread wangwenli (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281141#comment-15281141
 ] 

wangwenli commented on HIVE-13308:
--

[~vanzin] Considering the server is initialized once and stopped only when 
hiveserver shuts down, there seem to be no concurrent scenarios. Do we still 
need to mark the server variable volatile?

> HiveOnSpark sumbit query very slow when hundred of beeline exectue at same 
> time
> ---
>
> Key: HIVE-13308
> URL: https://issues.apache.org/jira/browse/HIVE-13308
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 1.2.1
>Reporter: wangwenli
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> background: hive on spark, yarn cluster mode
> details:
> when hundreds of beeline clients submit queries at the same time, we found that 
> yarn gets the application very slowly, and hiveserver is blocked in the 
> SparkClientFactory.createClient method
> after analysis, we think the synchronization on SparkClientFactory.createClient 
> can be removed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13696) Monitor fair-scheduler.xml and automatically update/validate jobs submitted to fair-scheduler

2016-05-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281138#comment-15281138
 ] 

Hive QA commented on HIVE-13696:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12803128/HIVE-13696.02.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 102 failed/errored test(s), 9927 tests 
executed
*Failed tests:*
{noformat}
TestHWISessionManager - did not produce a TEST-*.xml file
TestMiniLlapCliDriver - did not produce a TEST-*.xml file
TestMiniTezCliDriver-auto_sortmerge_join_7.q-tez_union_group_by.q-orc_merge9.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-explainuser_4.q-update_after_multiple_inserts.q-mapreduce2.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-join1.q-schema_evol_orc_nonvec_mapwork_part.q-mapjoin_decimal.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-load_dyn_part2.q-selectDistinctStar.q-vector_decimal_5.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-vectorized_parquet.q-insert_values_non_partitioned.q-orc_merge4.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_merge_stats_orc
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_archive_excludeHadoop20
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_fetch_aggregation
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_nonmr_fetch
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_analyze
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample_islocalmode_hook
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample_islocalmode_hook_use_metadata
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorized_parquet
org.apache.hadoop.hive.cli.TestContribCliDriver.testCliDriver_dboutput
org.apache.hadoop.hive.cli.TestContribNegativeCliDriver.testNegativeCliDriver_case_with_row_sequence
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucket4
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucket5
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucket6
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_disable_merge_for_bucketing
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_map_operators
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_num_buckets
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_reducers_power_two
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_list_bucket_dml_10
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge1
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge2
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge9
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge_diff_fs
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_reduce_deduplicate
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join1
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join2
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join3
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join4
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join5
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join29
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_custom_input_output_format
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby3_map_multi_distinct
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby4_map
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby6_map_skew
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby8_map
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_having
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_input14
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_join22
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_join_reorder
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_mapjoin_distinct
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_merge2

[jira] [Commented] (HIVE-13743) Data move codepath is broken with hive (2.1.0-SNAPSHOT)

2016-05-11 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281126#comment-15281126
 ] 

Ashutosh Chauhan commented on HIVE-13743:
-

[~rajesh.balamohan] Do you have a small repro for this?

> Data move codepath is broken with hive (2.1.0-SNAPSHOT)
> ---
>
> Key: HIVE-13743
> URL: https://issues.apache.org/jira/browse/HIVE-13743
> Project: Hive
>  Issue Type: Bug
>Reporter: Rajesh Balamohan
>
> Data move codepath is broken with hive 2.1.0-SNAPSHOT with hadoop 
> 2.8.0-snapshot.
> {noformat}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): Path 
> not found: /apps/hive/warehouse/tpcds_bin_partitioned_orc_1.db/date_dim1
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirEncryptionZoneOp.getEZForPath(FSDirEncryptionZoneOp.java:178)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getEZForPath(FSNamesystem.java:7336)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getEZForPath(NameNodeRpcServer.java:1973)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getEZForPath(ClientNamenodeProtocolServerSideTranslatorPB.java:1376)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:645)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2339)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2335)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1711)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2333)
> at org.apache.hadoop.ipc.Client.call(Client.java:1448)
> at org.apache.hadoop.ipc.Client.call(Client.java:1385)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
> at com.sun.proxy.$Proxy30.getEZForPath(Unknown 
> Source)/apps/hive/warehouse/tpcds_bin_partitioned_orc_200.db/
> ...
> ...
> ...
> 2016-05-11T09:40:43,760 ERROR [main]: ql.Driver (:()) - FAILED: Execution 
> Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. Unable to 
> move source 
> hdfs://xyz:8020/apps/hive/warehouse/tpcds_bin_partitioned_orc_1.db/.hive-staging_hive_2016-05-11_09-40-42_489_5056654133706433454-1/-ext-10002
>  to destination 
> hdfs://xyz:8020/apps/hive/warehouse/tpcds_bin_partitioned_orc_1.db/date_dim1
> {noformat}
> https://github.com/apache/hive/blob/26b5c7b56a4f28ce3eabc0207566cce46b29b558/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java#L2836
> hdfsEncryptionShim.isPathEncrypted(destf) in Hive could end up throwing 
> FileNotFoundException as the destf is not present yet.  This causes moveFile 
> to fail.
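
One possible guard, sketched purely as an illustration (not the actual fix; the 
helper below is hypothetical): resolve destf to its nearest existing ancestor 
before the encryption-zone check, so a not-yet-created destination cannot trigger 
FileNotFoundException.

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper: the result would be passed to
// hdfsEncryptionShim.isPathEncrypted(...) instead of the missing destination.
final class MoveTaskSketch {
  static Path nearestExistingAncestor(FileSystem fs, Path p) throws IOException {
    Path cur = p;
    while (cur != null && !fs.exists(cur)) {
      cur = cur.getParent();
    }
    return cur == null ? new Path("/") : cur;
  }
}
{code}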



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13004) Remove encryption shims

2016-05-11 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281125#comment-15281125
 ] 

Ashutosh Chauhan commented on HIVE-13004:
-

[~spena] It's time to remove the shims. What do you think?

> Remove encryption shims
> ---
>
> Key: HIVE-13004
> URL: https://issues.apache.org/jira/browse/HIVE-13004
> Project: Hive
>  Issue Type: Task
>  Components: Encryption
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-13004.patch
>
>
> It has served its purpose. Now that we don't support hadoop-1, it's no longer 
> needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13716) Improve dynamic partition loading V

2016-05-11 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-13716:

   Resolution: Fixed
Fix Version/s: 2.1.0
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks, [~lirui]

> Improve dynamic partition loading V
> ---
>
> Key: HIVE-13716
> URL: https://issues.apache.org/jira/browse/HIVE-13716
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 1.2.0, 2.0.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Fix For: 2.1.0
>
> Attachments: HIVE-13716.1.patch, HIVE-13716.patch
>
>
> Parallelize permission settings and other refactoring.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13617) LLAP: support non-vectorized execution in IO

2016-05-11 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13617:

Attachment: HIVE-13617.patch
HIVE-13617-wo-11417.patch

Updated the patch. The test now actually tests what it says it does :) Complex 
types don't work in the elevator; I filed a separate JIRA for that.

> LLAP: support non-vectorized execution in IO
> 
>
> Key: HIVE-13617
> URL: https://issues.apache.org/jira/browse/HIVE-13617
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13617-wo-11417.patch, HIVE-13617-wo-11417.patch, 
> HIVE-13617.patch, HIVE-13617.patch, HIVE-15396-with-oi.patch
>
>
> Two approaches - a separate decoding path, into rows instead of VRBs; or 
> decoding VRBs into rows on a higher level (the original LlapInputFormat). I 
> think the latter might be better - it's not a hugely important path, and perf 
> in non-vectorized case is not the best anyway, so it's better to make do with 
> much less new code and architectural disruption. 
> Some ORC patches in progress introduce an easy to reuse (or so I hope, 
> anyway) VRB-to-row conversion, so we should just use that.
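
For readers unfamiliar with the term, a hedged sketch of what a "VRB-to-row" 
conversion means, limited to long columns and ignoring several details the real 
ORC patches handle; the class below is hypothetical:

{code}
import org.apache.hadoop.hive.ql.exec.vector.LongColumnVector;
import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;

// Hypothetical sketch: turn one VectorizedRowBatch into plain Object[] rows.
final class VrbToRowsSketch {
  static Object[][] toRows(VectorizedRowBatch batch) {
    Object[][] rows = new Object[batch.size][batch.cols.length];
    for (int i = 0; i < batch.size; i++) {
      int rowIdx = batch.selectedInUse ? batch.selected[i] : i;
      for (int c = 0; c < batch.cols.length; c++) {
        LongColumnVector col = (LongColumnVector) batch.cols[c];  // assumes long columns only
        int valueIdx = col.isRepeating ? 0 : rowIdx;
        boolean isNull = !col.noNulls && col.isNull[valueIdx];
        rows[i][c] = isNull ? null : col.vector[valueIdx];
      }
    }
    return rows;
  }
}
{code}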



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13656) need to set direct memory limit higher in LlapServiceDriver for certain edge case configurations

2016-05-11 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13656:

Attachment: HIVE-13656.addendum.patch

[~vikram.dixit] can you take a look at the addendum patch? Thanks

> need to set direct memory limit higher in LlapServiceDriver for certain edge 
> case configurations
> 
>
> Key: HIVE-13656
> URL: https://issues.apache.org/jira/browse/HIVE-13656
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 2.1.0
>
> Attachments: HIVE-13656.addendum.patch, HIVE-13656.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13608) We should provide better error message while constraints with duplicate names are created

2016-05-11 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-13608:
-
Status: Patch Available  (was: Open)

> We should provide better error message while constraints with duplicate names 
> are created
> -
>
> Key: HIVE-13608
> URL: https://issues.apache.org/jira/browse/HIVE-13608
> Project: Hive
>  Issue Type: Bug
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-13608.1.patch, HIVE-13608.2.patch
>
>
> {code}
> PREHOOK: query: create table t1(x int, constraint pk1 primary key (x) disable 
> novalidate)
> PREHOOK: type: CREATETABLE
> PREHOOK: Output: database:default
> PREHOOK: Output: default@t1
> POSTHOOK: query: create table t1(x int, constraint pk1 primary key (x) 
> disable novalidate)
> POSTHOOK: type: CREATETABLE
> POSTHOOK: Output: database:default
> POSTHOOK: Output: default@t1
> PREHOOK: query: create table t2(x int, constraint pk1 primary key (x) disable 
> novalidate)
> PREHOOK: type: CREATETABLE
> PREHOOK: Output: database:default
> PREHOOK: Output: default@t2
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:For direct 
> MetaStore DB connections, we don't support retries at the client level.)
> {code}
> In the above case, it seems like useful error message is lost. It looks like 
> a  generic problem with metastore server/client exception handling and 
> message propagation. Seems like exception parsing logic of 
> RetryingMetaStoreClient::invoke() needs to be updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13608) We should provide better error message while constraints with duplicate names are created

2016-05-11 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-13608:
-
Attachment: HIVE-13608.2.patch

[~ashutoshc] Can you please take a look at the patch? Please verify the 
modified regex.

Thanks
Hari

> We should provide better error message while constraints with duplicate names 
> are created
> -
>
> Key: HIVE-13608
> URL: https://issues.apache.org/jira/browse/HIVE-13608
> Project: Hive
>  Issue Type: Bug
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-13608.1.patch, HIVE-13608.2.patch
>
>
> {code}
> PREHOOK: query: create table t1(x int, constraint pk1 primary key (x) disable 
> novalidate)
> PREHOOK: type: CREATETABLE
> PREHOOK: Output: database:default
> PREHOOK: Output: default@t1
> POSTHOOK: query: create table t1(x int, constraint pk1 primary key (x) 
> disable novalidate)
> POSTHOOK: type: CREATETABLE
> POSTHOOK: Output: database:default
> POSTHOOK: Output: default@t1
> PREHOOK: query: create table t2(x int, constraint pk1 primary key (x) disable 
> novalidate)
> PREHOOK: type: CREATETABLE
> PREHOOK: Output: database:default
> PREHOOK: Output: default@t2
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:For direct 
> MetaStore DB connections, we don't support retries at the client level.)
> {code}
> In the above case, it seems like useful error message is lost. It looks like 
> a  generic problem with metastore server/client exception handling and 
> message propagation. Seems like exception parsing logic of 
> RetryingMetaStoreClient::invoke() needs to be updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13608) We should provide better error message while constraints with duplicate names are created

2016-05-11 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-13608:
-
Status: Open  (was: Patch Available)

> We should provide better error message while constraints with duplicate names 
> are created
> -
>
> Key: HIVE-13608
> URL: https://issues.apache.org/jira/browse/HIVE-13608
> Project: Hive
>  Issue Type: Bug
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-13608.1.patch
>
>
> {code}
> PREHOOK: query: create table t1(x int, constraint pk1 primary key (x) disable 
> novalidate)
> PREHOOK: type: CREATETABLE
> PREHOOK: Output: database:default
> PREHOOK: Output: default@t1
> POSTHOOK: query: create table t1(x int, constraint pk1 primary key (x) 
> disable novalidate)
> POSTHOOK: type: CREATETABLE
> POSTHOOK: Output: database:default
> POSTHOOK: Output: default@t1
> PREHOOK: query: create table t2(x int, constraint pk1 primary key (x) disable 
> novalidate)
> PREHOOK: type: CREATETABLE
> PREHOOK: Output: database:default
> PREHOOK: Output: default@t2
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:For direct 
> MetaStore DB connections, we don't support retries at the client level.)
> {code}
> In the above case, it seems like useful error message is lost. It looks like 
> a  generic problem with metastore server/client exception handling and 
> message propagation. Seems like exception parsing logic of 
> RetryingMetaStoreClient::invoke() needs to be updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HIVE-13656) need to set direct memory limit higher in LlapServiceDriver for certain edge case configurations

2016-05-11 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reopened HIVE-13656:
-

This patch is broken all over the place

> need to set direct memory limit higher in LlapServiceDriver for certain edge 
> case configurations
> 
>
> Key: HIVE-13656
> URL: https://issues.apache.org/jira/browse/HIVE-13656
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 2.1.0
>
> Attachments: HIVE-13656.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13744) LLAP IO - add complex types support

2016-05-11 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13744:

Description: 
Recently, complex type column vectors were added to Hive. We should use them in 
IO elevator.
Vectorization itself doesn't support complex types (yet), but this would be 
useful when it does, also it will enable LLAP IO elevator to be used in 
non-vectorized context with complex types after HIVE-13617

  was:
Recently, vectorized column vectors were added to Hive. We should use them in 
IO elevator.
Vectorization itself doesn't support complex types (yet), but this would be 
useful when it does, also it will enable LLAP IO elevator to be used in 
non-vectorized context with complex types after HIVE-13617


> LLAP IO - add complex types support
> ---
>
> Key: HIVE-13744
> URL: https://issues.apache.org/jira/browse/HIVE-13744
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Prasanth Jayachandran
>
> Recently, complex type column vectors were added to Hive. We should use them 
> in IO elevator.
> Vectorization itself doesn't support complex types (yet), but this would be 
> useful when it does, also it will enable LLAP IO elevator to be used in 
> non-vectorized context with complex types after HIVE-13617



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-13617) LLAP: support non-vectorized execution in IO

2016-05-11 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280951#comment-15280951
 ] 

Sergey Shelukhin edited comment on HIVE-13617 at 5/12/16 12:21 AM:
---

Preliminary patch. I need to double check that results from simple types are 
correct.
Also complex types do not appear to work in the elevator, would need to double 
check that too.

I gave up on trying to make a universal converter because it's nigh impossible 
to substitute the OIs... attaching the terminated-in-progress patch for that 
too, in case it's ever needed.


was (Author: sershe):
Preliminary patch. I need to double check that results from simple types are 
correct.
Also complex types do not appear to work in the elevator, would need to double 
check that too.

I gave up on trying to make a universal converted because it's nigh impossible 
to substitute the OIs... attaching the terminated-in-progress patch for that 
too, in case it's ever needed.

> LLAP: support non-vectorized execution in IO
> 
>
> Key: HIVE-13617
> URL: https://issues.apache.org/jira/browse/HIVE-13617
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13617-wo-11417.patch, HIVE-13617.patch, 
> HIVE-15396-with-oi.patch
>
>
> Two approaches - a separate decoding path, into rows instead of VRBs; or 
> decoding VRBs into rows on a higher level (the original LlapInputFormat). I 
> think the latter might be better - it's not a hugely important path, and perf 
> in non-vectorized case is not the best anyway, so it's better to make do with 
> much less new code and architectural disruption. 
> Some ORC patches in progress introduce an easy to reuse (or so I hope, 
> anyway) VRB-to-row conversion, so we should just use that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13458) Heartbeater doesn't fail query when heartbeat fails

2016-05-11 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-13458:
-
   Resolution: Fixed
Fix Version/s: 2.1.0
   1.3.0
   Status: Resolved  (was: Patch Available)

> Heartbeater doesn't fail query when heartbeat fails
> ---
>
> Key: HIVE-13458
> URL: https://issues.apache.org/jira/browse/HIVE-13458
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.1.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
> Fix For: 1.3.0, 2.1.0
>
> Attachments: HIVE-13458.1.patch, HIVE-13458.2.patch, 
> HIVE-13458.3.patch, HIVE-13458.4.patch, HIVE-13458.5.patch, 
> HIVE-13458.6.patch, HIVE-13458.7.patch, HIVE-13458.8.patch, 
> HIVE-13458.9.patch, HIVE-13458.branch-1.patch
>
>
> When a heartbeat fails to locate a lock, it should fail the current query. 
> That doesn't happen, which is a bug.
> Another thing is, we need to make sure stopHeartbeat really stops the 
> heartbeat, i.e. no additional heartbeat will be sent, since that will break 
> the assumption and cause the query to fail.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-13667) Improve performance for ServiceInstanceSet.getByHost

2016-05-11 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran reassigned HIVE-13667:


Assignee: Prasanth Jayachandran

> Improve performance for ServiceInstanceSet.getByHost
> 
>
> Key: HIVE-13667
> URL: https://issues.apache.org/jira/browse/HIVE-13667
> Project: Hive
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Assignee: Prasanth Jayachandran
>
> ServiceInstanceSet.getByHost is used for scheduling local tasks as well as 
> constructing the log URL.
> It ends up traversing all hosts on each lookup. This should be avoided.
> cc [~prasanth_j]
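
A hedged sketch of the kind of index the description suggests (not the committed 
change; the class and its synchronization choices are assumptions):

{code}
import java.util.*;

// Hypothetical host -> instances index so getByHost() becomes a map lookup
// instead of a scan over every known instance.
final class HostIndexSketch<T> {
  private final Map<String, Set<T>> byHost = new HashMap<>();

  synchronized void add(String host, T instance) {
    byHost.computeIfAbsent(host, h -> new HashSet<T>()).add(instance);
  }

  synchronized Set<T> getByHost(String host) {
    return byHost.getOrDefault(host, Collections.<T>emptySet());
  }
}
{code}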



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13458) Heartbeater doesn't fail query when heartbeat fails

2016-05-11 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-13458:
-
Attachment: HIVE-13458.branch-1.patch

> Heartbeater doesn't fail query when heartbeat fails
> ---
>
> Key: HIVE-13458
> URL: https://issues.apache.org/jira/browse/HIVE-13458
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.1.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
> Attachments: HIVE-13458.1.patch, HIVE-13458.2.patch, 
> HIVE-13458.3.patch, HIVE-13458.4.patch, HIVE-13458.5.patch, 
> HIVE-13458.6.patch, HIVE-13458.7.patch, HIVE-13458.8.patch, 
> HIVE-13458.9.patch, HIVE-13458.branch-1.patch
>
>
> When a heartbeat fails to locate a lock, it should fail the current query. 
> That doesn't happen, which is a bug.
> Another thing is, we need to make sure stopHeartbeat really stops the 
> heartbeat, i.e. no additional heartbeat will be sent, since that will break 
> the assumption and cause the query to fail.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13507) Improved logging for ptest

2016-05-11 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280967#comment-15280967
 ] 

Ashutosh Chauhan commented on HIVE-13507:
-

I see. Also, I see many JUnit tests which have timeouts set getting timed out, 
such as TestDynamicPartitionPruner, TestFirstInFirstOutComparator, 
TestHBaseImport, TestHostAffinitySplitLocationProvider, etc. Most of these have 
nothing to do with the metastore.
Are these machines under load?

> Improved logging for ptest
> --
>
> Key: HIVE-13507
> URL: https://issues.apache.org/jira/browse/HIVE-13507
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Fix For: 2.1.0
>
> Attachments: HIVE-13507.01.patch, HIVE-13507.02.patch
>
>
> NO PRECOMMIT TESTS
> Include information about batch runtimes, outlier lists, host completion 
> times, etc. Try identifying tests which cause the build to take a long time 
> while holding onto resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13670) Improve Beeline connect/reconnect semantics

2016-05-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280964#comment-15280964
 ] 

Hive QA commented on HIVE-13670:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12803116/HIVE-13670.4.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 33 failed/errored test(s), 9246 tests 
executed
*Failed tests:*
{noformat}
TestHWISessionManager - did not produce a TEST-*.xml file
TestMiniLlapCliDriver - did not produce a TEST-*.xml file
TestMiniTezCliDriver-constprog_dpp.q-dynamic_partition_pruning.q-vectorization_10.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-explainuser_4.q-update_after_multiple_inserts.q-mapreduce2.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-join1.q-schema_evol_orc_nonvec_mapwork_part.q-mapjoin_decimal.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-vectorization_16.q-vector_decimal_round.q-orc_merge6.q-and-12-more
 - did not produce a TEST-*.xml file
TestNegativeCliDriver-udf_invalid.q-nopart_insert.q-insert_into_with_schema.q-and-727-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_with_constraints
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.llap.tez.TestConverters.testFragmentSpecToTaskSpec
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskCommunicator.testFinishableStateUpdateFailure
org.apache.hadoop.hive.metastore.TestFilterHooks.org.apache.hadoop.hive.metastore.TestFilterHooks
org.apache.hadoop.hive.metastore.TestHiveMetaStoreGetMetaConf.org.apache.hadoop.hive.metastore.TestHiveMetaStoreGetMetaConf
org.apache.hadoop.hive.metastore.TestMetaStoreEndFunctionListener.testEndFunctionListener
org.apache.hadoop.hive.metastore.TestMetaStoreEventListenerOnlyOnCommit.testEventStatus
org.apache.hadoop.hive.metastore.TestMetaStoreInitListener.testMetaStoreInitListener
org.apache.hadoop.hive.metastore.TestMetaStoreMetrics.org.apache.hadoop.hive.metastore.TestMetaStoreMetrics
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAddPartitionWithCommas
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAddPartitionWithUnicode
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAddPartitionWithValidPartVal
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithCommas
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithUnicode
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithValidCharacters
org.apache.hadoop.hive.metastore.TestRetryingHMSHandler.testRetryingHMSHandler
org.apache.hadoop.hive.metastore.hbase.TestHBaseSchemaTool.oneMondoTest
org.apache.hadoop.hive.metastore.txn.TestCompactionTxnHandler.testRevokeTimedOutWorkers
org.apache.hadoop.hive.ql.exec.tez.TestDynamicPartitionPruner.testSingleSourceMultipleFiltersOrdering1
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProviderWithACL.testSimplePrivileges
org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc.org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc
org.apache.hive.service.TestHS2ImpersonationWithRemoteMS.org.apache.hive.service.TestHS2ImpersonationWithRemoteMS
org.apache.hive.service.cli.session.TestHiveSessionImpl.testLeakOperationHandle
org.apache.hive.spark.client.TestSparkClient.testSyncRpc
{noformat}

Test results: 
http://ec2-54-177-240-2.us-west-1.compute.amazonaws.com/job/PreCommit-HIVE-MASTER-Build/239/testReport
Console output: 
http://ec2-54-177-240-2.us-west-1.compute.amazonaws.com/job/PreCommit-HIVE-MASTER-Build/239/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-239/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 33 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12803116 - PreCommit-HIVE-MASTER-Build

> Improve Beeline connect/reconnect semantics
> ---
>
> Key: HIVE-13670
> URL: https://issues.apache.org/jira/browse/HIVE-13670
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.1.0
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-13670.2.patch, HIVE-13670.3.patch, 
> HIVE-13670.4.patch, HIVE-13670.patch
>
>
> For most users of beeline, 

[jira] [Updated] (HIVE-13742) Hive ptest has many failures due to metastore connection refused

2016-05-11 Thread Sergio Peña (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-13742:
---
Attachment: hive.log

Here's the hive.log from the slave when I run {{TestFilterHooks}} manually.

> Hive ptest has many failures due to metastore connection refused
> 
>
> Key: HIVE-13742
> URL: https://issues.apache.org/jira/browse/HIVE-13742
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergio Peña
> Attachments: hive.log
>
>
> The following exception is thrown on the Hive ptest with many tests, and it 
> is due to some Derby database issues:
> {noformat}
> 016-05-11T15:46:25,123 INFO  [Thread-2[]]: metastore.HiveMetaStore 
> (HiveMetaStore.java:newRawStore(563)) - 0: Opening raw store with 
> implementation class:org.apache.hadoop.hive.metastore.ObjectStore
> 2016-05-11T15:46:25,175 INFO  [Thread-2[]]: metastore.ObjectStore 
> (ObjectStore.java:initialize(324)) - ObjectStore, initialize called
> 2016-05-11T15:46:25,966 DEBUG [Thread-2[]]: bonecp.BoneCPDataSource 
> (BoneCPDataSource.java:getConnection(119)) - JDBC URL = 
> jdbc:derby:;databaseName=/home/hiveptest/54.177.132.113-hiveptest-1/apache-github-source-source/itests/hive-unit/target/tmpTestFilterHooksmetastore_db;create=true,
>  Username = APP, partitions = 1, max (per partition) = 10, min (per 
> partition) = 0, idle max age = 60 min, idle test period = 240 min, strategy = 
> DEFAULT
> 2016-05-11T15:46:26,003 ERROR [Thread-2[]]: Datastore.Schema 
> (Log4JLogger.java:error(125)) - Failed initialising database.
> org.datanucleus.exceptions.NucleusDataStoreException: Unable to open a test 
> connection to the given database. JDBC url = 
> jdbc:derby:;databaseName=/home/hiveptest/54.177.132.113-hiveptest-1/apache-github-source-source/itests/hive-unit/target/tmpTestFilterHooksmetastore_db;create=true,
>  username = APP. Terminating connection pool (set lazyInit to true if you 
> expect to start your database after your app). Original Exception: --
> java.sql.SQLException: Failed to create database 
> '/home/hiveptest/54.177.132.113-hiveptest-1/apache-github-source-source/itests/hive-unit/target/tmpTestFilterHooksmetastore_db',
>  see the next exception for details.
>   at 
> org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown 
> Source)
>   at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
>   at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
>   at org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown 
> Source)
>   at org.apache.derby.impl.jdbc.EmbedConnection.(Unknown Source)
>   at org.apache.derby.impl.jdbc.EmbedConnection40.(Unknown Source)
>   at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown Source)
>   at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
>   at org.apache.derby.jdbc.Driver20.connect(Unknown Source)
>   at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
>   at java.sql.DriverManager.getConnection(DriverManager.java:664)
>   at java.sql.DriverManager.getConnection(DriverManager.java:208)
>   at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:361)
>   at com.jolbox.bonecp.BoneCP.(BoneCP.java:416)
>   at 
> com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:120)
>   at 
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:483)
>   at 
> org.datanucleus.store.rdbms.RDBMSStoreManager.(RDBMSStoreManager.java:296)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
>   at 
> org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:606)
>   at 
> org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301)
>   at 
> org.datanucleus.NucleusContextHelper.createStoreManagerForProperties(NucleusContextHelper.java:133)
>   at 
> org.datanucleus.PersistenceNucleusContextImpl.initialise(PersistenceNucleusContextImpl.java:420)
>   at 
> org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:821)
>   at 
> org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:338)
>   at 
> org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:217)
>   

[jira] [Commented] (HIVE-13742) Hive ptest has many failures due to metastore connection refused

2016-05-11 Thread Sergio Peña (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280954#comment-15280954
 ] 

Sergio Peña commented on HIVE-13742:


[~sseth] [~ashutoshc] [~thejas] [~szehon] FYI

> Hive ptest has many failures due to metastore connection refused
> 
>
> Key: HIVE-13742
> URL: https://issues.apache.org/jira/browse/HIVE-13742
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergio Peña
>
> The following exception is thrown on the Hive ptest with many tests, and it 
> is due to some Derby database issues:
> {noformat}
> 016-05-11T15:46:25,123 INFO  [Thread-2[]]: metastore.HiveMetaStore 
> (HiveMetaStore.java:newRawStore(563)) - 0: Opening raw store with 
> implementation class:org.apache.hadoop.hive.metastore.ObjectStore
> 2016-05-11T15:46:25,175 INFO  [Thread-2[]]: metastore.ObjectStore 
> (ObjectStore.java:initialize(324)) - ObjectStore, initialize called
> 2016-05-11T15:46:25,966 DEBUG [Thread-2[]]: bonecp.BoneCPDataSource 
> (BoneCPDataSource.java:getConnection(119)) - JDBC URL = 
> jdbc:derby:;databaseName=/home/hiveptest/54.177.132.113-hiveptest-1/apache-github-source-source/itests/hive-unit/target/tmpTestFilterHooksmetastore_db;create=true,
>  Username = APP, partitions = 1, max (per partition) = 10, min (per 
> partition) = 0, idle max age = 60 min, idle test period = 240 min, strategy = 
> DEFAULT
> 2016-05-11T15:46:26,003 ERROR [Thread-2[]]: Datastore.Schema 
> (Log4JLogger.java:error(125)) - Failed initialising database.
> org.datanucleus.exceptions.NucleusDataStoreException: Unable to open a test 
> connection to the given database. JDBC url = 
> jdbc:derby:;databaseName=/home/hiveptest/54.177.132.113-hiveptest-1/apache-github-source-source/itests/hive-unit/target/tmpTestFilterHooksmetastore_db;create=true,
>  username = APP. Terminating connection pool (set lazyInit to true if you 
> expect to start your database after your app). Original Exception: --
> java.sql.SQLException: Failed to create database 
> '/home/hiveptest/54.177.132.113-hiveptest-1/apache-github-source-source/itests/hive-unit/target/tmpTestFilterHooksmetastore_db',
>  see the next exception for details.
>   at 
> org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown 
> Source)
>   at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
>   at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
>   at org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown 
> Source)
>   at org.apache.derby.impl.jdbc.EmbedConnection.(Unknown Source)
>   at org.apache.derby.impl.jdbc.EmbedConnection40.(Unknown Source)
>   at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown Source)
>   at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
>   at org.apache.derby.jdbc.Driver20.connect(Unknown Source)
>   at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
>   at java.sql.DriverManager.getConnection(DriverManager.java:664)
>   at java.sql.DriverManager.getConnection(DriverManager.java:208)
>   at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:361)
>   at com.jolbox.bonecp.BoneCP.(BoneCP.java:416)
>   at 
> com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:120)
>   at 
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:483)
>   at 
> org.datanucleus.store.rdbms.RDBMSStoreManager.(RDBMSStoreManager.java:296)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
>   at 
> org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:606)
>   at 
> org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301)
>   at 
> org.datanucleus.NucleusContextHelper.createStoreManagerForProperties(NucleusContextHelper.java:133)
>   at 
> org.datanucleus.PersistenceNucleusContextImpl.initialise(PersistenceNucleusContextImpl.java:420)
>   at 
> org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:821)
>   at 
> org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:338)
>   at 
> org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:217)
>   at 

[jira] [Updated] (HIVE-13617) LLAP: support non-vectorized execution in IO

2016-05-11 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13617:

Attachment: HIVE-13617.patch
HIVE-15396-with-oi.patch
HIVE-13617-wo-11417.patch

Preliminary patch. I need to double check that results from simple types are 
correct.
Also complex types do not appear to work in the elevator, would need to double 
check that too.

I gave up on trying to make a universal converted because it's nigh impossible 
to substitute the OIs... attaching the terminated-in-progress patch for that 
too, in case it's ever needed.

> LLAP: support non-vectorized execution in IO
> 
>
> Key: HIVE-13617
> URL: https://issues.apache.org/jira/browse/HIVE-13617
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13617-wo-11417.patch, HIVE-13617.patch, 
> HIVE-15396-with-oi.patch
>
>
> Two approaches - a separate decoding path, into rows instead of VRBs; or 
> decoding VRBs into rows on a higher level (the original LlapInputFormat). I 
> think the latter might be better - it's not a hugely important path, and perf 
> in non-vectorized case is not the best anyway, so it's better to make do with 
> much less new code and architectural disruption. 
> Some ORC patches in progress introduce an easy to reuse (or so I hope, 
> anyway) VRB-to-row conversion, so we should just use that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13617) LLAP: support non-vectorized execution in IO

2016-05-11 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13617:

Status: Patch Available  (was: Open)

> LLAP: support non-vectorized execution in IO
> 
>
> Key: HIVE-13617
> URL: https://issues.apache.org/jira/browse/HIVE-13617
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13617-wo-11417.patch, HIVE-13617.patch, 
> HIVE-15396-with-oi.patch
>
>
> Two approaches - a separate decoding path, into rows instead of VRBs; or 
> decoding VRBs into rows on a higher level (the original LlapInputFormat). I 
> think the latter might be better - it's not a hugely important path, and perf 
> in non-vectorized case is not the best anyway, so it's better to make do with 
> much less new code and architectural disruption. 
> Some ORC patches in progress introduce an easy to reuse (or so I hope, 
> anyway) VRB-to-row conversion, so we should just use that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13507) Improved logging for ptest

2016-05-11 Thread Sergio Peña (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280949#comment-15280949
 ] 

Sergio Peña commented on HIVE-13507:


I see that many of the issues are metastore connection errors, and this is the 
culprit of all of them:

{noformat}
Caused by: java.sql.SQLException: Failed to create database 
'/home/hiveptest/54.177.132.113-hiveptest-1/apache-github-source-source/itests/hive-unit/target/tmpTestFilterHooksmetastore_db',
 see the next exception for details.
at 
org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
at 
org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
 Source)
... 58 more
Caused by: java.sql.SQLException: The database directory 
'/home/hiveptest/54.177.132.113-hiveptest-1/apache-github-source-source/itests/hive-unit/target/tmpTestFilterHooksmetastore_db'
 exists. However, it does not contain the expected 'service.properties' file. 
Perhaps Derby was brought down in the middle of creating this database. You may 
want to delete this directory and try creating the database again.
{noformat}

I deleted the directory directly on the slave, and the test now passes. I'd 
like to create a Jira to fix this, but I don't know whether it's ptest or another 
test that is corrupting the metastore.

> Improved logging for ptest
> --
>
> Key: HIVE-13507
> URL: https://issues.apache.org/jira/browse/HIVE-13507
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Fix For: 2.1.0
>
> Attachments: HIVE-13507.01.patch, HIVE-13507.02.patch
>
>
> NO PRECOMMIT TESTS
> Include information about batch runtimes, outlier lists, host completion 
> times, etc. Try identifying tests which cause the build to take a long time 
> while holding onto resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13507) Improved logging for ptest

2016-05-11 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280941#comment-15280941
 ] 

Ashutosh Chauhan commented on HIVE-13507:
-

[~spena] Any ideas?

> Improved logging for ptest
> --
>
> Key: HIVE-13507
> URL: https://issues.apache.org/jira/browse/HIVE-13507
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Fix For: 2.1.0
>
> Attachments: HIVE-13507.01.patch, HIVE-13507.02.patch
>
>
> NO PRECOMMIT TESTS
> Include information about batch runtimes, outlier lists, host completion 
> times, etc. Try identifying tests which cause the build to take a long time 
> while holding onto resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13565) thrift change

2016-05-11 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-13565:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> thrift change
> -
>
> Key: HIVE-13565
> URL: https://issues.apache.org/jira/browse/HIVE-13565
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Fix For: 2.1.0
>
> Attachments: HIVE-13565.01.patch, HIVE-13565.02.patch, 
> HIVE-13565.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13565) thrift change

2016-05-11 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-13565:
---
Fix Version/s: 2.1.0

> thrift change
> -
>
> Key: HIVE-13565
> URL: https://issues.apache.org/jira/browse/HIVE-13565
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Fix For: 2.1.0
>
> Attachments: HIVE-13565.01.patch, HIVE-13565.02.patch, 
> HIVE-13565.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13565) thrift change

2016-05-11 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280938#comment-15280938
 ] 

Pengcheng Xiong commented on HIVE-13565:


Cannot repro the failures in SparkCliDriver and SparkOnYarnCliDriver. 
Pushed to master. Thanks [~ashutoshc] for the review.

> thrift change
> -
>
> Key: HIVE-13565
> URL: https://issues.apache.org/jira/browse/HIVE-13565
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Fix For: 2.1.0
>
> Attachments: HIVE-13565.01.patch, HIVE-13565.02.patch, 
> HIVE-13565.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13565) thrift change

2016-05-11 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-13565:
---
Affects Version/s: 2.0.0

> thrift change
> -
>
> Key: HIVE-13565
> URL: https://issues.apache.org/jira/browse/HIVE-13565
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Fix For: 2.1.0
>
> Attachments: HIVE-13565.01.patch, HIVE-13565.02.patch, 
> HIVE-13565.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13449) LLAP: HS2 should get the token directly, rather than from LLAP

2016-05-11 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280936#comment-15280936
 ] 

Sergey Shelukhin commented on HIVE-13449:
-

[~hagleitn] [~vikram.dixit] ping?

> LLAP: HS2 should get the token directly, rather than from LLAP
> --
>
> Key: HIVE-13449
> URL: https://issues.apache.org/jira/browse/HIVE-13449
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13449.01.patch, HIVE-13449.02.WIP.patch, 
> HIVE-13449.02.patch, HIVE-13449.03.patch, HIVE-13449.patch
>
>
> HS2 doesn't need a roundtrip to LLAP; it can instantiate the SecretManager 
> directly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13549) Remove jdk version specific out files from Hive2

2016-05-11 Thread Mohit Sabharwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohit Sabharwal updated HIVE-13549:
---
Attachment: HIVE-13549-1-java8.patch

> Remove jdk version specific out files from Hive2
> 
>
> Key: HIVE-13549
> URL: https://issues.apache.org/jira/browse/HIVE-13549
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Mohit Sabharwal
>Assignee: Mohit Sabharwal
> Attachments: HIVE-13549-1-java8.patch, HIVE-13549-java8.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13076) Implement FK/PK "rely novalidate" constraints for better CBO

2016-05-11 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-13076:
-
Attachment: AddingPKFKconstraints.pdf

> Implement FK/PK "rely novalidate" constraints for better CBO
> 
>
> Key: HIVE-13076
> URL: https://issues.apache.org/jira/browse/HIVE-13076
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO, Logical Optimizer
>Reporter: Ruslan Dautkhanov
>Assignee: Hari Sankar Sivarama Subramaniyan
> Fix For: 2.1.0
>
> Attachments: AddingPKFKconstraints.pdf
>
>
> Oracle has a "RELY NOVALIDATE" option for constraints. It could be easier for 
> Hive to start with something like that for PK/FK constraints, so the CBO has 
> more information for optimizations. It does not have to actually check whether 
> the constraint relationship holds; it can just "rely" on the constraint. 
> https://docs.oracle.com/database/121/SQLRF/clauses002.htm#sthref2289
> This would help with join cardinality estimates, and with cases like 
> this - https://issues.apache.org/jira/browse/HIVE-13019



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13076) Implement FK/PK "rely novalidate" constraints for better CBO

2016-05-11 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280891#comment-15280891
 ] 

Hari Sankar Sivarama Subramaniyan commented on HIVE-13076:
--

I have added the basic documentation as a pdf.

Thanks
Hari

> Implement FK/PK "rely novalidate" constraints for better CBO
> 
>
> Key: HIVE-13076
> URL: https://issues.apache.org/jira/browse/HIVE-13076
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO, Logical Optimizer
>Reporter: Ruslan Dautkhanov
>Assignee: Hari Sankar Sivarama Subramaniyan
> Fix For: 2.1.0
>
>
> Oracle has a "RELY NOVALIDATE" option for constraints. It could be easier for 
> Hive to start with something like that for PK/FK constraints, so the CBO has 
> more information for optimizations. It does not have to actually check whether 
> the constraint relationship holds; it can just "rely" on the constraint. 
> https://docs.oracle.com/database/121/SQLRF/clauses002.htm#sthref2289
> This would help with join cardinality estimates, and with cases like 
> this - https://issues.apache.org/jira/browse/HIVE-13019



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13507) Improved logging for ptest

2016-05-11 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280880#comment-15280880
 ] 

Ashutosh Chauhan commented on HIVE-13507:
-

I ran the following batches locally, which were killed in a recent run of 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-238/failed/
All of them ran successfully locally. Wondering what might have caused this:

mvn test -Dtest=TestMiniTezCliDriver -Dtest.output.overwrite=true 
-Dqfile=mapjoin_mapjoin.q,insert_into1.q,vector_decimal_2.q,explainuser_1.q,explainuser_3.q,vectorization_short_regress.q,insert_values_dynamic_partitioned.q,delete_where_no_match.q,limit_pushdown.q,orc_merge_incompat2.q,vectorized_date_funcs.q,tez_union_multiinsert.q,vectorization_part.q,vector_interval_mapjoin.q,vectorized_timestamp_funcs.q

 mvn test -Dtest.output.overwrite=true -Dtest=TestMiniTezCliDriver 
-Dqfile=load_dyn_part2.q,selectDistinctStar.q,vector_decimal_5.q,schema_evol_orc_acid_mapwork_table.q,cbo_stats.q,update_all_types.q,auto_sortmerge_join_5.q,vector_outer_join2.q,vectorization_3.q,schema_evol_orc_vec_mapwork_part_all_complex.q,vectorization_0.q,vector_groupby_3.q,tez_union_decimal.q,insert_orig_table.q,ptf_matchpath.q

 mvn test -Dtest.output.overwrite=true -Dtest=TestMiniTezCliDriver 
-Dqfile=groupby2.q,tez_dynpart_hashjoin_1.q,custom_input_output_format.q,schema_evol_orc_nonvec_fetchwork_table.q,schema_evol_orc_nonvec_mapwork_part_all_complex.q,tez_multi_union.q,auto_sortmerge_join_14.q,vector_between_in.q,vector_char_4.q,dynamic_partition_pruning_2.q,vector_decimal_math_funcs.q,vector_char_simple.q,auto_sortmerge_join_8.q,schema_evol_orc_nonvec_mapwork_table.q,merge2.q


> Improved logging for ptest
> --
>
> Key: HIVE-13507
> URL: https://issues.apache.org/jira/browse/HIVE-13507
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Fix For: 2.1.0
>
> Attachments: HIVE-13507.01.patch, HIVE-13507.02.patch
>
>
> NO PRECOMMIT TESTS
> Include information about batch runtimes, outlier lists, host completion 
> times, etc. Try identifying tests which cause the build to take a long time 
> while holding onto resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13628) Support for permanent functions - error handling if no restart

2016-05-11 Thread Vikram Dixit K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Dixit K updated HIVE-13628:
--
Status: Patch Available  (was: Open)

> Support for permanent functions - error handling if no restart
> --
>
> Key: HIVE-13628
> URL: https://issues.apache.org/jira/browse/HIVE-13628
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 2.1.0
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Attachments: HIVE-13628.1.patch, HIVE-13628.2.patch
>
>
> Support for permanent functions - error handling if no restart



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13363) Add hive.metastore.token.signature property to HiveConf

2016-05-11 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-13363:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.1.0
   Status: Resolved  (was: Patch Available)

Committed to master. Thanks Anthony!

> Add hive.metastore.token.signature property to HiveConf
> ---
>
> Key: HIVE-13363
> URL: https://issues.apache.org/jira/browse/HIVE-13363
> Project: Hive
>  Issue Type: Improvement
>Reporter: Anthony Hsu
>Assignee: Anthony Hsu
> Fix For: 2.1.0
>
> Attachments: HIVE-13363.1.patch, HIVE-13363.2.patch
>
>
> I noticed that the {{hive.metastore.token.signature}} property is not defined 
> in HiveConf.java, but hardcoded everywhere it's used in the Hive codebase.
> [HIVE-2963] fixes this but was never committed due to being resolved as a 
> duplicate ticket.
> We should add {{hive.metastore.token.signature}} to HiveConf.java to 
> centralize its definition and make the property more discoverable (it's 
> useful to set it when talking to multiple metastores).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11793) SHOW LOCKS with DbTxnManager ignores filter options

2016-05-11 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-11793:
-
   Resolution: Fixed
Fix Version/s: 2.1.0
   1.3.0
   Status: Resolved  (was: Patch Available)

Committed to master and branch-1. Thanks Eugene for the review!
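
For anyone following along, a rough sketch of the filter forms the wiki describes 
and that this change makes effective with DbTxnManager (the table and partition 
names below are made up for illustration only):

{code}
-- list all current locks
SHOW LOCKS;

-- restrict the listing to one table, or to a single partition of it
SHOW LOCKS t1;
SHOW LOCKS t2p PARTITION (ds='today');

-- EXTENDED adds per-lock detail to the output
SHOW LOCKS t1 EXTENDED;
{code}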

> SHOW LOCKS with DbTxnManager ignores filter options
> ---
>
> Key: HIVE-11793
> URL: https://issues.apache.org/jira/browse/HIVE-11793
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Wei Zheng
>Priority: Minor
> Fix For: 1.3.0, 2.1.0
>
> Attachments: HIVE-11793.1.patch, HIVE-11793.2.patch, 
> HIVE-11793.3.patch, HIVE-11793.4.patch, HIVE-11793.5.patch, 
> HIVE-11793.6.patch, HIVE-11793.7.patch, HIVE-11793.8.patch
>
>
> https://cwiki.apache.org/confluence/display/Hive/Locking and 
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-ShowLocks
>  list various options that can be used with SHOW LOCKS, e.g. 
> When ACID is enabled, all these options are ignored and a full list is 
> returned.
> (also only ext lock id is shown, int lock id is not).
> see DDLTask.showLocks() and TxnHandler.showLocks()
> requires extending ShowLocksRequest which is a Thrift object



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13350) Support Alter commands for Rely/NoRely novalidate for PK/FK constraints

2016-05-11 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-13350:
-
Labels: TODOC2.1  (was: )

> Support Alter commands for Rely/NoRely  novalidate for PK/FK constraints
> 
>
> Key: HIVE-13350
> URL: https://issues.apache.org/jira/browse/HIVE-13350
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO, Logical Optimizer
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-13350.1.patch, HIVE-13350.2.patch, 
> HIVE-13350.final.patch
>
>
> Support commands like :
> ALTER TABLE table2 ADD CONSTRAINT pkt2 primary key (a) disable novalidate;
> ALTER TABLE table3 ADD CONSTRAINT fk1 FOREIGN KEY ( x ) REFERENCES table2(a)  
> DISABLE NOVALIDATE RELY;
> ALTER TABLE table6 ADD CONSTRAINT fk4 FOREIGN KEY ( y ) REFERENCES table1(a)  
> DISABLE NOVALIDATE;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13530) Hive on Spark throws Kryo exception in some cases

2016-05-11 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-13530:

   Resolution: Fixed
Fix Version/s: 2.1.0
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks Szehon for the work.

> Hive on Spark throws Kryo exception in some cases
> -
>
> Key: HIVE-13530
> URL: https://issues.apache.org/jira/browse/HIVE-13530
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Szehon Ho
>Assignee: Szehon Ho
> Fix For: 2.1.0
>
> Attachments: HIVE-13530.2.patch, HIVE-13530.patch
>
>
> After recent changes, Hive on Spark throws KryoException:
> {noformat}
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: ERROR : Failed to execute spark 
> task, with exception 'java.lang.Exception(Failed to submit Spark work, please 
> retry later)'
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: java.lang.Exception: Failed to 
> submit Spark work, please retry later
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: at 
> org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.execute(RemoteHiveSparkClient.java:174)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: at 
> org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.submit(SparkSessionImpl.java:71)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: at 
> org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:103)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: at 
> org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: at 
> org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1769)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: at 
> org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1526)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: at 
> org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1305)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: at 
> org.apache.hadoop.hive.ql.Driver.run(Driver.java:1114)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: at 
> org.apache.hadoop.hive.ql.Driver.run(Driver.java:1107)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:178)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: at 
> org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:72)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: at 
> org.apache.hive.service.cli.operation.SQLOperation$2$1.run(SQLOperation.java:232)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: at 
> java.security.AccessController.doPrivileged(Native Method)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: at 
> javax.security.auth.Subject.doAs(Subject.java:415)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: at 
> org.apache.hive.service.cli.operation.SQLOperation$2.run(SQLOperation.java:245)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: at 
> java.util.concurrent.FutureTask.run(FutureTask.java:262)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: at 
> java.lang.Thread.run(Thread.java:745)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: Caused by: 
> org.apache.hive.com.esotericsoftware.kryo.KryoException: 
> java.util.ConcurrentModificationException
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: Serialization trace:
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: classes 
> (sun.misc.Launcher$AppClassLoader)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: classloader 
> (java.security.ProtectionDomain)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: context 
> (java.security.AccessControlContext)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: acc (java.net.URLClassLoader)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: classLoader 
> (org.apache.hadoop.hive.conf.HiveConf)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: conf 
> (org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: metrics 
> (org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics$CodahaleMetricsScope)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: openScopes 
> (org.apache.hadoop.hive.ql.log.PerfLogger)
> 16/04/14 21:53:24 INFO hiveserver2.DDLTest: perfLogger 
> 

[jira] [Commented] (HIVE-13076) Implement FK/PK "rely novalidate" constraints for better CBO

2016-05-11 Thread Ruslan Dautkhanov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280794#comment-15280794
 ] 

Ruslan Dautkhanov commented on HIVE-13076:
--

That's awesome. Thanks a lot [~hsubramaniyan] 

> Implement FK/PK "rely novalidate" constraints for better CBO
> 
>
> Key: HIVE-13076
> URL: https://issues.apache.org/jira/browse/HIVE-13076
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO, Logical Optimizer
>Reporter: Ruslan Dautkhanov
>Assignee: Hari Sankar Sivarama Subramaniyan
> Fix For: 2.1.0
>
>
> Oracle has a "RELY NOVALIDATE" option for constraints. It could be easier for 
> Hive to start with something like that for PK/FK constraints, so the CBO has 
> more information for optimizations. It does not have to actually check whether 
> the constraint relationship holds; it can just "rely" on the constraint. 
> https://docs.oracle.com/database/121/SQLRF/clauses002.htm#sthref2289
> This would help with join cardinality estimates, and with cases like 
> this - https://issues.apache.org/jira/browse/HIVE-13019



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-9144) Beeline + Kerberos shouldn't prompt for unused username + password

2016-05-11 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam reassigned HIVE-9144:
---

Assignee: Naveen Gangam

> Beeline + Kerberos shouldn't prompt for unused username + password
> --
>
> Key: HIVE-9144
> URL: https://issues.apache.org/jira/browse/HIVE-9144
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 0.13.0
> Environment: Hive 0.13 on MapR 4.0.1
>Reporter: Hari Sekhon
>Assignee: Naveen Gangam
>Priority: Minor
>
> When using beeline to connect to a kerberized HiveServer2 it still prompts 
> for a username and password that aren't used. It should be changed to not 
> prompt when using Kerberos:
> {code}/opt/mapr/hive/hive-0.13/bin/beeline
> Beeline version 0.13.0-mapr-1409 by Apache Hive
> beeline> !connect 
> jdbc:hive2://:1/default;principal=hive/@REALM
> scan complete in 6ms
> Connecting to jdbc:hive2://:1/default;principal=hive/@REALM
> Enter username for 
> jdbc:hive2://:1/default;principal=hive/@REALM: wronguser
> Enter password for 
> jdbc:hive2://:1/default;principal=hive/@REALM: 
> Connected to: Apache Hive (version 0.13.0-mapr-1409)
> Driver: Hive JDBC (version 0.13.0-mapr-1409)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> {code}
> Hive conf includes (as concisely shown by set):
> {code}hive.server2.authentication = KERBEROS
> hive.server2.enable.doAs = true
> hive.server2.enable.impersonation = true
> {code}
> I can't see how to demonstrate in HQL session that I am not connected as 
> "wronguser" (which obviously doesn't exist either locally or as a Kerberos 
> principal or account in my LDAP directory), so I've raised another ticket for 
> that HIVE-9143, but it should be clear given I specified a non-existent user 
> and a completely blank password just hitting enter that it's not using those 
> credentials. Same happens with ,  for both username and 
> password.
> Regards,
> Hari Sekhon
> http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13737) incorrect count when multiple inserts with union all

2016-05-11 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280783#comment-15280783
 ] 

Ashutosh Chauhan commented on HIVE-13737:
-

Is it the case that you get different results in different runs? Can you post 
output of explain when you get wrong results?

> incorrect count when multiple inserts with union all
> 
>
> Key: HIVE-13737
> URL: https://issues.apache.org/jira/browse/HIVE-13737
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.2.1
> Environment: hdp 2.3.4.7 on Red Hat 6
>Reporter: Frank Luo
>Priority: Critical
>
> Here is a test case to illustrate the issue. It seems MR works fine but Tez 
> is having the problem. 
> CREATE TABLE test(col1   STRING);
> CREATE TABLE src (col1 string);
> insert into table src values ('a');
> INSERT into TABLE test
> select * from (
>SELECT * from src
>UNION ALL
>SELECT * from src) x;
> -- do it one more time
> INSERT INTO TABLE test
>SELECT * from src
>UNION ALL
>SELECT * from src;
> --below gives correct result
> SELECT * FROM TEST;
> --count is incorrect. It might give either '1' or '2', but I am expecting '4'
> SELECT count (*) FROM test;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-13076) Implement FK/PK "rely novalidate" constraints for better CBO

2016-05-11 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan resolved HIVE-13076.
--
   Resolution: Implemented
Fix Version/s: 2.1.0

I am closing this jira since the sub-jiras have been closed. Please note that 
this does not include the CBO changes to perform join elimination based on 
PK/FK "rely novalidate" information. That should be tracked in a separate jira 
and, when done, linked back to this one.

Thanks
Hari
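
For readers catching up, a minimal sketch of the kind of DDL the sub-tasks 
enable (syntax as in the attached doc and HIVE-13350; the table, column, and 
constraint names here are only illustrative):

{code}
-- an unenforced primary key recorded in the metastore
CREATE TABLE dim_customer (customer_id INT, name STRING,
  PRIMARY KEY (customer_id) DISABLE NOVALIDATE);

CREATE TABLE fact_sales (sale_id INT, customer_id INT);

-- a foreign key the optimizer may RELY on for join cardinality estimates
ALTER TABLE fact_sales ADD CONSTRAINT fk_sales_customer
  FOREIGN KEY (customer_id) REFERENCES dim_customer(customer_id)
  DISABLE NOVALIDATE RELY;
{code}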

> Implement FK/PK "rely novalidate" constraints for better CBO
> 
>
> Key: HIVE-13076
> URL: https://issues.apache.org/jira/browse/HIVE-13076
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO, Logical Optimizer
>Reporter: Ruslan Dautkhanov
>Assignee: Hari Sankar Sivarama Subramaniyan
> Fix For: 2.1.0
>
>
> Oracle has a "RELY NOVALIDATE" option for constraints. It could be easier for 
> Hive to start with something like that for PK/FK constraints, so the CBO has 
> more information for optimizations. It does not have to actually check whether 
> the constraint relationship holds; it can just "rely" on the constraint. 
> https://docs.oracle.com/database/121/SQLRF/clauses002.htm#sthref2289
> This would help with join cardinality estimates, and with cases like 
> this - https://issues.apache.org/jira/browse/HIVE-13019



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13350) Support Alter commands for Rely/NoRely novalidate for PK/FK constraints

2016-05-11 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-13350:
-
   Resolution: Fixed
Fix Version/s: 2.1.0
   Status: Resolved  (was: Patch Available)

> Support Alter commands for Rely/NoRely  novalidate for PK/FK constraints
> 
>
> Key: HIVE-13350
> URL: https://issues.apache.org/jira/browse/HIVE-13350
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO, Logical Optimizer
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Fix For: 2.1.0
>
> Attachments: HIVE-13350.1.patch, HIVE-13350.2.patch, 
> HIVE-13350.final.patch
>
>
> Support commands like :
> ALTER TABLE table2 ADD CONSTRAINT pkt2 primary key (a) disable novalidate;
> ALTER TABLE table3 ADD CONSTRAINT fk1 FOREIGN KEY ( x ) REFERENCES table2(a)  
> DISABLE NOVALIDATE RELY;
> ALTER TABLE table6 ADD CONSTRAINT fk4 FOREIGN KEY ( y ) REFERENCES table1(a)  
> DISABLE NOVALIDATE;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11793) SHOW LOCKS with DbTxnManager ignores filter options

2016-05-11 Thread Wei Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280760#comment-15280760
 ] 

Wei Zheng commented on HIVE-11793:
--

Here are the test results. They're unrelated to this change.

Test Name | Duration | Age
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_with_constraints | 8 sec | 1
org.apache.hive.hcatalog.mapreduce.TestHCatPartitionPublish.testPartitionPublish | 20 min | 1
org.apache.hadoop.hive.metastore.hbase.TestHBaseSchemaTool.oneMondoTest | 1.4 sec | 3
org.apache.hadoop.hive.llap.daemon.impl.comparator.TestShortestJobFirstComparator.testWaitQueueComparatorWithinDagPriority | 5.1 sec | 3
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_external_table_ppd | 34 sec | 4
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_binary_storage_queries | 1 min 34 sec | 4
org.apache.hadoop.hive.llap.daemon.impl.TestTaskExecutorService.testPreemptionQueueComparator | 5 sec | 4
org.apache.hadoop.hive.llap.tez.TestConverters.testFragmentSpecToTaskSpec | 98 ms | 15
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver_encryption_insert_partition_dynamic | 1 min 29 sec | 15
org.apache.hive.service.cli.session.TestHiveSessionImpl.testLeakOperationHandle | 32 sec | 15
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskCommunicator.testFinishableStateUpdateFailure | 2.9 sec | 28
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver_encryption_insert_partition_static | 1 min 44 sec | 43
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_selectindate | 14 sec | 47
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avrocountemptytbl | 9.6 sec | 47
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_order_null | 43 sec | 47
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver_encryption_join_with_different_encryption_keys | 1 min 30 sec | 47
org.apache.hive.minikdc.TestHiveAuthFactory.testStartTokenManagerForMemoryTokenStore | 1.4 sec | 47
org.apache.hive.minikdc.TestHiveAuthFactory.testStartTokenManagerForDBTokenStore | 0.33 sec | 47
org.apache.hive.minikdc.TestMiniHiveKdc.testLogin | 1 min 38 sec | 47
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3 | 8.6 sec | 47
org.apache.hadoop.hive.cli.TestMinimrCliDriver.org.apache.hadoop.hive.cli.TestMinimrCliDriver | 1 min 4 sec | 47

> SHOW LOCKS with DbTxnManager ignores filter options
> ---
>
> Key: HIVE-11793
> URL: https://issues.apache.org/jira/browse/HIVE-11793
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Wei Zheng
>Priority: Minor
> Attachments: HIVE-11793.1.patch, HIVE-11793.2.patch, 
> HIVE-11793.3.patch, HIVE-11793.4.patch, HIVE-11793.5.patch, 
> HIVE-11793.6.patch, HIVE-11793.7.patch, HIVE-11793.8.patch
>
>
> https://cwiki.apache.org/confluence/display/Hive/Locking and 
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-ShowLocks
>  list various options that can be used with SHOW LOCKS, e.g. 
> When ACID is enabled, all these options are ignored and a full list is 
> returned.
> (also only ext lock id is shown, int lock id is not).
> see DDLTask.showLocks() and TxnHandler.showLocks()
> requires extending ShowLocksRequest which is a Thrift object



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13705) Insert into table removes existing data

2016-05-11 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280757#comment-15280757
 ] 

Ashutosh Chauhan commented on HIVE-13705:
-

+1

> Insert into table removes existing data
> ---
>
> Key: HIVE-13705
> URL: https://issues.apache.org/jira/browse/HIVE-13705
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 2.1.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-13705.1.patch, HIVE-13705.2.patch, 
> HIVE-13705.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13350) Support Alter commands for Rely/NoRely novalidate for PK/FK constraints

2016-05-11 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-13350:
-
Attachment: HIVE-13350.final.patch

> Support Alter commands for Rely/NoRely  novalidate for PK/FK constraints
> 
>
> Key: HIVE-13350
> URL: https://issues.apache.org/jira/browse/HIVE-13350
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO, Logical Optimizer
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-13350.1.patch, HIVE-13350.2.patch, 
> HIVE-13350.final.patch
>
>
> Support commands like :
> ALTER TABLE table2 ADD CONSTRAINT pkt2 primary key (a) disable novalidate;
> ALTER TABLE table3 ADD CONSTRAINT fk1 FOREIGN KEY ( x ) REFERENCES table2(a)  
> DISABLE NOVALIDATE RELY;
> ALTER TABLE table6 ADD CONSTRAINT fk4 FOREIGN KEY ( y ) REFERENCES table1(a)  
> DISABLE NOVALIDATE;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13350) Support Alter commands for Rely/NoRely novalidate for PK/FK constraints

2016-05-11 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280747#comment-15280747
 ] 

Hari Sankar Sivarama Subramaniyan commented on HIVE-13350:
--

Results of the tests run locally:
{code}
Test Result (18 failures / -3)
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testDelayedLocalityNodeCommErrorImmediateAllocation
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_with_constraints
org.apache.hadoop.hive.metastore.hbase.TestHBaseSchemaTool.oneMondoTest
org.apache.hadoop.hive.llap.daemon.impl.TestTaskExecutorService.testPreemptionQueueComparator
org.apache.hadoop.hive.llap.tez.TestConverters.testFragmentSpecToTaskSpec
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver_encryption_insert_partition_dynamic
org.apache.hive.service.cli.session.TestHiveSessionImpl.testLeakOperationHandle
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskCommunicator.testFinishableStateUpdateFailure
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver_encryption_insert_partition_static
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_selectindate
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avrocountemptytbl
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_order_null
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver_encryption_join_with_different_encryption_keys
org.apache.hive.minikdc.TestHiveAuthFactory.testStartTokenManagerForMemoryTokenStore
org.apache.hive.minikdc.TestHiveAuthFactory.testStartTokenManagerForDBTokenStore
org.apache.hive.minikdc.TestMiniHiveKdc.testLogin
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.cli.TestMinimrCliDriver.org.apache.hadoop.hive.cli.TestMinimrCliDriver
{code}

org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_with_constraints 
is currently flaky even in master because the system-generated constraint names 
can cause an ordering issue when the keys are listed. I have modified the test 
slightly to fix this. The rest of the above failures are unrelated to this 
change.

Thanks
Hari

> Support Alter commands for Rely/NoRely  novalidate for PK/FK constraints
> 
>
> Key: HIVE-13350
> URL: https://issues.apache.org/jira/browse/HIVE-13350
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO, Logical Optimizer
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-13350.1.patch, HIVE-13350.2.patch
>
>
> Support commands like :
> ALTER TABLE table2 ADD CONSTRAINT pkt2 primary key (a) disable novalidate;
> ALTER TABLE table3 ADD CONSTRAINT fk1 FOREIGN KEY ( x ) REFERENCES table2(a)  
> DISABLE NOVALIDATE RELY;
> ALTER TABLE table6 ADD CONSTRAINT fk4 FOREIGN KEY ( y ) REFERENCES table1(a)  
> DISABLE NOVALIDATE;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13507) Improved logging for ptest

2016-05-11 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280742#comment-15280742
 ] 

Ashutosh Chauhan commented on HIVE-13507:
-

FYI, I am seeing a lot more batches getting killed in the latest runs. I haven't 
verified whether they were actual hangs.

> Improved logging for ptest
> --
>
> Key: HIVE-13507
> URL: https://issues.apache.org/jira/browse/HIVE-13507
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Fix For: 2.1.0
>
> Attachments: HIVE-13507.01.patch, HIVE-13507.02.patch
>
>
> NO PRECOMMIT TESTS
> Include information about batch runtimes, outlier lists, host completion 
> times, etc. Try identifying tests which cause the build to take a long time 
> while holding onto resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13705) Insert into table removes existing data

2016-05-11 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280733#comment-15280733
 ] 

Aihua Xu commented on HIVE-13705:
-

Thanks. I didn't know there is a reset command.

> Insert into table removes existing data
> ---
>
> Key: HIVE-13705
> URL: https://issues.apache.org/jira/browse/HIVE-13705
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 2.1.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-13705.1.patch, HIVE-13705.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13705) Insert into table removes existing data

2016-05-11 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280726#comment-15280726
 ] 

Ashutosh Chauhan commented on HIVE-13705:
-

I was thinking of {{reset fs.defaultFS;}}, which should restore it to its 
default config value.
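
To illustrate, a minimal sketch of what the qfile could do (the table name is 
made up; it assumes the per-key {{reset}} form is available in this build, 
otherwise a bare {{reset;}} restores all overridden settings):

{code}
-- point the default filesystem at the local FS for this part of the test
set fs.defaultFS=file:///;

insert into table test_tbl values (1);

-- restore the default so later queries in the same session are unaffected
reset fs.defaultFS;
{code}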

> Insert into table removes existing data
> ---
>
> Key: HIVE-13705
> URL: https://issues.apache.org/jira/browse/HIVE-13705
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 2.1.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-13705.1.patch, HIVE-13705.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13733) CTE + "IS NULL" predicate + column aliasing as existing column leads to wrong results

2016-05-11 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280701#comment-15280701
 ] 

Jesus Camacho Rodriguez commented on HIVE-13733:


Thanks [~pxiong]!

> CTE + "IS NULL" predicate + column aliasing as existing column leads to wrong 
> results 
> --
>
> Key: HIVE-13733
> URL: https://issues.apache.org/jira/browse/HIVE-13733
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Dudu Markovitz
>Assignee: Jesus Camacho Rodriguez
>
> hive> create table t (i int,a string,b string);
> hive> insert into t values (1,'hello','world'),(2,'bye',null);
> hive> select * from t where t.b is null;
> 2 bye NULL
> This is wrong, all 3 columns should return the same value - t.a.
> hive> with cte as (select t.a as a,t.a as b,t.a as c from t where t.b is 
> null) select * from cte;
> bye   NULL   bye
> However, these are right:
> hive> select t.a as a,t.a as b,t.a as c from t where t.b is null;
> bye   bye bye
> hive> with cte as (select t.a as a,t.a as b,t.a as c from t where t.b is not 
> null) select * from cte;
> OK
> hello hello   hello



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13733) CTE + "IS NULL" predicate + column aliasing as existing column leads to wrong results

2016-05-11 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-13733:
---
Assignee: Pengcheng Xiong  (was: Jesus Camacho Rodriguez)

> CTE + "IS NULL" predicate + column aliasing as existing column leads to wrong 
> results 
> --
>
> Key: HIVE-13733
> URL: https://issues.apache.org/jira/browse/HIVE-13733
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Dudu Markovitz
>Assignee: Pengcheng Xiong
>
> hive> create table t (i int,a string,b string);
> hive> insert into t values (1,'hello','world'),(2,'bye',null);
> hive> select * from t where t.b is null;
> 2 bye NULL
> This is wrong, all 3 columns should return the same value - t.a.
> hive> with cte as (select t.a as a,t.a as b,t.a as c from t where t.b is 
> null) select * from cte;
> bye   NULL   bye
> However, these are right:
> hive> select t.a as a,t.a as b,t.a as c from t where t.b is null;
> bye   bye bye
> hive> with cte as (select t.a as a,t.a as b,t.a as c from t where t.b is not 
> null) select * from cte;
> OK
> hello hello   hello



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13502) Beeline doesnt support session parameters in JDBC URL as documentation states.

2016-05-11 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280700#comment-15280700
 ] 

Aihua Xu commented on HIVE-13502:
-

+1 on the new patch. Pending test.

> Beeline doesnt support session parameters in JDBC URL as documentation states.
> --
>
> Key: HIVE-13502
> URL: https://issues.apache.org/jira/browse/HIVE-13502
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 1.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-13502.1.patch, HIVE-13502.2.patch, HIVE-13502.patch
>
>
> https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-ConnectionURLs
> documents that sessions variables like credentials etc are accepted as part 
> of the URL. However, Beeline does not support such URLs today.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13733) CTE + "IS NULL" predicate + column aliasing as existing column leads to wrong results

2016-05-11 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280695#comment-15280695
 ] 

Pengcheng Xiong commented on HIVE-13733:


[~jcamachorodriguez], after I apply the patch in 
https://issues.apache.org/jira/browse/HIVE-13602, the error disappears. I will 
add a new test case in HIVE-13602 as well. [~Mahens] and [~dmarkovitz], 
could you try the patch and confirm? If so, we can close this as a duplicate. Thanks. 

> CTE + "IS NULL" predicate + column aliasing as existing column leads to wrong 
> results 
> --
>
> Key: HIVE-13733
> URL: https://issues.apache.org/jira/browse/HIVE-13733
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Dudu Markovitz
>Assignee: Jesus Camacho Rodriguez
>
> hive> create table t (i int,a string,b string);
> hive> insert into t values (1,'hello','world'),(2,'bye',null);
> hive> select * from t where t.b is null;
> 2 bye NULL
> This is wrong, all 3 columns should return the same value - t.a.
> hive> with cte as (select t.a as a,t.a as b,t.a as c from t where t.b is 
> null) select * from cte;
> bye   NULL   bye
> However, these are right:
> hive> select t.a as a,t.a as b,t.a as c from t where t.b is null;
> bye   bye bye
> hive> with cte as (select t.a as a,t.a as b,t.a as c from t where t.b is not 
> null) select * from cte;
> OK
> hello hello   hello



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13502) Beeline doesnt support session parameters in JDBC URL as documentation states.

2016-05-11 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-13502:
-
Status: Patch Available  (was: Open)

> Beeline doesnt support session parameters in JDBC URL as documentation states.
> --
>
> Key: HIVE-13502
> URL: https://issues.apache.org/jira/browse/HIVE-13502
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 1.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-13502.1.patch, HIVE-13502.2.patch, HIVE-13502.patch
>
>
> https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-ConnectionURLs
> documents that sessions variables like credentials etc are accepted as part 
> of the URL. However, Beeline does not support such URLs today.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13502) Beeline doesnt support session parameters in JDBC URL as documentation states.

2016-05-11 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-13502:
-
Attachment: HIVE-13502.2.patch

> Beeline doesnt support session parameters in JDBC URL as documentation states.
> --
>
> Key: HIVE-13502
> URL: https://issues.apache.org/jira/browse/HIVE-13502
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 1.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-13502.1.patch, HIVE-13502.2.patch, HIVE-13502.patch
>
>
> https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-ConnectionURLs
> documents that sessions variables like credentials etc are accepted as part 
> of the URL. However, Beeline does not support such URLs today.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13502) Beeline doesnt support session parameters in JDBC URL as documentation states.

2016-05-11 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-13502:
-
Status: Open  (was: Patch Available)

Will upload a new patch.

> Beeline doesnt support session parameters in JDBC URL as documentation states.
> --
>
> Key: HIVE-13502
> URL: https://issues.apache.org/jira/browse/HIVE-13502
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 1.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-13502.1.patch, HIVE-13502.2.patch, HIVE-13502.patch
>
>
> https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-ConnectionURLs
> documents that sessions variables like credentials etc are accepted as part 
> of the URL. However, Beeline does not support such URLs today.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13458) Heartbeater doesn't fail query when heartbeat fails

2016-05-11 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280682#comment-15280682
 ] 

Eugene Koifman commented on HIVE-13458:
---

+1

> Heartbeater doesn't fail query when heartbeat fails
> ---
>
> Key: HIVE-13458
> URL: https://issues.apache.org/jira/browse/HIVE-13458
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.1.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
> Attachments: HIVE-13458.1.patch, HIVE-13458.2.patch, 
> HIVE-13458.3.patch, HIVE-13458.4.patch, HIVE-13458.5.patch, 
> HIVE-13458.6.patch, HIVE-13458.7.patch, HIVE-13458.8.patch, HIVE-13458.9.patch
>
>
> When a heartbeat fails to locate a lock, it should fail the current query. 
> That doesn't happen, which is a bug.
> Another thing is, we need to make sure stopHeartbeat really stops the 
> heartbeat, i.e. no additional heartbeat will be sent, since that will break 
> the assumption and cause the query to fail.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13705) Insert into table removes existing data

2016-05-11 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280681#comment-15280681
 ] 

Aihua Xu commented on HIVE-13705:
-

[~ashutoshc] It seems we may not need to unset it in the test. I checked 
existing tests like groupby1.q, which sets {{set fs.default.name=file:///;}} but 
doesn't unset it. Or are you thinking of adding {{set fs.defaultFS=;}} at the 
end of the test?

> Insert into table removes existing data
> ---
>
> Key: HIVE-13705
> URL: https://issues.apache.org/jira/browse/HIVE-13705
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 2.1.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-13705.1.patch, HIVE-13705.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13565) thrift change

2016-05-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280670#comment-15280670
 ] 

Hive QA commented on HIVE-13565:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12803105/HIVE-13565.03.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 74 failed/errored test(s), 9906 tests 
executed
*Failed tests:*
{noformat}
TestHWISessionManager - did not produce a TEST-*.xml file
TestMiniLlapCliDriver - did not produce a TEST-*.xml file
TestMiniTezCliDriver-groupby2.q-tez_dynpart_hashjoin_1.q-custom_input_output_format.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-join1.q-schema_evol_orc_nonvec_mapwork_part.q-mapjoin_decimal.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-load_dyn_part2.q-selectDistinctStar.q-vector_decimal_5.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-mapjoin_mapjoin.q-insert_into1.q-vector_decimal_2.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-groupby10.q-groupby4_noskew.q-union5.q-and-12-more - did not 
produce a TEST-*.xml file
TestSparkCliDriver-script_pipe.q-stats12.q-auto_join24.q-and-12-more - did not 
produce a TEST-*.xml file
TestSparkCliDriver-skewjoin_union_remove_2.q-timestamp_null.q-union32.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkClient - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_with_constraints
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucket4
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucket5
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucket6
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_disable_merge_for_bucketing
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_map_operators
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_num_buckets
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_reducers_power_two
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_list_bucket_dml_10
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge1
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge2
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge9
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge_diff_fs
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_reduce_deduplicate
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join1
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join2
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join3
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join4
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join5
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testNegativeCliDriver_minimr_broken_pipe
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join0
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_14
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby5_noskew
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby_complex_types
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby_map_ppr_multi_distinct
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_input13
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_input18
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_multi_insert_gby3
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_multi_insert_mixed
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vectorization_16
org.apache.hadoop.hive.llap.daemon.impl.comparator.TestFirstInFirstOutComparator.testWaitQueueComparatorWithinDagPriority
org.apache.hadoop.hive.llap.daemon.impl.comparator.TestShortestJobFirstComparator.testWaitQueueComparatorWithinDagPriority
org.apache.hadoop.hive.llap.tez.TestConverters.testFragmentSpecToTaskSpec
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskCommunicator.testFinishableStateUpdateFailure
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testDelayedLocalityNodeCommErrorImmediateAllocation
org.apache.hadoop.hive.metastore.TestFilterHooks.org.apache.hadoop.hive.metastore.TestFilterHooks

[jira] [Updated] (HIVE-13616) Investigate renaming a table without invalidating the column stats

2016-05-11 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-13616:

Status: Patch Available  (was: Open)

Patch 1: when the table is just renamed, the old column stats are carried over 
to the new table. In the patch, we collect the old column stats and remove the 
unwanted ones, then alter the table to the new name and update it with the new 
column stats (since the table needs to exist when the stats are updated).

It would be better to update the existing column stats in place rather than 
deleting and re-inserting them, but I will file follow-up jiras to handle that 
since it involves an interface change.

> Investigate renaming a table without invalidating the column stats
> --
>
> Key: HIVE-13616
> URL: https://issues.apache.org/jira/browse/HIVE-13616
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-13616.1.patch
>
>
> Right now when we rename a table, we clear the column stats rather than 
> updating it (HIVE-9720) since ObjectStore uses DN to talk to DB. Investigate 
> the possibility that if we can achieve updating the stats without rescanning 
> the whole table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13616) Investigate renaming a table without invalidating the column stats

2016-05-11 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-13616:

Attachment: HIVE-13616.1.patch

> Investigate renaming a table without invalidating the column stats
> --
>
> Key: HIVE-13616
> URL: https://issues.apache.org/jira/browse/HIVE-13616
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-13616.1.patch
>
>
> Right now when we rename a table, we clear the column stats rather than 
> updating them (HIVE-9720), since ObjectStore uses DN to talk to the DB. 
> Investigate whether we can update the stats without rescanning the whole 
> table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13343) Need to disable hybrid grace hash join in llap mode except for dynamically partitioned hash join

2016-05-11 Thread Vikram Dixit K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Dixit K updated HIVE-13343:
--
Attachment: HIVE-13343.7.patch

> Need to disable hybrid grace hash join in llap mode except for dynamically 
> partitioned hash join
> 
>
> Key: HIVE-13343
> URL: https://issues.apache.org/jira/browse/HIVE-13343
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 2.1.0
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Attachments: HIVE-13343.1.patch, HIVE-13343.2.patch, 
> HIVE-13343.3.patch, HIVE-13343.4.patch, HIVE-13343.5.patch, 
> HIVE-13343.6.patch, HIVE-13343.7.patch
>
>
> For performance reasons, we should disable the use of hybrid grace hash join 
> in llap when dynamic partition hash join is not used. With dynamic partition 
> hash join, we need hybrid grace hash join because of the possibility of skew.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12996) Temp tables shouldn't be locked

2016-05-11 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-12996:
-
   Resolution: Fixed
Fix Version/s: 2.1.0
   1.3.0
   Status: Resolved  (was: Patch Available)

Committed to master and branch-1

> Temp tables shouldn't be locked
> ---
>
> Key: HIVE-12996
> URL: https://issues.apache.org/jira/browse/HIVE-12996
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.0.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
> Fix For: 1.3.0, 2.1.0
>
> Attachments: HIVE-12996.1.patch, HIVE-12996.2.patch, 
> HIVE-12996.3.patch
>
>
> Internally, INSERT INTO ... VALUES statements use a temp table to accomplish 
> their functionality. But temp tables shouldn't be stored in the metastore 
> tables for ACID, because they are by definition only visible inside the 
> session that created them, and we don't allow multiple threads inside a 
> session. If a temp table is used in a query, it should be ignored by the lock 
> manager.
> {code}
> mysql> select * from COMPLETED_TXN_COMPONENTS;
> +-----------+--------------+-----------------------+------------------+
> | CTC_TXNID | CTC_DATABASE | CTC_TABLE             | CTC_PARTITION    |
> +-----------+--------------+-----------------------+------------------+
> |         1 | acid         | t1                    | NULL             |
> |         1 | acid         | values__tmp__table__1 | NULL             |
> |         2 | acid         | t1                    | NULL             |
> |         2 | acid         | values__tmp__table__2 | NULL             |
> |         3 | acid         | values__tmp__table__3 | NULL             |
> |         3 | acid         | t1                    | NULL             |
> |         4 | acid         | values__tmp__table__1 | NULL             |
> |         4 | acid         | t2p                   | ds=today         |
> |         5 | acid         | values__tmp__table__1 | NULL             |
> |         5 | acid         | t3p                   | ds=today/hour=12 |
> +-----------+--------------+-----------------------+------------------+
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13738) Bump up httpcomponent.*.version deps in branch-1.2 to 4.4

2016-05-11 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-13738:

Fix Version/s: 1.2.2

> Bump up httpcomponent.*.version deps in branch-1.2 to 4.4
> -
>
> Key: HIVE-13738
> URL: https://issues.apache.org/jira/browse/HIVE-13738
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.2.2
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
>Priority: Minor
> Fix For: 1.2.2
>
>
> apache-httpcomponents has had certain security issues (see HADOOP-12767) due 
> to which upgrading to a newer dep version is recommended.
> We've already upped the dep. version to 4.4 in other branches of hive as part 
> of HIVE-9709, we should do so here as well if we are going to do a new update 
> of 1.2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-13738) Bump up httpcomponent.*.version deps in branch-1.2 to 4.4

2016-05-11 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan resolved HIVE-13738.
-
Resolution: Invalid

Actually, never mind - I was looking at the wrong branch. It looks like 
HIVE-9709 was already committed to branch-1.2, so we already have this update 
there.

> Bump up httpcomponent.*.version deps in branch-1.2 to 4.4
> -
>
> Key: HIVE-13738
> URL: https://issues.apache.org/jira/browse/HIVE-13738
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.2.2
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
>Priority: Minor
>
> apache-httpcomponents has had certain security issues (see HADOOP-12767) due 
> to which upgrading to a newer dep version is recommended.
> We've already upped the dep. version to 4.4 in other branches of hive as part 
> of HIVE-9709, we should do so here as well if we are going to do a new update 
> of 1.2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13738) Bump up httpcomponent.*.version deps in branch-1.2 to 4.4

2016-05-11 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-13738:

Priority: Minor  (was: Major)

> Bump up httpcomponent.*.version deps in branch-1.2 to 4.4
> -
>
> Key: HIVE-13738
> URL: https://issues.apache.org/jira/browse/HIVE-13738
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.2.2
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
>Priority: Minor
>
> apache-httpcomponents has had certain security issues (see HADOOP-12767) due 
> to which upgrading to a newer dep version is recommended.
> We've already upped the dep. version to 4.4 in other branches of hive as part 
> of HIVE-9709, we should do so here as well if we are going to do a new update 
> of 1.2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13738) Bump up httpcomponent.*.version deps in branch-1.2 to 4.4

2016-05-11 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-13738:

Description: 
apache-httpcomponents has had certain security issues (see HADOOP-12767) due to 
which upgrading to a newer dep version is recommended.

We've already upped the dep. version to 4.4 in other branches of hive as part 
of HIVE-9709, we should do so here as well if we are going to do a new update 
of 1.2.



  was:
apache-httpcomponents has had certain security issues (see HADOOP-12767) due to 
which upgrading to a newer dep version is recommended.

We've already upped the dep. version to 4.4 in other branches of hive, we 
should do so here as well if we are going to do a new update of 1.2.




> Bump up httpcomponent.*.version deps in branch-1.2 to 4.4
> -
>
> Key: HIVE-13738
> URL: https://issues.apache.org/jira/browse/HIVE-13738
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.2.2
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
>
> apache-httpcomponents has had certain security issues (see HADOOP-12767) due 
> to which upgrading to a newer dep version is recommended.
> We've already upped the dep. version to 4.4 in other branches of hive as part 
> of HIVE-9709, we should do so here as well if we are going to do a new update 
> of 1.2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12996) Temp tables shouldn't be locked

2016-05-11 Thread Wei Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280533#comment-15280533
 ] 

Wei Zheng commented on HIVE-12996:
--

Here are the test results. The failures are unrelated to this patch.

Test Name | Duration | Age
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver_encryption_select_read_only_encrypted_tbl | 1 min 18 sec | 1
org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler.org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler | 1 min 2 sec | 1
org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler.org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler | 1 min 2 sec | 1
org.apache.hadoop.hive.metastore.hbase.TestHBaseSchemaTool.oneMondoTest | 1.2 sec | 2
org.apache.hadoop.hive.llap.daemon.impl.comparator.TestShortestJobFirstComparator.testWaitQueueComparatorWithinDagPriority | 5.2 sec | 2
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_external_table_ppd | 31 sec | 3
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_binary_storage_queries | 1 min 54 sec | 3
org.apache.hadoop.hive.llap.daemon.impl.TestTaskExecutorService.testPreemptionQueueComparator | 5 sec | 3
org.apache.hadoop.hive.llap.tez.TestConverters.testFragmentSpecToTaskSpec | 38 ms | 14
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver_encryption_insert_partition_dynamic | 1 min 25 sec | 14
org.apache.hive.service.cli.session.TestHiveSessionImpl.testLeakOperationHandle | 34 sec | 14
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskCommunicator.testFinishableStateUpdateFailure | 5 sec | 27
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver_encryption_insert_partition_static | 1 min 31 sec | 42
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_selectindate | 12 sec | 46
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avrocountemptytbl | 13 sec | 46
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_order_null | 35 sec | 46
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver_encryption_join_with_different_encryption_keys | 1 min 32 sec | 46
org.apache.hive.minikdc.TestHiveAuthFactory.testStartTokenManagerForMemoryTokenStore | 1.1 sec | 46
org.apache.hive.minikdc.TestHiveAuthFactory.testStartTokenManagerForDBTokenStore | 0.31 sec | 46
org.apache.hive.minikdc.TestMiniHiveKdc.testLogin | 1 min 41 sec | 46
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3 | 9.9 sec | 46
org.apache.hadoop.hive.cli.TestMinimrCliDriver.org.apache.hadoop.hive.cli.TestMinimrCliDriver | 58 sec | 46

> Temp tables shouldn't be locked
> ---
>
> Key: HIVE-12996
> URL: https://issues.apache.org/jira/browse/HIVE-12996
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.0.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
> Attachments: HIVE-12996.1.patch, HIVE-12996.2.patch, 
> HIVE-12996.3.patch
>
>
> Internally, INSERT INTO ... VALUES statements use a temp table to accomplish 
> their functionality. But temp tables shouldn't be stored in the metastore 
> tables for ACID, because they are by definition only visible inside the 
> session that created them, and we don't allow multiple threads inside a 
> session. If a temp table is used in a query, it should be ignored by the lock 
> manager.
> {code}
> mysql> select * from COMPLETED_TXN_COMPONENTS;
> +-----------+--------------+-----------------------+------------------+
> | CTC_TXNID | CTC_DATABASE | CTC_TABLE             | CTC_PARTITION    |
> +-----------+--------------+-----------------------+------------------+
> |         1 | acid         | t1                    | NULL             |
> |         1 | acid         | values__tmp__table__1 | NULL             |
> |         2 | acid         | t1                    | NULL             |
> |         2 | acid         | values__tmp__table__2 | NULL             |
> |         3 | acid         | values__tmp__table__3 | NULL             |
> |         3 | acid         | t1                    | NULL             |
> |         4 | acid         | values__tmp__table__1 | NULL             |
> |         4 | acid         | t2p                   | ds=today         |
> |         5 | acid         | values__tmp__table__1 | NULL             |
> |         5 | acid         | t3p                   | ds=today/hour=12 |
> +-----------+--------------+-----------------------+------------------+
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13563) Hive Streaming does not honor orc.compress.size and orc.stripe.size table properties

2016-05-11 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280508#comment-15280508
 ] 

Prasanth Jayachandran commented on HIVE-13563:
--

IMO, we are most often hit by OOM because of the compression buffer size, not 
the stripe size.

How about this?

default ratio: 4

*Base files:*
default compression chunk: 64k
default stripe size: 64MB

*Delta files:*
delta compression: 16k
delta stripe: 16MB
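
To make those numbers concrete, here is a minimal sketch of how base vs. delta 
defaults could be applied at the ORC writer level. It assumes the hive-exec 
OrcFile.WriterOptions API (bufferSize/stripeSize) and simply plugs in the 
figures proposed above; it is not the actual patch:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.ql.io.orc.OrcFile;

public class OrcAcidDefaultsSketch {
  // Base files: 64k compression chunk, 64MB stripes (as proposed above).
  static OrcFile.WriterOptions baseFileOptions(Configuration conf) {
    return OrcFile.writerOptions(conf)
        .bufferSize(64 * 1024)
        .stripeSize(64L * 1024 * 1024);
  }

  // Delta files: 16k compression chunk, 16MB stripes (as proposed above).
  static OrcFile.WriterOptions deltaFileOptions(Configuration conf) {
    return OrcFile.writerOptions(conf)
        .bufferSize(16 * 1024)
        .stripeSize(16L * 1024 * 1024);
  }
}
{code}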

> Hive Streaming does not honor orc.compress.size and orc.stripe.size table 
> properties
> 
>
> Key: HIVE-13563
> URL: https://issues.apache.org/jira/browse/HIVE-13563
> Project: Hive
>  Issue Type: Bug
>  Components: ORC
>Affects Versions: 2.1.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>  Labels: TODOC2.1
> Attachments: HIVE-13563.1.patch
>
>
> According to the doc:
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ORC#LanguageManualORC-HiveQLSyntax
> One should be able to specify tblproperties for many ORC options.
> But the settings for orc.compress.size and orc.stripe.size don't take effect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13308) HiveOnSpark submit query very slow when hundreds of beeline execute at same time

2016-05-11 Thread Marcelo Vanzin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280475#comment-15280475
 ] 

Marcelo Vanzin commented on HIVE-13308:
---

There needs to be a synchronized block to access the {{server}} field; other 
than that, it's probably safe to call the {{SparkClientImpl}} constructor 
outside of a synchronized block, as long as {{SparkClientFactory.stop()}} is 
not called before it finishes. Something like:

{code}
  public static SparkClient createClient(Map<String, String> sparkConf, HiveConf hiveConf)
      throws IOException, SparkException {
    RpcServer _server;
    // Hold the lock only long enough to read the field; build the client outside it.
    synchronized (SparkClientFactory.class) {
      Preconditions.checkState(server != null, "initialize() not called.");
      _server = server;
    }
    return new SparkClientImpl(_server, sparkConf, hiveConf);
  }
{code}

Or maybe just making the {{server}} variable volatile would suffice, too (and 
then no synchronization is needed in {{createClient}}).
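
A minimal sketch of that volatile alternative, using the same names as the 
snippet above (an illustration of the suggestion only, not a committed change):

{code}
  // The field becomes volatile, so createClient() always sees a fully published
  // server without taking the SparkClientFactory.class lock.
  private static volatile RpcServer server;

  public static SparkClient createClient(Map<String, String> sparkConf, HiveConf hiveConf)
      throws IOException, SparkException {
    RpcServer s = server;  // single volatile read
    Preconditions.checkState(s != null, "initialize() not called.");
    return new SparkClientImpl(s, sparkConf, hiveConf);
  }
{code}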

> HiveOnSpark submit query very slow when hundreds of beeline execute at same 
> time
> ---
>
> Key: HIVE-13308
> URL: https://issues.apache.org/jira/browse/HIVE-13308
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 1.2.1
>Reporter: wangwenli
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> background: hive on spark, yarn cluster mode
> details:
> when hundreds of beeline clients submit queries at the same time, we found 
> that getting a yarn application is very slow, and HiveServer is blocked in 
> the SparkClientFactory.createClient method
> after analysis, we think the synchronization on SparkClientFactory.createClient 
> can be removed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13716) Improve dynamic partition loading V

2016-05-11 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280424#comment-15280424
 ] 

Rui Li commented on HIVE-13716:
---

Thanks for the update, +1.

> Improve dynamic partition loading V
> ---
>
> Key: HIVE-13716
> URL: https://issues.apache.org/jira/browse/HIVE-13716
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 1.2.0, 2.0.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-13716.1.patch, HIVE-13716.patch
>
>
> Parallelize permission settings and other refactoring.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13716) Improve dynamic partition loading V

2016-05-11 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-13716:

Attachment: (was: HIVE-13716.1.patch)

> Improve dynamic partition loading V
> ---
>
> Key: HIVE-13716
> URL: https://issues.apache.org/jira/browse/HIVE-13716
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 1.2.0, 2.0.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-13716.1.patch, HIVE-13716.patch
>
>
> Parallelize permission settings and other refactoring.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13716) Improve dynamic partition loading V

2016-05-11 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-13716:

Status: Patch Available  (was: Open)

> Improve dynamic partition loading V
> ---
>
> Key: HIVE-13716
> URL: https://issues.apache.org/jira/browse/HIVE-13716
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-13716.1.patch, HIVE-13716.patch
>
>
> Parallelize permission settings and other refactoring.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13716) Improve dynamic partition loading V

2016-05-11 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-13716:

Status: Open  (was: Patch Available)

> Improve dynamic partition loading V
> ---
>
> Key: HIVE-13716
> URL: https://issues.apache.org/jira/browse/HIVE-13716
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-13716.1.patch, HIVE-13716.patch
>
>
> Parallelize permission settings and other refactoring.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13716) Improve dynamic partition loading V

2016-05-11 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-13716:

Attachment: HIVE-13716.1.patch

> Improve dynamic partition loading V
> ---
>
> Key: HIVE-13716
> URL: https://issues.apache.org/jira/browse/HIVE-13716
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 1.2.0, 2.0.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-13716.1.patch, HIVE-13716.patch
>
>
> Parallelize permission settings and other refactoring.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13293) Query occurs performance degradation after enabling parallel order by for Hive on Spark

2016-05-11 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280380#comment-15280380
 ] 

Xuefu Zhang commented on HIVE-13293:


Okay. Sounds good. +1

> Query occurs performance degradation after enabling parallel order by for 
> Hive on Spark
> ---
>
> Key: HIVE-13293
> URL: https://issues.apache.org/jira/browse/HIVE-13293
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 2.0.0
>Reporter: Lifeng Wang
>Assignee: Rui Li
> Attachments: HIVE-13293.1.patch, HIVE-13293.1.patch
>
>
> I used TPCx-BB to do some performance tests on the Hive on Spark engine and 
> found that query 10 has a performance degradation when parallel order by is 
> enabled. It seems that sampling costs much time before running the real query.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13716) Improve dynamic partition loading V

2016-05-11 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-13716:

Attachment: HIVE-13716.1.patch

> Improve dynamic partition loading V
> ---
>
> Key: HIVE-13716
> URL: https://issues.apache.org/jira/browse/HIVE-13716
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 1.2.0, 2.0.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-13716.1.patch, HIVE-13716.patch
>
>
> Parallelize permission settings and other refactoring.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13716) Improve dynamic partition loading V

2016-05-11 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-13716:

Status: Open  (was: Patch Available)

> Improve dynamic partition loading V
> ---
>
> Key: HIVE-13716
> URL: https://issues.apache.org/jira/browse/HIVE-13716
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-13716.1.patch, HIVE-13716.patch
>
>
> Parallelize permission settings and other refactoring.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13716) Improve dynamic partition loading V

2016-05-11 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-13716:

Status: Patch Available  (was: Open)

> Improve dynamic partition loading V
> ---
>
> Key: HIVE-13716
> URL: https://issues.apache.org/jira/browse/HIVE-13716
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-13716.1.patch, HIVE-13716.patch
>
>
> Parallelize permission settings and other refactoring.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13293) Query occurs performance degradation after enabling parallel order by for Hive on Spark

2016-05-11 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280347#comment-15280347
 ] 

Rui Li commented on HIVE-13293:
---

Hi [~xuefuz], yeah, order by is mostly at the end of stages. But that doesn't 
mean the amount of data is small - that's why we need parallel order by. During 
our benchmark we hit OOM in several cases, due to a bug in Spark 1.6.0, so I 
thought a memory-only cache level might make it even worse.

To your second question, we unpersist cached RDDs at the end of each job. You 
can refer to {{RemoteDriver#JobWrapper}} for that.
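
For readers who don't want to dig into that class, the lifecycle being 
described is essentially Spark's standard cache-then-release pattern. A generic 
sketch (not the actual JobWrapper code; names here are illustrative):

{code}
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.storage.StorageLevel;

class CacheLifecycleSketch {
  static long runJob(JavaRDD<String> input) {
    // MEMORY_AND_DISK lets partitions that don't fit in memory spill to disk
    // instead of failing with OOM during the sampling pass.
    JavaRDD<String> cached = input.persist(StorageLevel.MEMORY_AND_DISK());
    try {
      return cached.count();    // stands in for the sampling pass plus the real job
    } finally {
      cached.unpersist(false);  // release the cache once the job finishes
    }
  }
}
{code}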

> Query occurs performance degradation after enabling parallel order by for 
> Hive on Spark
> ---
>
> Key: HIVE-13293
> URL: https://issues.apache.org/jira/browse/HIVE-13293
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 2.0.0
>Reporter: Lifeng Wang
>Assignee: Rui Li
> Attachments: HIVE-13293.1.patch, HIVE-13293.1.patch
>
>
> I used TPCx-BB to do some performance tests on the Hive on Spark engine and 
> found that query 10 has a performance degradation when parallel order by is 
> enabled. It seems that sampling costs much time before running the real query.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10176) skip.header.line.count causes values to be skipped when performing insert values

2016-05-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280345#comment-15280345
 ] 

Hive QA commented on HIVE-10176:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12803224/HIVE-10176.16.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 111 failed/errored test(s), 9882 tests 
executed
*Failed tests:*
{noformat}
TestCliDriver-gen_udf_example_add10.q-ppd_join4.q-union27.q-and-12-more - did 
not produce a TEST-*.xml file
TestCliDriver-partition_timestamp.q-ppd_random.q-vector_outer_join5.q-and-12-more
 - did not produce a TEST-*.xml file
TestCliDriver-ptf_general_queries.q-unionDistinct_1.q-groupby1_noskew.q-and-12-more
 - did not produce a TEST-*.xml file
TestHBaseAggrStatsCacheIntegration - did not produce a TEST-*.xml file
TestHWISessionManager - did not produce a TEST-*.xml file
TestMiniLlapCliDriver - did not produce a TEST-*.xml file
TestMiniTezCliDriver-enforce_order.q-vector_partition_diff_num_cols.q-unionDistinct_1.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-join1.q-schema_evol_orc_nonvec_mapwork_part.q-mapjoin_decimal.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-mapjoin_mapjoin.q-insert_into1.q-vector_decimal_2.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-auto_join30.q-join2.q-input17.q-and-12-more - did not 
produce a TEST-*.xml file
TestSparkCliDriver-bucketsortoptimize_insert_7.q-smb_mapjoin_15.q-mapreduce1.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-skewjoinopt3.q-union27.q-multigroupby_singlemr.q-and-12-more 
- did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_spark1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_spark2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_spark3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketizedhiveinputformat_auto
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin_negative
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin_negative2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_global_limit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input40
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert2_overwrite_partitions
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_merge_dynamic_partition
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_merge_dynamic_partition2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_merge_dynamic_partition3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_merge_dynamic_partition4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_merge_dynamic_partition5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_mergejoin
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats18
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucketmapjoin7

[jira] [Commented] (HIVE-13563) Hive Streaming does not honor orc.compress.size and orc.stripe.size table properties

2016-05-11 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280339#comment-15280339
 ] 

Owen O'Malley commented on HIVE-13563:
--

Actually, let's split the difference. I'd propose:

default ratio: 8
default compression chunk: 128k
default stripe size: 64MB

delta compression: 16k
delta stripe: 8MB

thoughts?

> Hive Streaming does not honor orc.compress.size and orc.stripe.size table 
> properties
> 
>
> Key: HIVE-13563
> URL: https://issues.apache.org/jira/browse/HIVE-13563
> Project: Hive
>  Issue Type: Bug
>  Components: ORC
>Affects Versions: 2.1.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>  Labels: TODOC2.1
> Attachments: HIVE-13563.1.patch
>
>
> According to the doc:
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ORC#LanguageManualORC-HiveQLSyntax
> One should be able to specify tblproperties for many ORC options.
> But the settings for orc.compress.size and orc.stripe.size don't take effect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9853) Bad version tested in org/apache/hive/hcatalog/templeton/TestWebHCatE2e.java

2016-05-11 Thread Damien Carol (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280335#comment-15280335
 ] 

Damien Carol commented on HIVE-9853:


No answer from the reporter for many months.

> Bad version tested in org/apache/hive/hcatalog/templeton/TestWebHCatE2e.java
> 
>
> Key: HIVE-9853
> URL: https://issues.apache.org/jira/browse/HIVE-9853
> Project: Hive
>  Issue Type: Test
>Affects Versions: 1.0.0
>Reporter: Laurent GAY
>Assignee: Damien Carol
> Attachments: correct_version_test.patch
>
>
> The test getHiveVersion in class 
> org.apache.hive.hcatalog.templeton.TestWebHCatE2e checks the wrong version 
> format: it checks "0.[0-9]+.[0-9]+.*" instead of "1.[0-9]+.[0-9]+.*".
> This test fails for Hive tag "release-1.0.0".
> I propose a patch to correct it.
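
As a generic illustration of a version check that would not need to be bumped 
for each major release (the attached patch may well do it differently):

{code}
import java.util.regex.Pattern;

class HiveVersionCheckSketch {
  // Accept any major version instead of hard-coding a "0." or "1." prefix.
  private static final Pattern HIVE_VERSION =
      Pattern.compile("[0-9]+\\.[0-9]+\\.[0-9]+.*");

  static boolean looksLikeHiveVersion(String version) {
    return HIVE_VERSION.matcher(version).matches();
  }
}
{code}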



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13734) How to initialize hive metastore database

2016-05-11 Thread Damien Carol (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280321#comment-15280321
 ] 

Damien Carol commented on HIVE-13734:
-

[~ljy423897608] Your JIRA ticket seems to be a configuration question, no?
If that's the case, please use the user mailing list - you will get more 
feedback for this kind of question there, trust me.
If that's OK, I will change this JIRA to "not a bug".

> How to initialize hive metastore database
> -
>
> Key: HIVE-13734
> URL: https://issues.apache.org/jira/browse/HIVE-13734
> Project: Hive
>  Issue Type: Test
>  Components: Configuration, Database/Schema
>Affects Versions: 2.0.0
>Reporter: Lijiayong
>Assignee: Lijiayong
>  Labels: mesosphere
> Fix For: 2.0.0
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> When run "hive",there is a mistake:Exception in thread "main" 
> java.lang.RuntimeException:Hive metastore database is not initializad.Please 
> use schematool to create the schema.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9853) Bad version tested in org/apache/hive/hcatalog/templeton/TestWebHCatE2e.java

2016-05-11 Thread Damien Carol (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damien Carol updated HIVE-9853:
---
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

> Bad version tested in org/apache/hive/hcatalog/templeton/TestWebHCatE2e.java
> 
>
> Key: HIVE-9853
> URL: https://issues.apache.org/jira/browse/HIVE-9853
> Project: Hive
>  Issue Type: Test
>Affects Versions: 1.0.0
>Reporter: Laurent GAY
>Assignee: Damien Carol
> Attachments: correct_version_test.patch
>
>
> The test getHiveVersion in class 
> org.apache.hive.hcatalog.templeton.TestWebHCatE2e checks the wrong version 
> format: it checks "0.[0-9]+.[0-9]+.*" instead of "1.[0-9]+.[0-9]+.*".
> This test fails for Hive tag "release-1.0.0".
> I propose a patch to correct it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13734) How to initialize hive metastore database

2016-05-11 Thread Damien Carol (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280324#comment-15280324
 ] 

Damien Carol commented on HIVE-13734:
-

Here => https://hive.apache.org/mailing_lists.html

> How to initialize hive metastore database
> -
>
> Key: HIVE-13734
> URL: https://issues.apache.org/jira/browse/HIVE-13734
> Project: Hive
>  Issue Type: Test
>  Components: Configuration, Database/Schema
>Affects Versions: 2.0.0
>Reporter: Lijiayong
>Assignee: Damien Carol
>  Labels: mesosphere
> Fix For: 2.0.0
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> When run "hive",there is a mistake:Exception in thread "main" 
> java.lang.RuntimeException:Hive metastore database is not initializad.Please 
> use schematool to create the schema.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-13734) How to initialize hive metastore database

2016-05-11 Thread Damien Carol (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damien Carol reassigned HIVE-13734:
---

Assignee: Damien Carol  (was: Lijiayong)

> How to initialize hive metastore database
> -
>
> Key: HIVE-13734
> URL: https://issues.apache.org/jira/browse/HIVE-13734
> Project: Hive
>  Issue Type: Test
>  Components: Configuration, Database/Schema
>Affects Versions: 2.0.0
>Reporter: Lijiayong
>Assignee: Damien Carol
>  Labels: mesosphere
> Fix For: 2.0.0
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> When run "hive",there is a mistake:Exception in thread "main" 
> java.lang.RuntimeException:Hive metastore database is not initializad.Please 
> use schematool to create the schema.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13722) Add flag to detect constants to CBO pull up rules

2016-05-11 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-13722:
---
   Resolution: Fixed
Fix Version/s: 2.1.0
   Status: Resolved  (was: Patch Available)

Tested locally and the failures are unrelated. Pushed to master, thanks for 
reviewing [~ashutoshc]!

> Add flag to detect constants to CBO pull up rules
> -
>
> Key: HIVE-13722
> URL: https://issues.apache.org/jira/browse/HIVE-13722
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO, Logical Optimizer, Physical Optimizer
>Affects Versions: 2.1.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Fix For: 2.1.0
>
> Attachments: HIVE-13722.patch
>
>
> Add flag to avoid firing CBO pull up constants rules indefinitely. This issue 
> can be reproduced using e.g. union27.q, union_remove_19.q, unionDistinct_1.q.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13293) Query occurs performance degradation after enabling parallel order by for Hive on Spark

2016-05-11 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280312#comment-15280312
 ] 

Xuefu Zhang commented on HIVE-13293:


[~lirui], thanks for working on this. The patch looks good, but one thing I'm 
not very sure of is the persistence level. Order by is almost always at the end 
of stages, so does it make sense to have a mix of memory and disk?

As a side, out-of-scope question: do we need to explicitly call rdd.unpersist() 
for those cached RDDs once a query completes? Right now, RDDs are never reused 
across queries.

> Query occurs performance degradation after enabling parallel order by for 
> Hive on Spark
> ---
>
> Key: HIVE-13293
> URL: https://issues.apache.org/jira/browse/HIVE-13293
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 2.0.0
>Reporter: Lifeng Wang
>Assignee: Rui Li
> Attachments: HIVE-13293.1.patch, HIVE-13293.1.patch
>
>
> I used TPCx-BB to do some performance tests on the Hive on Spark engine and 
> found that query 10 has a performance degradation when parallel order by is 
> enabled. It seems that sampling costs much time before running the real query.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13733) CTE + "IS NULL" predicate + column aliasing as existing column leads to wrong results

2016-05-11 Thread Mahens (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280202#comment-15280202
 ] 

Mahens commented on HIVE-13733:
---

Let me check if hive.cbo.enable=true will resolve this issue.

> CTE + "IS NULL" predicate + column aliasing as existing column leads to wrong 
> results 
> --
>
> Key: HIVE-13733
> URL: https://issues.apache.org/jira/browse/HIVE-13733
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Dudu Markovitz
>Assignee: Jesus Camacho Rodriguez
>
> hive> create table t (i int,a string,b string);
> hive> insert into t values (1,'hello','world'),(2,'bye',null);
> hive> select * from t where t.b is null;
> 2 bye NULL
> This is wrong, all 3 columns should return the same value - t.a.
> hive> with cte as (select t.a as a,t.a as b,t.a as c from t where t.b is 
> null) select * from cte;
> bye   NULL   bye
> However, these are right:
> hive> select t.a as a,t.a as b,t.a as c from t where t.b is null;
> bye   bye bye
> hive> with cte as (select t.a as a,t.a as b,t.a as c from t where t.b is not 
> null) select * from cte;
> OK
> hello hello   hello



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13733) CTE + "IS NULL" predicate + column aliasing as existing column leads to wrong results

2016-05-11 Thread Mahens (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280198#comment-15280198
 ] 

Mahens commented on HIVE-13733:
---

Yes, whenever we use a plain select statement, it works perfectly. It's the 
CTE combined with the NULL check that causes the issue.

> CTE + "IS NULL" predicate + column aliasing as existing column leads to wrong 
> results 
> --
>
> Key: HIVE-13733
> URL: https://issues.apache.org/jira/browse/HIVE-13733
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Dudu Markovitz
>Assignee: Jesus Camacho Rodriguez
>
> hive> create table t (i int,a string,b string);
> hive> insert into t values (1,'hello','world'),(2,'bye',null);
> hive> select * from t where t.b is null;
> 2 bye NULL
> This is wrong, all 3 columns should return the same value - t.a.
> hive> with cte as (select t.a as a,t.a as b,t.a as c from t where t.b is 
> null) select * from cte;
> bye   NULL   bye
> However, these are right:
> hive> select t.a as a,t.a as b,t.a as c from t where t.b is null;
> bye   bye bye
> hive> with cte as (select t.a as a,t.a as b,t.a as c from t where t.b is not 
> null) select * from cte;
> OK
> hello hello   hello



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-13733) CTE + "IS NULL" predicate + column aliasing as existing column leads to wrong results

2016-05-11 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez reassigned HIVE-13733:
--

Assignee: Jesus Camacho Rodriguez

> CTE + "IS NULL" predicate + column aliasing as existing column leads to wrong 
> results 
> --
>
> Key: HIVE-13733
> URL: https://issues.apache.org/jira/browse/HIVE-13733
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Dudu Markovitz
>Assignee: Jesus Camacho Rodriguez
>
> hive> create table t (i int,a string,b string);
> hive> insert into t values (1,'hello','world'),(2,'bye',null);
> hive> select * from t where t.b is null;
> 2 bye NULL
> This is wrong, all 3 columns should return the same value - t.a.
> hive> with cte as (select t.a as a,t.a as b,t.a as c from t where t.b is 
> null) select * from cte;
> bye   NULL   bye
> However, these are right:
> hive> select t.a as a,t.a as b,t.a as c from t where t.b is null;
> bye   bye bye
> hive> with cte as (select t.a as a,t.a as b,t.a as c from t where t.b is not 
> null) select * from cte;
> OK
> hello hello   hello



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

