[jira] [Created] (HIVE-19059) Support DEFAULT keyword with INSERT and UPDATE

2018-03-26 Thread Vineet Garg (JIRA)
Vineet Garg created HIVE-19059:
--

 Summary: Support DEFAULT keyword with INSERT and UPDATE
 Key: HIVE-19059
 URL: https://issues.apache.org/jira/browse/HIVE-19059
 Project: Hive
  Issue Type: New Feature
  Components: SQL
Reporter: Vineet Garg
Assignee: Vineet Garg
 Fix For: 3.0.0
 Attachments: HIVE-19059.1.patch

Support the DEFAULT keyword in INSERT and UPDATE, e.g.

{code:sql}
INSERT INTO TABLE t VALUES (DEFAULT, DEFAULT);
UPDATE t SET col1 = DEFAULT WHERE col2 > 4;
{code}
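The keyword effectively expands to the column's declared default. A minimal sketch of the equivalent semantics, using Python's sqlite3 (which lacks the DEFAULT keyword inside VALUES but applies declared defaults for omitted columns; the table and values are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 INT DEFAULT 1, col2 INT DEFAULT 2)")

# Equivalent of: INSERT INTO TABLE t VALUES (DEFAULT, DEFAULT)
conn.execute("INSERT INTO t DEFAULT VALUES")

# Equivalent of: UPDATE t SET col1 = DEFAULT WHERE col2 > 4,
# with DEFAULT pre-resolved to col1's declared default (1).
conn.execute("INSERT INTO t (col1, col2) VALUES (9, 5)")
conn.execute("UPDATE t SET col1 = 1 WHERE col2 > 4")

rows = conn.execute("SELECT col1, col2 FROM t ORDER BY col2").fetchall()
print(rows)  # [(1, 2), (1, 5)]
```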



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HIVE-19058) add object owner to HivePrivilegeObject

2018-03-26 Thread Eugene Koifman (JIRA)
Eugene Koifman created HIVE-19058:
-

 Summary: add object owner to HivePrivilegeObject
 Key: HIVE-19058
 URL: https://issues.apache.org/jira/browse/HIVE-19058
 Project: Hive
  Issue Type: Bug
  Components: Security
Reporter: Eugene Koifman
Assignee: Eugene Koifman


This can enable HiveAuthorizer to create policies based on the owner of the 
object; for example, only let the owner of a table read/write it.





Re: Review Request 66288: HIVE-18885 : DbNotificationListener has a deadlock between Java and DB locks (2.x line)

2018-03-26 Thread Alexander Kolbasov

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66288/#review27
---


Ship it!




Ship It!

- Alexander Kolbasov


On March 26, 2018, 10:14 p.m., Vihang Karajgaonkar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/66288/
> ---
> 
> (Updated March 26, 2018, 10:14 p.m.)
> 
> 
> Review request for hive and Alexander Kolbasov.
> 
> 
> Bugs: HIVE-18885
> https://issues.apache.org/jira/browse/HIVE-18885
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> HIVE-18885 : DbNotificationListener has a deadlock between Java and DB locks 
> (2.x line)
> 
> 
> Diffs
> -
> 
>   
> hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/listener/DbNotificationListener.java
>  41347c22df21a678241edeb766264e6d19c7885a 
> 
> 
> Diff: https://reviews.apache.org/r/66288/diff/2/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Vihang Karajgaonkar
> 
>



[jira] [Created] (HIVE-19057) Query result caching cannot be disabled by client

2018-03-26 Thread Deepesh Khandelwal (JIRA)
Deepesh Khandelwal created HIVE-19057:
-

 Summary: Query result caching cannot be disabled by client
 Key: HIVE-19057
 URL: https://issues.apache.org/jira/browse/HIVE-19057
 Project: Hive
  Issue Type: Bug
  Components: Query Planning
Reporter: Deepesh Khandelwal


HIVE-18513 introduced query results caching along with some toggles to control 
enabling/disabling it. We should whitelist the following configs so that the 
end user can dynamically control caching in their session.
{noformat}
hive.query.results.cache.enabled
hive.query.results.cache.wait.for.pending.results
{noformat}
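Once whitelisted (presumably via hive.security.authorization.sqlstd.confwhitelist.append when standard SQL authorization is in use; the exact mechanism is an assumption here), a client could then toggle the cache per session:

{noformat}
SET hive.query.results.cache.enabled=false;
SET hive.query.results.cache.wait.for.pending.results=false;
{noformat}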






Re: Review Request 66285: HIVE-18770

2018-03-26 Thread Jesús Camacho Rodríguez

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66285/
---

(Updated March 26, 2018, 11:16 p.m.)


Review request for hive and Ashutosh Chauhan.


Bugs: HIVE-18770
https://issues.apache.org/jira/browse/HIVE-18770


Repository: hive-git


Description
---

HIVE-18770


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 
8d9b5a3194708ffacabfdb69d6af7d6193dcf156 
  ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 
5ad4406ceff5d83bf74264c33947f207ff2c1a61 
  
ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveMaterializedViewsRegistry.java
 3f73fd7fcc2d6c52a2015bdd947c1708723058d6 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveConfPlannerContext.java
 b0f1a8dfafa46f2cb06ca05c673ba37c736d 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveRelBuilder.java 
efd8a35699ef2c4bb9c363925b8adc1e2ca3cbd3 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/cost/HiveVolcanoPlanner.java
 88aedb6381a293c0dd0f7d4e767df6726a86f40f 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/reloperators/HiveTableScan.java
 94a3bac1a7df35c825247e51946ee6ef1b0b6342 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/views/HiveMaterializedViewRule.java
 df9c1802c8983279500d3a06c1c526ce20af6146 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/stats/HiveRelMdPredicates.java
 4dc48f4710196acb68a9df5331244827b212aefe 
  ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java 
612deb8327d85966751834257ab686cfa74f9feb 
  ql/src/test/queries/clientpositive/materialized_view_rewrite_1.q PRE-CREATION 
  ql/src/test/queries/clientpositive/materialized_view_rewrite_2.q PRE-CREATION 
  ql/src/test/queries/clientpositive/materialized_view_rewrite_3.q PRE-CREATION 
  ql/src/test/queries/clientpositive/materialized_view_rewrite_4.q PRE-CREATION 
  ql/src/test/queries/clientpositive/materialized_view_rewrite_5.q PRE-CREATION 
  ql/src/test/queries/clientpositive/materialized_view_rewrite_6.q PRE-CREATION 
  ql/src/test/queries/clientpositive/materialized_view_rewrite_7.q PRE-CREATION 
  ql/src/test/queries/clientpositive/materialized_view_rewrite_8.q PRE-CREATION 
  ql/src/test/queries/clientpositive/materialized_view_rewrite_9.q PRE-CREATION 
  ql/src/test/results/clientpositive/druid/druidmini_mv.q.out 
97f6d844806cf33ea4403b33665142c612da6e84 
  ql/src/test/results/clientpositive/materialized_view_create_rewrite.q.out 
4da3d0930fd30cc3ab74155efb4d82a910ea6944 
  
ql/src/test/results/clientpositive/materialized_view_create_rewrite_multi_db.q.out
 d7ee468b49af904da93a74c86f0898c310970cab 
  ql/src/test/results/clientpositive/materialized_view_rewrite_1.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/materialized_view_rewrite_2.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/materialized_view_rewrite_3.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/materialized_view_rewrite_4.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/materialized_view_rewrite_5.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/materialized_view_rewrite_6.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/materialized_view_rewrite_7.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/materialized_view_rewrite_8.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/materialized_view_rewrite_9.q.out 
PRE-CREATION 
  
standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
 3e1fea9d4fe707c59ee99781bd4c5aacdbd9d381 


Diff: https://reviews.apache.org/r/66285/diff/3/

Changes: https://reviews.apache.org/r/66285/diff/2-3/


Testing
---


Thanks,

Jesús Camacho Rodríguez



Re: Review Request 66237: HIVE-18971 add HS2 WM metrics for use in Grafana and such

2018-03-26 Thread j . prasanth . j


> On March 24, 2018, 1:34 a.m., Prasanth_J wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WmPoolMetrics.java
> > Lines 159 (patched)
> > 
> >
> > why isn't this tag alone sufficient? With this tag we can get all
> > metrics associated/registered under the pool, right?
> > 
> > instead of emitting metrics like
> > WM__
> > 
> > why not build a wrapper around this getMetrics() which gets all pool
> > names and sets the tag? So we will have something like
> > 
> > {
> > "tag.SessionId": "6020e225-f36e-470b-a170-b18e69af6fc8",
> > "tag.Poolname": "llap",
> > "NumExecutors": 2,
> > "NumSessions": 2
> > }
> > 
> > If you try to run 2 LLAP daemons on the same host, you would get 2
> > different metrics with different SessionId. This looks similar to that,
> > except that the only thing that changes here is the poolName. Am I missing
> > something?
> 
> Sergey Shelukhin wrote:
> I'm not sure what you mean. I just used the session ID as a standard tag,
> since there's nothing else to put in there.
> Note that metrics are emitted into Hadoop metrics and also codahale (mostly
> for HS2 JMX).
> Only the codahale one uses silly names (which seems to be a common pattern
> if you look at HS2 JMX); the tagged metrics from Hadoop metrics should all
> have the same name.

Makes sense; I missed the part about handling codahale vs. Hadoop metrics, 
the latter of which supports tags.


- Prasanth_J


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66237/#review199922
---


On March 26, 2018, 9:30 p.m., Sergey Shelukhin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/66237/
> ---
> 
> (Updated March 26, 2018, 9:30 p.m.)
> 
> 
> Review request for hive and Prasanth_J.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> .
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hadoop/hive/common/metrics/LegacyMetrics.java 
> effe26b6b6 
>   common/src/java/org/apache/hadoop/hive/common/metrics/common/Metrics.java 
> 88c513b8cd 
>   
> common/src/java/org/apache/hadoop/hive/common/metrics/metrics2/CodahaleMetrics.java
>  a43b09db8c 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 8d9b5a3194 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/io/api/impl/LlapRecordReader.java
>  3a2c19a3e6 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/tez/GuaranteedTasksAllocator.java 
> a52928cc7a 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/QueryAllocationManager.java 
> 9885ce7221 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WmPoolMetrics.java 
> PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java 
> f0e620c684 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/TezProgressMonitor.java
>  a14cdb609a 
>   ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestWorkloadManager.java 
> 20a5947291 
> 
> 
> Diff: https://reviews.apache.org/r/66237/diff/4/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Sergey Shelukhin
> 
>



Re: Review Request 66237: HIVE-18971 add HS2 WM metrics for use in Grafana and such

2018-03-26 Thread j . prasanth . j

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66237/#review20
---


Ship it!




Ship It!

- Prasanth_J


On March 26, 2018, 9:30 p.m., Sergey Shelukhin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/66237/
> ---
> 
> (Updated March 26, 2018, 9:30 p.m.)
> 
> 
> Review request for hive and Prasanth_J.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> .
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hadoop/hive/common/metrics/LegacyMetrics.java 
> effe26b6b6 
>   common/src/java/org/apache/hadoop/hive/common/metrics/common/Metrics.java 
> 88c513b8cd 
>   
> common/src/java/org/apache/hadoop/hive/common/metrics/metrics2/CodahaleMetrics.java
>  a43b09db8c 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 8d9b5a3194 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/io/api/impl/LlapRecordReader.java
>  3a2c19a3e6 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/tez/GuaranteedTasksAllocator.java 
> a52928cc7a 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/QueryAllocationManager.java 
> 9885ce7221 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WmPoolMetrics.java 
> PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java 
> f0e620c684 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/TezProgressMonitor.java
>  a14cdb609a 
>   ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestWorkloadManager.java 
> 20a5947291 
> 
> 
> Diff: https://reviews.apache.org/r/66237/diff/4/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Sergey Shelukhin
> 
>



[jira] [Created] (HIVE-19056) IllegalArgumentException in FixAcidKeyIndex when ORC file has 0 rows

2018-03-26 Thread Jason Dere (JIRA)
Jason Dere created HIVE-19056:
-

 Summary: IllegalArgumentException in FixAcidKeyIndex when ORC file 
has 0 rows
 Key: HIVE-19056
 URL: https://issues.apache.org/jira/browse/HIVE-19056
 Project: Hive
  Issue Type: Bug
  Components: ORC, Transactions
Reporter: Jason Dere
Assignee: Jason Dere


{noformat}
ERROR recovering 
/Users/jdere/dev/hwx/gerrit/hive2-gerrit/ql/target/tmp/TestFixAcidKeyIndex.testValidKeyIndex.orc
java.lang.IllegalArgumentException: Seek to a negative row number -1
at 
org.apache.orc.impl.RecordReaderImpl.seekToRow(RecordReaderImpl.java:1300)
at 
org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.seekToRow(RecordReaderImpl.java:101)
at 
org.apache.hadoop.hive.ql.io.orc.FixAcidKeyIndex.recoverFile(FixAcidKeyIndex.java:232)
at 
org.apache.hadoop.hive.ql.io.orc.FixAcidKeyIndex.recoverFiles(FixAcidKeyIndex.java:132)
at 
org.apache.hadoop.hive.ql.io.orc.FixAcidKeyIndex.main(FixAcidKeyIndex.java:104)
{noformat}
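The trace points at a seek to numRows - 1, which is -1 for an empty file. A hedged sketch of the guard (hypothetical names, not the actual FixAcidKeyIndex code):

```python
class OrcReaderStub:
    """Stand-in for the ORC RecordReader's seek behavior."""
    def __init__(self):
        self.position = None

    def seek_to_row(self, row):
        if row < 0:
            # mirrors: IllegalArgumentException: Seek to a negative row number -1
            raise ValueError(f"Seek to a negative row number {row}")
        self.position = row

def seek_to_last_row(reader, num_rows):
    """Seek to the final row, guarding the zero-row case."""
    if num_rows <= 0:
        return None  # nothing to recover; skip the seek entirely
    reader.seek_to_row(num_rows - 1)
    return num_rows - 1

assert seek_to_last_row(OrcReaderStub(), 0) is None  # previously raised
assert seek_to_last_row(OrcReaderStub(), 5) == 4
```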





Re: Review Request 66288: HIVE-18885 : DbNotificationListener has a deadlock between Java and DB locks (2.x line)

2018-03-26 Thread Vihang Karajgaonkar via Review Board

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66288/
---

(Updated March 26, 2018, 10:14 p.m.)


Review request for hive and Alexander Kolbasov.


Changes
---

Added Alexander's suggestions.


Bugs: HIVE-18885
https://issues.apache.org/jira/browse/HIVE-18885


Repository: hive-git


Description
---

HIVE-18885 : DbNotificationListener has a deadlock between Java and DB locks 
(2.x line)


Diffs (updated)
-

  
hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/listener/DbNotificationListener.java
 41347c22df21a678241edeb766264e6d19c7885a 


Diff: https://reviews.apache.org/r/66288/diff/2/

Changes: https://reviews.apache.org/r/66288/diff/1-2/


Testing
---


Thanks,

Vihang Karajgaonkar



[jira] [Created] (HIVE-19055) WM alter may fail if the name is not changed

2018-03-26 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HIVE-19055:
---

 Summary: WM alter may fail if the name is not changed
 Key: HIVE-19055
 URL: https://issues.apache.org/jira/browse/HIVE-19055
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin








[jira] [Created] (HIVE-19054) Function replication shall use "hive.repl.replica.functions.root.dir" as root

2018-03-26 Thread Daniel Dai (JIRA)
Daniel Dai created HIVE-19054:
-

 Summary: Function replication shall use 
"hive.repl.replica.functions.root.dir" as root
 Key: HIVE-19054
 URL: https://issues.apache.org/jira/browse/HIVE-19054
 Project: Hive
  Issue Type: Bug
  Components: repl
Reporter: Daniel Dai
Assignee: Daniel Dai
 Attachments: HIVE-19054.1.patch

It wrongly uses fs.defaultFS as the root, ignoring the 
"hive.repl.replica.functions.root.dir" definition, thus preventing replication 
to a cloud destination.





Review Request 66290: HIVE-14388 : Add number of rows inserted message after insert command in Beeline

2018-03-26 Thread Bharathkrishna Guruvayoor Murali via Review Board

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66290/
---

Review request for hive, Sahil Takiar and Vihang Karajgaonkar.


Bugs: HIVE-14388
https://issues.apache.org/jira/browse/HIVE-14388


Repository: hive-git


Description
---

Currently, when you run an insert command in Beeline, it returns a message 
saying "No rows affected ..".
A better and more intuitive message would be "xxx rows inserted (26.068 seconds)".

Added the numRows parameter as part of QueryState, and added numRows to the 
response as well so it can be displayed in Beeline.

The count is obtained in FileSinkOperator and set in statsMap only when the 
operator writes table-specific rows for the particular operation (so that we 
count only the rows inserted into the table and avoid counting 
non-table-specific file-sink operations that happen during query execution).
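The filtering idea can be sketched as follows (hypothetical structures, not the actual FileSinkOperator/statsMap code): only sinks that write to the target table contribute to the reported count.

```python
def total_inserted(sinks):
    """Sum row counts from table-targeted sinks only, so intermediate
    file-only sinks don't inflate the reported number."""
    return sum(s["rows"] for s in sinks if s["writes_to_table"])

sinks = [
    {"rows": 26, "writes_to_table": True},   # the INSERT target
    {"rows": 26, "writes_to_table": False},  # intermediate file sink
]
print(f"{total_inserted(sinks)} rows inserted")  # 26 rows inserted
```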


Diffs
-

  jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java 
06542cee02e5dc4696f2621bb45cc4f24c67dfda 
  ql/src/java/org/apache/hadoop/hive/ql/Driver.java 
75f928b69d3d7b206564216d24be450848a1fe8a 
  ql/src/java/org/apache/hadoop/hive/ql/MapRedStats.java 
cf9c2273159c0d779ea90ad029613678fb0967a6 
  ql/src/java/org/apache/hadoop/hive/ql/QueryState.java 
706c9ffa48b9c3b4a6fdaae78bab1d39c3d0efda 
  ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java 
c084fa054cb771bfdb033d244935713e3c7eb874 
  ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java 
c28ef99621e67a5b16bf02a1112df2ec993c4f79 
  ql/src/java/org/apache/hadoop/hive/ql/exec/mr/HadoopJobExecHelper.java 
eb3a11a8815e35dee825edb7d3246c8ecef6b0a7 
  service-rpc/if/TCLIService.thrift 30f8af7f3e6e0598b410498782900ac27971aef0 
  service-rpc/src/gen/thrift/gen-cpp/TCLIService_types.h 
4321ad6d3c966d30f7a69552f91804cf2f1ba6c4 
  service-rpc/src/gen/thrift/gen-cpp/TCLIService_types.cpp 
b2b62c71492b844f4439367364c5c81aa62f3908 
  
service-rpc/src/gen/thrift/gen-javabean/org/apache/hive/service/rpc/thrift/TGetOperationStatusResp.java
 15e8220eb3eb12b72c7b64029410dced33bc0d72 
  service-rpc/src/gen/thrift/gen-php/Types.php 
abb7c1ff3a2c8b72dc97689758266b675880e32b 
  service-rpc/src/gen/thrift/gen-py/TCLIService/ttypes.py 
0f8fd0745be0f4ed9e96b7bbe0f092d03649bcdf 
  service-rpc/src/gen/thrift/gen-rb/t_c_l_i_service_types.rb 
60183dae9e9927bd09a9676e49eeb4aea2401737 
  service/src/java/org/apache/hive/service/cli/CLIService.java 
c9914ba9bf8653cbcbca7d6612e98a64058c0fcc 
  service/src/java/org/apache/hive/service/cli/OperationStatus.java 
52cc3ae4f26b990b3e4edb52d9de85b3cc25f269 
  service/src/java/org/apache/hive/service/cli/operation/Operation.java 
3706c72abc77ac8bd77947cc1c5d084ddf965e9f 
  service/src/java/org/apache/hive/service/cli/thrift/ThriftCLIService.java 
c64c99120ad21ee98af81ec6659a2722e3e1d1c7 


Diff: https://reviews.apache.org/r/66290/diff/1/


Testing
---


Thanks,

Bharathkrishna Guruvayoor Murali



Re: Review Request 66237: HIVE-18971 add HS2 WM metrics for use in Grafana and such

2018-03-26 Thread Sergey Shelukhin


> On March 24, 2018, 1:34 a.m., Prasanth_J wrote:
> > common/src/java/org/apache/hadoop/hive/common/metrics/metrics2/CodahaleMetrics.java
> > Lines 304 (patched)
> > 
> >
> > why do we need to remove the gauge names? why not add a gauge and
> > forget about it?

Because if the pools are removed, the gauges will just sit there. Why keep them?


> On March 24, 2018, 1:34 a.m., Prasanth_J wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WmPoolMetrics.java
> > Lines 159 (patched)
> > 
> >
> > why isn't this tag alone sufficient? With this tag we can get all
> > metrics associated/registered under the pool, right?
> > 
> > instead of emitting metrics like
> > WM__
> > 
> > why not build a wrapper around this getMetrics() which gets all pool
> > names and sets the tag? So we will have something like
> > 
> > {
> > "tag.SessionId": "6020e225-f36e-470b-a170-b18e69af6fc8",
> > "tag.Poolname": "llap",
> > "NumExecutors": 2,
> > "NumSessions": 2
> > }
> > 
> > If you try to run 2 LLAP daemons on the same host, you would get 2
> > different metrics with different SessionId. This looks similar to that,
> > except that the only thing that changes here is the poolName. Am I missing
> > something?

I'm not sure what you mean. I just used the session ID as a standard tag, since 
there's nothing else to put in there.
Note that metrics are emitted into Hadoop metrics and also codahale (mostly for 
HS2 JMX).
Only the codahale one uses silly names (which seems to be a common pattern if 
you look at HS2 JMX); the tagged metrics from Hadoop metrics should all have 
the same name.
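The tagged vs. name-mangled distinction discussed here can be sketched as follows (a hypothetical illustration; metric and tag names such as WM_<pool>_<metric> are assumptions, not the actual Hive names):

```python
# Hadoop metrics support tags, so per-pool metrics keep one name and vary
# the tag; codahale/JMX lacks tags, so the pool name gets mangled into the
# metric name instead.
def hadoop_style(pool, session_id, metrics):
    record = {"tag.SessionId": session_id, "tag.PoolName": pool}
    record.update(metrics)
    return record

def codahale_style(pool, metrics):
    return {f"WM_{pool}_{name}": v for name, v in metrics.items()}

m = {"NumExecutors": 2, "NumSessions": 2}
print(hadoop_style("llap", "6020e225", m))
# {'tag.SessionId': '6020e225', 'tag.PoolName': 'llap', 'NumExecutors': 2, 'NumSessions': 2}
print(codahale_style("llap", m))
# {'WM_llap_NumExecutors': 2, 'WM_llap_NumSessions': 2}
```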


- Sergey


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66237/#review199922
---


On March 26, 2018, 9:30 p.m., Sergey Shelukhin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/66237/
> ---
> 
> (Updated March 26, 2018, 9:30 p.m.)
> 
> 
> Review request for hive and Prasanth_J.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> .
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hadoop/hive/common/metrics/LegacyMetrics.java 
> effe26b6b6 
>   common/src/java/org/apache/hadoop/hive/common/metrics/common/Metrics.java 
> 88c513b8cd 
>   
> common/src/java/org/apache/hadoop/hive/common/metrics/metrics2/CodahaleMetrics.java
>  a43b09db8c 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 8d9b5a3194 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/io/api/impl/LlapRecordReader.java
>  3a2c19a3e6 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/tez/GuaranteedTasksAllocator.java 
> a52928cc7a 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/QueryAllocationManager.java 
> 9885ce7221 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WmPoolMetrics.java 
> PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java 
> f0e620c684 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/TezProgressMonitor.java
>  a14cdb609a 
>   ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestWorkloadManager.java 
> 20a5947291 
> 
> 
> Diff: https://reviews.apache.org/r/66237/diff/4/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Sergey Shelukhin
> 
>



Re: Review Request 66237: HIVE-18971 add HS2 WM metrics for use in Grafana and such

2018-03-26 Thread Sergey Shelukhin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66237/
---

(Updated March 26, 2018, 9:30 p.m.)


Review request for hive and Prasanth_J.


Repository: hive-git


Description
---

.


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/common/metrics/LegacyMetrics.java 
effe26b6b6 
  common/src/java/org/apache/hadoop/hive/common/metrics/common/Metrics.java 
88c513b8cd 
  
common/src/java/org/apache/hadoop/hive/common/metrics/metrics2/CodahaleMetrics.java
 a43b09db8c 
  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 8d9b5a3194 
  
llap-server/src/java/org/apache/hadoop/hive/llap/io/api/impl/LlapRecordReader.java
 3a2c19a3e6 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/GuaranteedTasksAllocator.java 
a52928cc7a 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/QueryAllocationManager.java 
9885ce7221 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WmPoolMetrics.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java 
f0e620c684 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/TezProgressMonitor.java
 a14cdb609a 
  ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestWorkloadManager.java 
20a5947291 


Diff: https://reviews.apache.org/r/66237/diff/4/

Changes: https://reviews.apache.org/r/66237/diff/3-4/


Testing
---


Thanks,

Sergey Shelukhin



[jira] [Created] (HIVE-19053) RemoteSparkJobStatus#getSparkJobInfo treats all exceptions as timeout errors

2018-03-26 Thread Sahil Takiar (JIRA)
Sahil Takiar created HIVE-19053:
---

 Summary: RemoteSparkJobStatus#getSparkJobInfo treats all 
exceptions as timeout errors
 Key: HIVE-19053
 URL: https://issues.apache.org/jira/browse/HIVE-19053
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Sahil Takiar








Re: Review Request 66237: HIVE-18971 add HS2 WM metrics for use in Grafana and such

2018-03-26 Thread Sergey Shelukhin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66237/
---

(Updated March 26, 2018, 9:01 p.m.)


Review request for hive and Prasanth_J.


Repository: hive-git


Description
---

.


Diffs (updated)
-

  
llap-server/src/java/org/apache/hadoop/hive/llap/counters/WmFragmentCounters.java
 8287adb636 
  
llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/ContainerRunnerImpl.java
 8cd723d2e0 
  
llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/LlapTaskReporter.java
 b05e0b9e43 
  
llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/TaskRunnerCallable.java
 b484a13e48 


Diff: https://reviews.apache.org/r/66237/diff/3/

Changes: https://reviews.apache.org/r/66237/diff/2-3/


Testing
---


Thanks,

Sergey Shelukhin



Re: Review Request 66285: HIVE-18770

2018-03-26 Thread Jesús Camacho Rodríguez


> On March 26, 2018, 8:24 p.m., Ashutosh Chauhan wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/cost/HiveVolcanoPlanner.java
> > Lines 126-128 (patched)
> > 
> >
> > I don't follow this. Nowhere in the logic does the cost become zero (or
> > lower) for the heuristic.
> > Further, the method should break out of the recursion as soon as there is
> > an MV, instead of recursing further, since in the heuristic strategy we
> > will use that plan as soon as we find an MV.

The idea here is that if there is a possible rewriting, we should choose it 
over the original plan. But if there are multiple rewritten plans (e.g., 
multiple MVs, multiple ways of rewriting with the same MV, partial rewritings 
with union, etc.), we should use the one with the lowest cost among them.
Hence, for the heuristic, 1) we assign a tiny cost to the TS that reads the MV 
and to all operators that are on top of the MV, and 2) we multiply by a certain 
factor the cost of the operators that are not directly on top of a TS over a 
materialized view (e.g., in a partial rewriting, the branch of the union that 
executes over the original sources, or in a bushy join tree the join operators 
that do not read any MV). This helps us select plans that replace as many joins 
as possible, and prefer full rewritings over partial ones. Does it make sense?
In any case, I will add additional comments to the class.
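The heuristic can be sketched as a cost computation (illustrative numbers and names, not the HiveVolcanoPlanner implementation):

```python
TINY = 1e-9     # cost assigned to operators on top of a materialized-view scan
PENALTY = 2.0   # factor applied to operators not backed by an MV

def plan_cost(ops):
    """ops: list of (base_cost, reads_mv) pairs for a plan's operators."""
    return sum(TINY if reads_mv else base * PENALTY for base, reads_mv in ops)

full_rewrite = [(100.0, True), (50.0, True)]    # every operator over an MV
partial      = [(100.0, True), (50.0, False)]   # e.g. a union branch on base tables
original     = [(100.0, False), (50.0, False)]  # no MV at all

costs = [plan_cost(p) for p in (full_rewrite, partial, original)]
assert costs[0] < costs[1] < costs[2]  # full rewriting beats partial, partial beats none
```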


> On March 26, 2018, 8:24 p.m., Ashutosh Chauhan wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/views/HiveMaterializedViewRule.java
> > Lines 97 (patched)
> > 
> >
> > Doesn't look this rule is used anywhere.

This is used in CalcitePlanner (L1910).


> On March 26, 2018, 8:24 p.m., Ashutosh Chauhan wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/views/HiveMaterializedViewRule.java
> > Lines 106 (patched)
> > 
> >
> > Same rule as previous?

The rule definition is here, but the rule instantiation is above. From 
CalcitePlanner, we only reference the final static instance, but we still need 
this rule. The rule basically overrides the _getFloorSqlFunction_ and 
_getRollup_ methods.


> On March 26, 2018, 8:24 p.m., Ashutosh Chauhan wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/views/HiveMaterializedViewRule.java
> > Lines 123 (patched)
> > 
> >
> > Unused rule?

We instantiate it in L102 in this same class. The final static instance is then 
referenced in CalcitePlanner.


> On March 26, 2018, 8:24 p.m., Ashutosh Chauhan wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/stats/HiveRelMdPredicates.java
> > Lines 108 (patched)
> > 
> >
> > Unused method. If you intend to use it, 
> > better name: addNotNullProject()

These methods are not referenced directly because they are called via 
reflection. In any case, I have reverted the changes here because they were 
not needed (they were coming from a previous version of the patch that was not 
changing the nullability of the input columns).


> On March 26, 2018, 8:24 p.m., Ashutosh Chauhan wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/stats/HiveRelMdPredicates.java
> > Lines 124 (patched)
> > 
> >
> > addNotNullProjects()

Same as above.


> On March 26, 2018, 8:24 p.m., Ashutosh Chauhan wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java
> > Line 1494 (original), 1489 (patched)
> > 
> >
> > Do we store optimized plan or unoptimized plan when loading MV registry 
> > for defined MVs? If its optimized plan invoking rewrite rule at the end of 
> > optimization will make it easier for rewriting rule, else this should be 
> > invoked without any optimization for same reason.

We store the optimized plan. However, for matching purposes, the plan of the MV 
does not really matter, because the rule can extract structural information 
from the top of the plan (filters present, columns present, joins, aggregation 
functions, etc.).
However, the shape of the plan for the (sub)query that we are trying to match 
does matter, as the prejoin optimization stage may help us infer some new 
predicates, remove some unused columns, etc.


- Jesús


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66285/#review13
---


On March 26, 2018, 6:38 p.m., Jesús 

Re: Review Request 66285: HIVE-18770

2018-03-26 Thread Ashutosh Chauhan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66285/#review13
---




ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/cost/HiveVolcanoPlanner.java
Lines 126-128 (patched)


I don't follow this. Nowhere in the logic does the cost become zero (or lower) 
for the heuristic.
Further, the method should break out of the recursion as soon as there is an 
MV, instead of recursing further, since in the heuristic strategy we will use 
that plan as soon as we find an MV.



ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/views/HiveMaterializedViewRule.java
Lines 97 (patched)


Doesn't look this rule is used anywhere.



ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/views/HiveMaterializedViewRule.java
Lines 106 (patched)


Same rule as previous?



ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/views/HiveMaterializedViewRule.java
Lines 123 (patched)


Unused rule?



ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/stats/HiveRelMdPredicates.java
Lines 108 (patched)


Unused method. If you intend to use it, 
better name: addNotNullProject()



ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/stats/HiveRelMdPredicates.java
Lines 124 (patched)


addNotNullProjects()



ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java
Line 1494 (original), 1489 (patched)


Do we store optimized plan or unoptimized plan when loading MV registry for 
defined MVs? If its optimized plan invoking rewrite rule at the end of 
optimization will make it easier for rewriting rule, else this should be 
invoked without any optimization for same reason.


- Ashutosh Chauhan


On March 26, 2018, 6:38 p.m., Jesús Camacho Rodríguez wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/66285/
> ---
> 
> (Updated March 26, 2018, 6:38 p.m.)
> 
> 
> Review request for hive and Ashutosh Chauhan.
> 
> 
> Bugs: HIVE-18770
> https://issues.apache.org/jira/browse/HIVE-18770
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> HIVE-18770
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 
> 8d9b5a3194708ffacabfdb69d6af7d6193dcf156 
>   ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 
> 5ad4406ceff5d83bf74264c33947f207ff2c1a61 
>   
> ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveMaterializedViewsRegistry.java
>  3f73fd7fcc2d6c52a2015bdd947c1708723058d6 
>   
> ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveConfPlannerContext.java
>  b0f1a8dfafa46f2cb06ca05c673ba37c736d 
>   ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveRelBuilder.java 
> efd8a35699ef2c4bb9c363925b8adc1e2ca3cbd3 
>   
> ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/cost/HiveVolcanoPlanner.java
>  88aedb6381a293c0dd0f7d4e767df6726a86f40f 
>   
> ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/reloperators/HiveTableScan.java
>  94a3bac1a7df35c825247e51946ee6ef1b0b6342 
>   
> ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/views/HiveMaterializedViewRule.java
>  df9c1802c8983279500d3a06c1c526ce20af6146 
>   
> ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/stats/HiveRelMdPredicates.java
>  4dc48f4710196acb68a9df5331244827b212aefe 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java 
> 612deb8327d85966751834257ab686cfa74f9feb 
>   ql/src/test/queries/clientpositive/materialized_view_rewrite_1.q 
> PRE-CREATION 
>   ql/src/test/queries/clientpositive/materialized_view_rewrite_2.q 
> PRE-CREATION 
>   ql/src/test/queries/clientpositive/materialized_view_rewrite_3.q 
> PRE-CREATION 
>   ql/src/test/queries/clientpositive/materialized_view_rewrite_4.q 
> PRE-CREATION 
>   ql/src/test/queries/clientpositive/materialized_view_rewrite_5.q 
> PRE-CREATION 
>   ql/src/test/queries/clientpositive/materialized_view_rewrite_6.q 
> PRE-CREATION 
>   ql/src/test/queries/clientpositive/materialized_view_rewrite_7.q 
> PRE-CREATION 
>   ql/src/test/queries/clientpositive/materialized_view_rewrite_8.q 
> PRE-CREATION 
>   ql/src/test/queries/clientpositive/materialized_view_rewrite_9.q 
> PRE-CREATION 
>   ql/src/test/results/clientpositive/druid/druidmini_mv.q.out 
> 97f6d844806cf33ea4403b33665142c612da6e84 
>   ql/src/test/results/clientpositive/materialized_view_create_rewrite.q.out 
> 

Re: Review Request 66288: HIVE-18885 : DbNotificationListener has a deadlock between Java and DB locks (2.x line)

2018-03-26 Thread Alexander Kolbasov

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66288/#review14
---



The fix itself is fine, just some comments about comments.


hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/listener/DbNotificationListener.java
Line 71 (original), 71 (patched)


Nit: The first Javadoc sentence should not include links.



hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/listener/DbNotificationListener.java
Line 73 (original), 73 (patched)


Nit: Please use <p> for paragraph separation.



hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/listener/DbNotificationListener.java
Line 75 (original), 75 (patched)


s/puts//



hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/listener/DbNotificationListener.java
Line 76 (original), 76 (patched)


Can you add a description of how this event ID is generated? This is a very 
important piece of functionality.



hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/listener/DbNotificationListener.java
Lines 82 (patched)


Nit: use <p>



hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/listener/DbNotificationListener.java
Lines 87 (patched)


It is important to note that this is R/W lock and it is obtained using 
SELECT FOR UPDATE for the single row.



hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/listener/DbNotificationListener.java
Lines 89 (patched)


s/is can/is likely to/


- Alexander Kolbasov


On March 26, 2018, 7:42 p.m., Vihang Karajgaonkar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/66288/
> ---
> 
> (Updated March 26, 2018, 7:42 p.m.)
> 
> 
> Review request for hive and Alexander Kolbasov.
> 
> 
> Bugs: HIVE-18885
> https://issues.apache.org/jira/browse/HIVE-18885
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> HIVE-18885 : DbNotificationListener has a deadlock between Java and DB locks 
> (2.x line)
> 
> 
> Diffs
> -
> 
>   
> hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/listener/DbNotificationListener.java
>  41347c22df21a678241edeb766264e6d19c7885a 
> 
> 
> Diff: https://reviews.apache.org/r/66288/diff/1/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Vihang Karajgaonkar
> 
>



[jira] [Created] (HIVE-19052) Vectorization: Disable Vector Pass-Thru MapJoin in the presence of old-style MR FilterMaps

2018-03-26 Thread Matt McCline (JIRA)
Matt McCline created HIVE-19052:
---

 Summary: Vectorization: Disable Vector Pass-Thru MapJoin in the 
presence of old-style MR FilterMaps
 Key: HIVE-19052
 URL: https://issues.apache.org/jira/browse/HIVE-19052
 Project: Hive
  Issue Type: Bug
  Components: Hive
Affects Versions: 3.0.0
Reporter: Matt McCline


Pass-Thru VectorMapJoinOperator and VectorSMBMapJoinOperator were not designed 
to handle old-style MR FilterMaps.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Review Request 66288: HIVE-18885 : DbNotificationListener has a deadlock between Java and DB locks (2.x line)

2018-03-26 Thread Vihang Karajgaonkar via Review Board

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66288/
---

Review request for hive and Alexander Kolbasov.


Repository: hive-git


Description
---

HIVE-18885 : DbNotificationListener has a deadlock between Java and DB locks 
(2.x line)


Diffs
-

  
hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/listener/DbNotificationListener.java
 41347c22df21a678241edeb766264e6d19c7885a 


Diff: https://reviews.apache.org/r/66288/diff/1/


Testing
---


Thanks,

Vihang Karajgaonkar



[jira] [Created] (HIVE-19051) Add units to displayed Spark metrics

2018-03-26 Thread Sahil Takiar (JIRA)
Sahil Takiar created HIVE-19051:
---

 Summary: Add units to displayed Spark metrics
 Key: HIVE-19051
 URL: https://issues.apache.org/jira/browse/HIVE-19051
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Sahil Takiar


When we print Spark stats in the logs, there are no units associated with the 
metrics, which can be confusing for users. Specifically, for time-based metrics 
like {{TaskDuration}} we should display the units each value is in (I think for 
most of them it's milliseconds, but this should be confirmed).





Review Request 66285: HIVE-18770

2018-03-26 Thread Jesús Camacho Rodríguez

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66285/
---

Review request for hive and Ashutosh Chauhan.


Bugs: HIVE-18770
https://issues.apache.org/jira/browse/HIVE-18770


Repository: hive-git


Description
---

HIVE-18770


Diffs
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 
8d9b5a3194708ffacabfdb69d6af7d6193dcf156 
  ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 
5ad4406ceff5d83bf74264c33947f207ff2c1a61 
  
ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveMaterializedViewsRegistry.java
 3f73fd7fcc2d6c52a2015bdd947c1708723058d6 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveConfPlannerContext.java
 b0f1a8dfafa46f2cb06ca05c673ba37c736d 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveRelBuilder.java 
efd8a35699ef2c4bb9c363925b8adc1e2ca3cbd3 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/cost/HiveVolcanoPlanner.java
 88aedb6381a293c0dd0f7d4e767df6726a86f40f 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/reloperators/HiveTableScan.java
 94a3bac1a7df35c825247e51946ee6ef1b0b6342 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/views/HiveMaterializedViewRule.java
 df9c1802c8983279500d3a06c1c526ce20af6146 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/stats/HiveRelMdPredicates.java
 4dc48f4710196acb68a9df5331244827b212aefe 
  ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java 
612deb8327d85966751834257ab686cfa74f9feb 
  ql/src/test/queries/clientpositive/materialized_view_rewrite_1.q PRE-CREATION 
  ql/src/test/queries/clientpositive/materialized_view_rewrite_2.q PRE-CREATION 
  ql/src/test/queries/clientpositive/materialized_view_rewrite_3.q PRE-CREATION 
  ql/src/test/queries/clientpositive/materialized_view_rewrite_4.q PRE-CREATION 
  ql/src/test/queries/clientpositive/materialized_view_rewrite_5.q PRE-CREATION 
  ql/src/test/queries/clientpositive/materialized_view_rewrite_6.q PRE-CREATION 
  ql/src/test/queries/clientpositive/materialized_view_rewrite_7.q PRE-CREATION 
  ql/src/test/queries/clientpositive/materialized_view_rewrite_8.q PRE-CREATION 
  ql/src/test/queries/clientpositive/materialized_view_rewrite_9.q PRE-CREATION 
  ql/src/test/results/clientpositive/druid/druidmini_mv.q.out 
97f6d844806cf33ea4403b33665142c612da6e84 
  ql/src/test/results/clientpositive/materialized_view_create_rewrite.q.out 
4da3d0930fd30cc3ab74155efb4d82a910ea6944 
  
ql/src/test/results/clientpositive/materialized_view_create_rewrite_multi_db.q.out
 d7ee468b49af904da93a74c86f0898c310970cab 
  ql/src/test/results/clientpositive/materialized_view_rewrite_1.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/materialized_view_rewrite_2.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/materialized_view_rewrite_3.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/materialized_view_rewrite_4.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/materialized_view_rewrite_5.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/materialized_view_rewrite_6.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/materialized_view_rewrite_7.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/materialized_view_rewrite_8.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/materialized_view_rewrite_9.q.out 
PRE-CREATION 
  
standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
 3e1fea9d4fe707c59ee99781bd4c5aacdbd9d381 


Diff: https://reviews.apache.org/r/66285/diff/1/


Testing
---


Thanks,

Jesús Camacho Rodríguez



[jira] [Created] (HIVE-19050) DBNotificationListener does not catch exceptions in the cleaner thread

2018-03-26 Thread Vihang Karajgaonkar (JIRA)
Vihang Karajgaonkar created HIVE-19050:
--

 Summary: DBNotificationListener does not catch exceptions in the 
cleaner thread
 Key: HIVE-19050
 URL: https://issues.apache.org/jira/browse/HIVE-19050
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Standalone Metastore
Affects Versions: 3.0.0, 2.4.0
Reporter: Vihang Karajgaonkar
Assignee: Vihang Karajgaonkar


The DbNotificationListener class has a separate thread which cleans the old 
notifications from the database. Here is the snippet from the {{run}} method.

{noformat}
public void run() {
  while (true) {
rs.cleanNotificationEvents(ttl);
LOG.debug("Cleaner thread done");
try {
  Thread.sleep(sleepTime);
} catch (InterruptedException e) {
  LOG.info("Cleaner thread sleep interrupted", e);
}
  }
}
{noformat}

If {{rs.cleanNotificationEvents}} throws a RuntimeException (which DataNucleus 
can throw), the exception remains uncaught and the thread will die. This can lead 
to older notifications never getting cleaned until we restart HMS. Given that 
many operations generate loads of events, the notification log table can 
quickly accumulate thousands of rows which never get cleaned up.
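
A minimal sketch of the kind of fix this implies: wrap each cleanup attempt in a 
try/catch so the loop survives a failure. All names here (CleanerLoopSketch, 
cleanNotificationEvents) are stand-ins for the actual DbNotificationListener 
code, and the loop is bounded only to make the sketch runnable; the real thread 
uses while (true) with a sleep.

```java
// Illustrative sketch only: class and method names are stand-ins for the
// actual DbNotificationListener cleaner thread. The point is that the
// cleanup call is wrapped in try/catch so a RuntimeException from the
// persistence layer cannot kill the thread.
public class CleanerLoopSketch {
    static int attempts = 0;

    // Stand-in for rs.cleanNotificationEvents(ttl); fails on the first call.
    static void cleanNotificationEvents() {
        attempts++;
        if (attempts == 1) {
            throw new RuntimeException("simulated datanucleus failure");
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {      // bounded here; while (true) in HMS
            try {
                cleanNotificationEvents();
            } catch (Exception e) {        // catch everything: keep the cleaner alive
                System.out.println("cleanup failed, will retry: " + e.getMessage());
            }
            // Thread.sleep(sleepTime) would go here, as in the original snippet.
        }
        System.out.println("attempts: " + attempts);
    }
}
```

With the catch in place, the first (simulated) failure is logged and the 
remaining iterations still run.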





[jira] [Created] (HIVE-19049) Add support for Alter table add columns for Druid

2018-03-26 Thread Nishant Bangarwa (JIRA)
Nishant Bangarwa created HIVE-19049:
---

 Summary: Add support for Alter table add columns for Druid
 Key: HIVE-19049
 URL: https://issues.apache.org/jira/browse/HIVE-19049
 Project: Hive
  Issue Type: Task
Reporter: Nishant Bangarwa
Assignee: Nishant Bangarwa


Add support for Alter table add columns for Druid. 
Currently it is not supported and throws an exception. 





Re: Review Request 65716: HIVE-18696: The partition folders might not get cleaned up properly in the HiveMetaStore.add_partitions_core method if an exception occurs

2018-03-26 Thread Marta Kuczora via Review Board

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/65716/
---

(Updated March 26, 2018, 10:10 a.m.)


Review request for hive, Alexander Kolbasov, Peter Vary, and Adam Szita.


Changes
---

Rebase patch


Bugs: HIVE-18696
https://issues.apache.org/jira/browse/HIVE-18696


Repository: hive-git


Description
---

The idea behind the patch is

1) Separate the partition validation from starting the tasks which create the 
partition folders. 
Instead of doing the checks on the partitions and submitting the tasks in one 
loop, the validation was separated into its own loop. So first iterate through the 
partitions, validate the table/db names, and check for duplicates. Then if all 
partitions were correct, in the second loop submit the tasks to create the 
partition folders. This way if one of the partitions is incorrect, the 
exception will be thrown in the first loop, before the tasks are submitted. So 
we can be sure that no partition folder will be created if the list contains an 
invalid partition.

2) Handle the exceptions which occur during the execution of the tasks 
differently.
Previously if an exception occurred in one task, the remaining tasks were 
canceled, and the newly created partition folders were cleaned up in the 
finally part. The problem was that it could happen that some tasks were still 
not finished with the folder creation when cleaning up the others, so there 
could have been leftover folders. After doing some testing it turned out that 
this use case cannot be avoided completely when canceling the tasks.
The idea of this patch is to set a flag if an exception is thrown in one of the 
tasks. This flag is visible in the tasks and if its value is true, the 
partition folders won't be created. Then iterate through the remaining tasks 
and wait for them to finish. The tasks which are started before the flag got 
set will then finish creating the partition folders. The tasks which are 
started after the flag got set, won't create the partition folders, to avoid 
unnecessary work. This way it is guaranteed that all tasks have finished when 
entering the finally part where the partition folders are cleaned up.
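
The flag-based scheme described above can be sketched roughly as follows. All 
names here are illustrative stand-ins, not the actual HiveMetaStore code: one 
shared AtomicBoolean is set on the first failure, later tasks check it and skip 
the folder creation, and the main thread drains every future before cleanup.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch of the cancel-by-flag idea; not HiveMetaStore code.
public class AddPartitionsFlagSketch {

    static List<String> runExample() throws InterruptedException {
        final AtomicBoolean failureOccurred = new AtomicBoolean(false);
        ExecutorService pool = Executors.newFixedThreadPool(2);
        List<Future<String>> futures = new ArrayList<>();
        for (int i = 0; i < 4; i++) {
            final int part = i;
            futures.add(pool.submit(() -> {
                if (failureOccurred.get()) {
                    return null;            // started after a failure: skip the work
                }
                if (part == 1) {            // simulate one task failing
                    failureOccurred.set(true);
                    throw new RuntimeException("folder creation failed for part " + part);
                }
                return "created-" + part;   // simulated partition folder
            }));
        }
        List<String> created = new ArrayList<>();
        for (Future<String> f : futures) {
            try {
                String r = f.get();         // wait for EVERY task to finish
                if (r != null) {
                    created.add(r);
                }
            } catch (ExecutionException e) {
                // remember the failure, but keep draining the remaining futures
            }
        }
        pool.shutdown();
        // cleanup of the 'created' folders can now run safely: nothing is in flight
        return created;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runExample());
    }
}
```

Because every future is drained before cleanup starts, no task can still be 
creating a folder while the finally block removes folders, which is exactly the 
leftover-folder race the patch description is addressing.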


Diffs (updated)
-

  
standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
 519e8fe 
  
standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/client/TestAddPartitions.java
 4d9cb1b 
  
standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/client/TestAddPartitionsFromPartSpec.java
 1122057 


Diff: https://reviews.apache.org/r/65716/diff/4/

Changes: https://reviews.apache.org/r/65716/diff/3-4/


Testing
---

Added some new tests cases to the TestAddPartitions and 
TestAddPartitionsFromPartSpec tests.


Thanks,

Marta Kuczora



[jira] [Created] (HIVE-19048) Initscript errors are ignored

2018-03-26 Thread Zoltan Haindrich (JIRA)
Zoltan Haindrich created HIVE-19048:
---

 Summary: Initscript errors are ignored
 Key: HIVE-19048
 URL: https://issues.apache.org/jira/browse/HIVE-19048
 Project: Hive
  Issue Type: Bug
  Components: Beeline
Reporter: Zoltan Haindrich


I'd been running some queries for a while when I noticed that my initscript 
had an error; beeline stops interpreting the initscript after encountering 
the first error.

{code}
echo 'invalid;' > init.sql
echo 'select 1;' > s1.sql
beeline -u jdbc:hive2://localhost:1/ -n hive -i init.sql -f s1.sql 
[...]
Running init script init.sql
0: jdbc:hive2://localhost:1/> invalid;
Error: Error while compiling statement: FAILED: ParseException line 1:0 cannot 
recognize input near 'invalid' '' '' (state=42000,code=4)
0: jdbc:hive2://localhost:1/> select 1;
[...]
$ echo $?
0
{code}





[jira] [Created] (HIVE-19047) Only the first init file is interpreted

2018-03-26 Thread Zoltan Haindrich (JIRA)
Zoltan Haindrich created HIVE-19047:
---

 Summary: Only the first init file is interpreted
 Key: HIVE-19047
 URL: https://issues.apache.org/jira/browse/HIVE-19047
 Project: Hive
  Issue Type: Bug
  Components: Beeline
Reporter: Zoltan Haindrich


I passed multiple {{-i}} options to beeline and expected both of them to be 
loaded... but unfortunately it only parsed the first file (and ignored the 
second entirely).

I think it would be better to:

* either reject the command if it has multiple "-i" given
* or load *all* "-i" scripts





Re: Review Request 65716: HIVE-18696: The partition folders might not get cleaned up properly in the HiveMetaStore.add_partitions_core method if an exception occurs

2018-03-26 Thread Marta Kuczora via Review Board


> On March 13, 2018, 5:07 a.m., Alexander Kolbasov wrote:
> > standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
> > Line 3162 (original), 3169 (patched)
> > 
> >
> > is it possible that getSd() is null here?

No, it is not possible. If the SD is null, an exception will occur earlier when 
trying to create the partition folder. However, this case is not handled there 
either (an NPE will occur), so I will create another Jira to fix it.


- Marta


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/65716/#review199066
---


On March 8, 2018, 4:52 p.m., Marta Kuczora wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/65716/
> ---
> 
> (Updated March 8, 2018, 4:52 p.m.)
> 
> 
> Review request for hive, Alexander Kolbasov, Peter Vary, and Adam Szita.
> 
> 
> Bugs: HIVE-18696
> https://issues.apache.org/jira/browse/HIVE-18696
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> The idea behind the patch is
> 
> 1) Separate the partition validation from starting the tasks which create the 
> partition folders. 
> Instead of doing the checks on the partitions and submit the tasks in one 
> loop, separated the validation into a different loop. So first iterate 
> through the partitions, validate the table/db names, and check for 
> duplicates. Then if all partitions were correct, in the second loop submit 
> the tasks to create the partition folders. This way if one of the partitions 
> is incorrect, the exception will be thrown in the first loop, before the 
> tasks are submitted. So we can be sure that no partition folder will be 
> created if the list contains an invalid partition.
> 
> 2) Handle the exceptions which occur during the execution of the tasks 
> differently.
> Previously if an exception occurred in one task, the remaining tasks were 
> canceled, and the newly created partition folders were cleaned up in the 
> finally part. The problem was that it could happen that some tasks were still 
> not finished with the folder creation when cleaning up the others, so there 
> could have been leftover folders. After doing some testing it turned out that 
> this use case cannot be avoided completely when canceling the tasks.
> The idea of this patch is to set a flag if an exception is thrown in one of 
> the tasks. This flag is visible in the tasks and if its value is true, the 
> partition folders won't be created. Then iterate through the remaining tasks 
> and wait for them to finish. The tasks which are started before the flag got 
> set will then finish creating the partition folders. The tasks which are 
> started after the flag got set, won't create the partition folders, to avoid 
> unnecessary work. This way it is sure that all tasks are finished, when 
> entering the finally part where the partition folders are cleaned up.
> 
> 
> Diffs
> -
> 
>   
> standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
>  662de9a 
>   
> standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/client/TestAddPartitions.java
>  4d9cb1b 
>   
> standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/client/TestAddPartitionsFromPartSpec.java
>  1122057 
> 
> 
> Diff: https://reviews.apache.org/r/65716/diff/3/
> 
> 
> Testing
> ---
> 
> Added some new tests cases to the TestAddPartitions and 
> TestAddPartitionsFromPartSpec tests.
> 
> 
> Thanks,
> 
> Marta Kuczora
> 
>



Re: Confluence - Need edit permission

2018-03-26 Thread Karthik P
Confluence:
User ID : kpalanisamy
Name : Karthik Palanisamy


On Mon, Mar 26, 2018 at 2:51 PM, Karthik P  wrote:

> Team,
>
> I need edit permission to the Confluence HBaseBulkLoad page. Some
> distributions like HDP have enabled 'Tez' as the execution engine by
> default. So HFile generation may not work properly and will throw the
> following exception.
>
> {code}
>
> Caused by: java.io.IOException: wrong key class: org.apache.hadoop.io.LongWritable is not class org.apache.hadoop.hive.ql.io.HiveKey
>
> at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2332)
>
> at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2384)
>
> at org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner.readPartitions(TotalOrderPartitioner.java:306)
>
> at org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner.setConf(TotalOrderPartitioner.java:88)
>
> ... 27 more
> {code}
>
>
> Some ref: TEZ-2741 
>
>
> *Document fix:* Set map-reduce engine when generating HFiles.
> *set hive.execution.engine=mr;*
>
> --
> Thank you,
> *Karthik Palanisamy*
> Bangalore, *India*
> Mobile : +91 9940089181
> Skype : karthik.p01
>



-- 
Thank you,
*Karthik Palanisamy*
Bangalore, *India*
Mobile : +91 9940089181
Skype : karthik.p01


Confluence - Need edit permission

2018-03-26 Thread Karthik P
Team,

I need edit permission to the Confluence HBaseBulkLoad page. Some
distributions like HDP have enabled 'Tez' as the execution engine by
default. So HFile generation may not work properly and will throw the
following exception.

{code}

Caused by: java.io.IOException: wrong key class: org.apache.hadoop.io.LongWritable is not class org.apache.hadoop.hive.ql.io.HiveKey

at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2332)

at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2384)

at org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner.readPartitions(TotalOrderPartitioner.java:306)

at org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner.setConf(TotalOrderPartitioner.java:88)

... 27 more
{code}


Some ref: TEZ-2741 


*Document fix:* Set map-reduce engine when generating HFiles.
*set hive.execution.engine=mr;*

-- 
Thank you,
*Karthik Palanisamy*
Bangalore, *India*
Mobile : +91 9940089181
Skype : karthik.p01


[jira] [Created] (HIVE-19046) Refactor the common parts of the HiveMetastore add_partition_core and add_partitions_pspec_core methods

2018-03-26 Thread Marta Kuczora (JIRA)
Marta Kuczora created HIVE-19046:


 Summary: Refactor the common parts of the HiveMetastore 
add_partition_core and add_partitions_pspec_core methods
 Key: HIVE-19046
 URL: https://issues.apache.org/jira/browse/HIVE-19046
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Marta Kuczora
Assignee: Marta Kuczora


This is a follow-up Jira of the 
[HIVE-18696|https://issues.apache.org/jira/browse/HIVE-18696] 
[review|https://reviews.apache.org/r/65716/].
The biggest part of these methods use the same code. It would make sense to 
move this code part to a common method.

This code is almost the same in the two methods:
{code}
List<Future<Partition>> partFutures = Lists.newArrayList();
final Table table = tbl;
for (final Partition part : parts) {
  if (!part.getTableName().equals(tblName) || !part.getDbName().equals(dbName)) {
    throw new MetaException("Partition does not belong to target table "
        + dbName + "." + tblName + ": " + part);
  }

  boolean shouldAdd = startAddPartition(ms, part, ifNotExists);
  if (!shouldAdd) {
    existingParts.add(part);
    LOG.info("Not adding partition " + part + " as it already exists");
    continue;
  }

  final UserGroupInformation ugi;
  try {
    ugi = UserGroupInformation.getCurrentUser();
  } catch (IOException e) {
    throw new RuntimeException(e);
  }

  partFutures.add(threadPool.submit(new Callable<Partition>() {
    @Override
    public Partition call() throws Exception {
      ugi.doAs(new PrivilegedExceptionAction<Object>() {
        @Override
        public Object run() throws Exception {
          try {
            boolean madeDir = createLocationForAddedPartition(table, part);
            if (addedPartitions.put(new PartValEqWrapper(part), madeDir) != null) {
              // Technically, for ifNotExists case, we could insert one and discard the other
              // because the first one now "exists", but it seems better to report the problem
              // upstream as such a command doesn't make sense.
              throw new MetaException("Duplicate partitions in the list: " + part);
            }
            initializeAddedPartition(table, part, madeDir);
          } catch (MetaException e) {
            throw new IOException(e.getMessage(), e);
          }
          return null;
        }
      });
      return part;
    }
  }));
}

try {
  for (Future<Partition> partFuture : partFutures) {
    Partition part = partFuture.get();
    if (part != null) {
      newParts.add(part);
    }
  }
} catch (InterruptedException | ExecutionException e) {
  // cancel other tasks
  for (Future<Partition> partFuture : partFutures) {
    partFuture.cancel(true);
  }
  throw new MetaException(e.getMessage());
}
{code}
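
One way the duplicated block could be factored out, as the description suggests, 
is a single helper that both add_partition_core and add_partitions_pspec_core 
call. The signature below is purely illustrative (names and parameters are 
guesses, not taken from any patch):

```java
// Hypothetical shape of the extracted helper; not actual HiveMetaStore code.
private List<Partition> addPartitionFolders(RawStore ms, Table table,
    List<Partition> parts, boolean ifNotExists,
    List<Partition> existingParts) throws MetaException {
  // the shared validation loop, task submission, and future draining
  // from the snippet above would live here
  ...
}
```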





Re: Review Request 65716: HIVE-18696: The partition folders might not get cleaned up properly in the HiveMetaStore.add_partitions_core method if an exception occurs

2018-03-26 Thread Marta Kuczora via Review Board


> On Feb. 21, 2018, 3:32 p.m., Sahil Takiar wrote:
> > Would be good to know which Hive queries invoke this method.
> 
> Marta Kuczora wrote:
> Thanks a lot Sahil for the review. I will check where these methods are 
> used and come back to you with the answer a bit later.

I checked how these methods are used.

The 'add_partition_core' method is used when adding multiple partitions to the 
table. A simple query, like the following, can trigger it.

ALTER TABLE bubu ADD PARTITION (year='2017',month='march') PARTITION 
(year='2017',month='april') PARTITION (year='2018',month='march') PARTITION 
(year='2018',month='may') PARTITION (year='2017',month='march', day="3");

It is also used by the DDLTask.createPartitionsInBatches method which is used 
by the 'msck repair' command.


I didn't find much about the 'add_partitions_pspec_core' method. I only found 
that it is used by the HCatClientHMSImpl.addPartitionSpec method, but I don't 
know if the HCatClientHMSImpl is used or how it is used.


> On Feb. 21, 2018, 3:32 p.m., Sahil Takiar wrote:
> > standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
> > Line 3032 (original), 3065 (patched)
> > 
> >
> > this code looks very similar to the block above. I know it was never 
> > the intention of this JIRA to do any re-factoring, but how difficult would 
> > it be to move all this code into a common method so that we don't have to 
> > fix the bug in two places? not a blocking issue though
> 
> Marta Kuczora wrote:
> Yeah, I absolutely agree. This code duplication annoys me as well; I just 
> wasn't sure that doing the refactoring is acceptable in the scope of this 
> Jira. But it is not so difficult, so I will upload a patch where I moved the 
> common parts to a separate method and we can decide if it is ok like that or 
> rather do it in a different Jira.
> 
> Marta Kuczora wrote:
> I checked how this could be refactored and there are some differences 
> between the methods which make it not that straightforward. It is not that 
> difficult and basically I have the patch, but I would do it in the scope of 
> an other Jira, so we can discuss some details there. Would this be ok for you 
> Sahil?

Created a follow-up Jira for the refactoring
https://issues.apache.org/jira/browse/HIVE-19046


- Marta


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/65716/#review197829
---


On March 8, 2018, 4:52 p.m., Marta Kuczora wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/65716/
> ---
> 
> (Updated March 8, 2018, 4:52 p.m.)
> 
> 
> Review request for hive, Alexander Kolbasov, Peter Vary, and Adam Szita.
> 
> 
> Bugs: HIVE-18696
> https://issues.apache.org/jira/browse/HIVE-18696
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> The idea behind the patch is
> 
> 1) Separate the partition validation from starting the tasks which create the 
> partition folders. 
> Instead of doing the checks on the partitions and submit the tasks in one 
> loop, separated the validation into a different loop. So first iterate 
> through the partitions, validate the table/db names, and check for 
> duplicates. Then if all partitions were correct, in the second loop submit 
> the tasks to create the partition folders. This way if one of the partitions 
> is incorrect, the exception will be thrown in the first loop, before the 
> tasks are submitted. So we can be sure that no partition folder will be 
> created if the list contains an invalid partition.
> 
> 2) Handle the exceptions which occur during the execution of the tasks 
> differently.
> Previously if an exception occured in one task, the remaining tasks were 
> canceled, and the newly created partition folders were cleaned up in the 
> finally part. The problem was that it could happen that some tasks were still 
> not finished with the folder creation when cleaning up the others, so there 
> could have been leftover folders. After doing some testing it turned out that 
> this use case cannot be avoided completely when canceling the tasks.
> The idea of this patch is to set a flag if an exception is thrown in one of 
> the tasks. This flag is visible in the tasks and if its value is true, the 
> partition folders won't be created. Then iterate through the remaining tasks 
> and wait for them to finish. The tasks which are started before the flag got 
> set will then finish creating the partition folders. The tasks which are 
> started after the flag got set, won't create the partition folders, to avoid 
> unnecessary work. This way it is sure that all tasks are