[jira] [Created] (HIVE-14305) To/From UTC timestamp may return incorrect result because of DST

2016-07-20 Thread Rui Li (JIRA)
Rui Li created HIVE-14305:
-

 Summary: To/From UTC timestamp may return incorrect result because 
of DST
 Key: HIVE-14305
 URL: https://issues.apache.org/jira/browse/HIVE-14305
 Project: Hive
  Issue Type: Bug
Reporter: Rui Li
Assignee: Rui Li








[jira] [Created] (HIVE-14304) Beeline command will fail when entireLineAsCommand set to true

2016-07-20 Thread niklaus xiao (JIRA)
niklaus xiao created HIVE-14304:
---

 Summary: Beeline command will fail when entireLineAsCommand set to 
true
 Key: HIVE-14304
 URL: https://issues.apache.org/jira/browse/HIVE-14304
 Project: Hive
  Issue Type: Bug
  Components: Beeline
Affects Versions: 1.3.0, 2.2.0
Reporter: niklaus xiao
Assignee: niklaus xiao


Start beeline with:
{code}
beeline --entireLineAsCommand=true
{code}

show tables fails:
{code}
0: jdbc:hive2://189.39.151.44:21066/> show tables;
Error: Error while compiling statement: FAILED: ParseException line 1:11 extraneous input ';' expecting EOF near '' (state=42000,code=4)
{code}

We should remove the trailing semi-colon.
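
A minimal sketch of the proposed fix, assuming it lives where Beeline dispatches the whole line (the variable and option plumbing here are illustrative, not the actual Beeline source):
{code}
// Hedged sketch: strip a trailing semicolon before sending the whole line,
// since the server-side parser rejects it when entireLineAsCommand=true.
String cmd = line.trim();
if (entireLineAsCommand && cmd.endsWith(";")) {
  cmd = cmd.substring(0, cmd.length() - 1);
}
{code}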





[jira] [Created] (HIVE-14303) CommonJoinOperator.checkAndGenObject should return directly in CLOSE state to avoid NPE if ExecReducer.close is called twice.

2016-07-20 Thread zhihai xu (JIRA)
zhihai xu created HIVE-14303:


 Summary: CommonJoinOperator.checkAndGenObject should return 
directly in CLOSE state to avoid NPE if ExecReducer.close is called twice.
 Key: HIVE-14303
 URL: https://issues.apache.org/jira/browse/HIVE-14303
 Project: Hive
  Issue Type: Bug
Reporter: zhihai xu
Assignee: zhihai xu
 Fix For: 2.1.0


CommonJoinOperator.checkAndGenObject should return directly in CLOSE state to 
avoid an NPE if ExecReducer.close is called twice. ExecReducer.close implements 
the Closeable interface, so it can be called multiple times. Because of this 
bug, we saw the following NPE, which hid the real exception.
{code}
Error: java.lang.RuntimeException: Hive Runtime Error while closing operators: null
    at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:296)
    at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.NullPointerException
    at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:718)
    at org.apache.hadoop.hive.ql.exec.JoinOperator.endGroup(JoinOperator.java:256)
    at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:284)
    ... 8 more
{code}
The code from ReduceTask.runOldReducer:
{code}
  reducer.close(); //line 453
  reducer = null;
  
  out.close(reporter);
  out = null;
} finally {
  IOUtils.cleanup(LOG, reducer);// line 459
  closeQuietly(out, reporter);
}
{code}
Based on the above stack trace and code, reducer.close() is called twice: an 
exception occurred during the first reducer.close() at line 453, so the code 
exited before reducer was set to null. The NullPointerException is then 
triggered when reducer.close() is called a second time from IOUtils.cleanup at 
line 459, and it hides the real exception from the first call.
The reason for the NPE: the first reducer.close() calls 
CommonJoinOperator.closeOp, which clears {{storage}}:
{code}
Arrays.fill(storage, null);
{code}
The second reducer.close() then hits the NPE because {{storage[alias]}} was 
set to null by the first close.
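
The fix the summary proposes is an early return; a minimal sketch (the exact signature and state check are assumptions based on the Operator state machine, not the committed patch):
{code}
// Hedged sketch for CommonJoinOperator.checkAndGenObject: bail out if the
// operator is already closed, since closeOp() nulled out storage[] and any
// further access to storage[alias] would NPE.
protected void checkAndGenObject() throws HiveException {
  if (state == State.CLOSE) {
    return;
  }
  // ... existing join-emission logic ...
}
{code}
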
The following reducer log provides further evidence:
{code}
2016-07-14 22:24:51,016 INFO [main] org.apache.hadoop.hive.ql.exec.JoinOperator: 0 finished. closing...
2016-07-14 22:24:51,016 INFO [main] org.apache.hadoop.hive.ql.exec.JoinOperator: 0 finished. closing...
2016-07-14 22:24:51,016 INFO [main] org.apache.hadoop.hive.ql.exec.JoinOperator: SKEWJOINFOLLOWUPJOBS:0
2016-07-14 22:24:51,016 INFO [main] org.apache.hadoop.hive.ql.exec.SelectOperator: 1 finished. closing...
2016-07-14 22:24:51,016 INFO [main] org.apache.hadoop.hive.ql.exec.SelectOperator: 2 finished. closing...
2016-07-14 22:24:51,016 INFO [main] org.apache.hadoop.hive.ql.exec.SelectOperator: 3 finished. closing...
2016-07-14 22:24:51,016 INFO [main] org.apache.hadoop.hive.ql.exec.FileSinkOperator: 4 finished. closing...
2016-07-14 22:24:51,016 INFO [main] org.apache.hadoop.hive.ql.exec.FileSinkOperator: FS[4]: records written - 53466
2016-07-14 22:25:11,555 ERROR [main] ExecReducer: Hit error while closing operators - failing tree
2016-07-14 22:25:11,649 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.lang.RuntimeException: Hive Runtime Error while closing operators: null
    at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:296)
    at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.NullPointerException
    at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:718)
    at org.apache.hadoop.hive.ql.exec.JoinOperator.endGroup(JoinOperator.java:256)
{code}

Re: Review Request 49965: HIVE-13995 Hive generates inefficient metastore queries for TPCDS tables with 1800+ partitions leading to higher compile time

2016-07-20 Thread Ashutosh Chauhan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49965/#review143041
---




metastore/if/hive_metastore.thrift (line 548)


You should provide a default value here:
5: optional i32 numPartitions = -1


- Ashutosh Chauhan


On July 20, 2016, 10:16 p.m., Hari Sankar Sivarama Subramaniyan wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/49965/
> ---
> 
> (Updated July 20, 2016, 10:16 p.m.)
> 
> 
> Review request for hive and Ashutosh Chauhan.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> Hive generates inefficient metastore queries for TPCDS tables with 1800+ 
> partitions leading to higher compile time
> 
> 
> Diffs
> -
> 
>   
> itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java
>  d90085b 
>   
> itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/hbase/TestHBaseAggrStatsCacheIntegration.java
>  51d96dd 
>   metastore/if/hive_metastore.thrift 4d92b73 
>   metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java 
> 38c0eed 
>   
> metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java 
> 909d8eb 
>   metastore/src/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java 
> b6fe502 
>   metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java 
> 9c900af 
>   metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java 
> 5adfa02 
>   metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java bbd47b8 
>   metastore/src/java/org/apache/hadoop/hive/metastore/hbase/HBaseStore.java 
> c65c7a4 
>   
> metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreControlledCommit.java
>  1ea72a0 
>   
> metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreForJdoConnection.java
>  3e6acc7 
>   
> metastore/src/test/org/apache/hadoop/hive/metastore/hbase/TestHBaseAggregateStatsCache.java
>  6cd3a46 
>   
> metastore/src/test/org/apache/hadoop/hive/metastore/hbase/TestHBaseAggregateStatsCacheWithBitVector.java
>  e0c4094 
>   
> metastore/src/test/org/apache/hadoop/hive/metastore/hbase/TestHBaseAggregateStatsExtrapolation.java
>  f4e55ed 
>   
> metastore/src/test/org/apache/hadoop/hive/metastore/hbase/TestHBaseAggregateStatsNDVUniformDist.java
>  62918be 
>   ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java ef0bb3d 
>   ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionPruner.java 
> 26e936e 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/PrunedPartitionList.java 
> da2e1e2 
>   ql/src/java/org/apache/hadoop/hive/ql/stats/StatsUtils.java d8acf94 
> 
> Diff: https://reviews.apache.org/r/49965/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Hari Sankar Sivarama Subramaniyan
> 
>



[jira] [Created] (HIVE-14302) Tez: Optimized Hashtable can support DECIMAL keys of same precision

2016-07-20 Thread Gopal V (JIRA)
Gopal V created HIVE-14302:
--

 Summary: Tez: Optimized Hashtable can support DECIMAL keys of same 
precision
 Key: HIVE-14302
 URL: https://issues.apache.org/jira/browse/HIVE-14302
 Project: Hive
  Issue Type: Improvement
  Components: Tez
Affects Versions: 2.2.0
Reporter: Gopal V


Decimal support in the optimized hashtable was ruled out on the basis that 
Decimal(10,1) == Decimal(10,2) when they hold "1.0" and "1.00": equal values 
with different representations.

However, joins no longer have any issue with decimal precision, because both 
sides are cast to a common type.

{code}
create temporary table x (a decimal(10,2), b decimal(10,1)) stored as orc;
insert into x values (1.0, 1.0);

> explain logical select count(1) from x, x x1 where x.a = x1.b;
OK  
LOGICAL PLAN:
$hdt$_0:$hdt$_0:x
  TableScan (TS_0)
alias: x
filterExpr: (a is not null and true) (type: boolean)
Filter Operator (FIL_18)
  predicate: (a is not null and true) (type: boolean)
  Select Operator (SEL_2)
expressions: a (type: decimal(10,2))
outputColumnNames: _col0
Reduce Output Operator (RS_6)
  key expressions: _col0 (type: decimal(11,2))
  sort order: +
  Map-reduce partition columns: _col0 (type: decimal(11,2))
  Join Operator (JOIN_8)
condition map:
 Inner Join 0 to 1
keys:
  0 _col0 (type: decimal(11,2))
  1 _col0 (type: decimal(11,2))
Group By Operator (GBY_11)
  aggregations: count(1)
  mode: hash
  outputColumnNames: _col0
{code}

Note the cast up to decimal(11,2) in the plan, which normalizes both sides of 
the join so that HiveDecimal values can be compared as-is.





[jira] [Created] (HIVE-14301) insert overwrite fails for nonpartitioned tables in s3

2016-07-20 Thread Rajesh Balamohan (JIRA)
Rajesh Balamohan created HIVE-14301:
---

 Summary: insert overwrite fails for nonpartitioned tables in s3
 Key: HIVE-14301
 URL: https://issues.apache.org/jira/browse/HIVE-14301
 Project: Hive
  Issue Type: Bug
Reporter: Rajesh Balamohan
Assignee: Rajesh Balamohan
Priority: Minor


{noformat}

hive> insert overwrite table s3_2 select * from default.test2;
Query ID = hrt_qa_20160719164737_90fb1f30-0ade-4a64-ab65-a6a7550be25a
Total jobs = 1
Launching Job 1 out of 1


Status: Running (Executing on YARN cluster with App id application_1468941549982_0010)


VERTICES  STATUS     TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED

Map 1 ..  SUCCEEDED      1          1        0        0       0       0

VERTICES: 01/01  [==>>] 100%  ELAPSED TIME: 11.90 s

Loading data to table default.s3_2
Failed with exception java.io.IOException: rename for src path: s3a://test-ks/test2/.hive-staging_hive_2016-07-19_16-47-37_787_4725676452829013403-1/-ext-1/00_0.deflate to dest path:s3a://test-ks/test2/00_0.deflate returned false
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask


2016-07-19 16:43:46,244 ERROR [main]: exec.Task (SessionState.java:printError(948)) - Failed with exception java.io.IOException: rename for src path: s3a://test-ks/testing/.hive-staging_hive_2016-07-19_16-42-20_739_1716954454570249450-1/-ext-1/00_0.deflate to dest path:s3a://test-ks/testing/00_0.deflate returned false
org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: rename for src path: s3a://test-ks/testing/.hive-staging_hive_2016-07-19_16-42-20_739_1716954454570249450-1/-ext-1/00_0.deflate to dest path:s3a://test-ks/testing/00_0.deflate returned false
    at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2856)
    at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:3113)
    at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:1700)
    at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:328)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1726)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1472)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1271)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1138)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1128)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:216)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:168)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:379)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:739)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:624)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.io.IOException: rename for src path: s3a://test-ks/testing/.hive-staging_hive_2016-07-19_16-42-20_739_1716954454570249450-1/-ext-1/00_0.deflate to dest path:s3a://test-ks/testing/00_0.deflate returned false
    at org.apache.hadoop.hive.ql.metadata.Hive$3.call(Hive.java:2836)
    at org.apache.hadoop.hive.ql.metadata.Hive$3.call(Hive.java:2825)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
{noformat}





[jira] [Created] (HIVE-14300) LLAP: Renaming query specific files on completion should wait for completion of threads in the daemon

2016-07-20 Thread Siddharth Seth (JIRA)
Siddharth Seth created HIVE-14300:
-

 Summary: LLAP: Renaming query specific files on completion should 
wait for completion of threads in the daemon
 Key: HIVE-14300
 URL: https://issues.apache.org/jira/browse/HIVE-14300
 Project: Hive
  Issue Type: Bug
Reporter: Siddharth Seth
Assignee: Siddharth Seth


Post HIVE-14224, there's a race where the AM can inform the daemon about query 
completion while local threads that were executing the query are still 
cleaning up.
This can result in the file being moved early, and a new file being created 
which is never moved.





[jira] [Created] (HIVE-14299) Log serialized plan size

2016-07-20 Thread Prasanth Jayachandran (JIRA)
Prasanth Jayachandran created HIVE-14299:


 Summary: Log serialized plan size 
 Key: HIVE-14299
 URL: https://issues.apache.org/jira/browse/HIVE-14299
 Project: Hive
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Prasanth Jayachandran
Assignee: Prasanth Jayachandran
Priority: Minor


It would be good to log the size of the serialized plan. This can help 
identify cases where large objects are accidentally serialized.
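
Something as simple as the following would do; the serializer entry point shown is a hypothetical helper, not the actual Hive utility:
{code}
// Hedged sketch: measure the plan after serialization and log the size.
byte[] planBytes = serializePlanToBytes(work);  // hypothetical helper
LOG.info("Serialized plan size: " + planBytes.length + " bytes");
{code}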





Re: Review Request 49766: HIVE-14035 Enable predicate pushdown to delta files created by ACID Transactions

2016-07-20 Thread Saket Saurabh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49766/
---

(Updated July 20, 2016, 3:55 p.m.)


Review request for hive and Eugene Koifman.


Changes
---

Updated the patch by rebasing with master. No additional code changes. Same as 
Patch #10 at https://issues.apache.org/jira/browse/HIVE-14035


Repository: hive-git


Description
---

https://issues.apache.org/jira/browse/HIVE-14035

In the current Hive version, delta files created by ACID transactions do not 
allow predicate pushdown if they contain any update/delete events. This 
preserves correctness under the multi-version approach used during event 
collapsing, where an update event overwrites an existing insert event.
This JIRA proposes to split an update event into a delete event followed by a 
new insert event, which enables predicate pushdown to all delta files without 
breaking correctness. To support backward compatibility, this JIRA also 
proposes to add some form of versioning to ACID so that different versions of 
ACID transactions can coexist.
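
A schematic sketch of the proposed split (the event model is simplified for illustration; real ACID events carry transaction/bucket/rowId metadata and are written by OrcRecordUpdater):
{code}
// Hedged illustration: rewrite one update event as delete + insert so every
// event in a delta file stands on its own and predicate pushdown stays safe.
void writeUpdate(long rowId, Object newRow) throws IOException {
  writeDelete(rowId);   // delete event for the old version of the row
  writeInsert(newRow);  // new insert event carrying the updated values
}
{code}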


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 66203a5 
  
hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/FosterStorageHandler.java
 14f7316 
  
hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/AbstractRecordWriter.java
 974c6b8 
  metastore/if/hive_metastore.thrift 4d92b73 
  metastore/src/gen/thrift/gen-cpp/hive_metastore_constants.h ae14bd1 
  metastore/src/gen/thrift/gen-cpp/hive_metastore_constants.cpp f982bf2 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/hive_metastoreConstants.java
 5a666f2 
  metastore/src/gen/thrift/gen-php/metastore/Types.php f505208 
  metastore/src/gen/thrift/gen-py/hive_metastore/constants.py d1c07a5 
  metastore/src/gen/thrift/gen-rb/hive_metastore_constants.rb eeccc84 
  
metastore/src/java/org/apache/hadoop/hive/metastore/TransactionalValidationListener.java
 3e74675 
  orc/src/java/org/apache/orc/impl/TreeReaderFactory.java c4a2093 
  ql/src/java/org/apache/hadoop/hive/ql/exec/FetchTask.java db6848a 
  ql/src/java/org/apache/hadoop/hive/ql/exec/SMBMapJoinOperator.java 57b6c67 
  ql/src/java/org/apache/hadoop/hive/ql/exec/mr/MapredLocalTask.java 23a13d6 
  ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java c150ec5 
  ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java 945b828 
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java 69d58d6 
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcOutputFormat.java b0f8c8b 
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcRecordUpdater.java e577961 
  ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java ef0bb3d 
  ql/src/java/org/apache/hadoop/hive/ql/plan/TableScanDesc.java 8cf261d 
  ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/CompactorMR.java 6caca98 
  ql/src/test/org/apache/hadoop/hive/ql/TestTxnCommands2.java d48e441 
  ql/src/test/org/apache/hadoop/hive/ql/io/TestAcidUtils.java b83cea4 

Diff: https://reviews.apache.org/r/49766/diff/


Testing
---

Tests for the feature are in 
ql/src/test/org/apache/hadoop/hive/ql/TestTxnCommands2.java. These are mostly 
integration tests that test end-to-end insert/update/delete scenarios followed 
by compaction and cleaning.


Thanks,

Saket Saurabh



Re: Review Request 49965: HIVE-13995 Hive generates inefficient metastore queries for TPCDS tables with 1800+ partitions leading to higher compile time

2016-07-20 Thread Hari Sankar Sivarama Subramaniyan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49965/
---

(Updated July 20, 2016, 10:16 p.m.)


Review request for hive and Ashutosh Chauhan.


Repository: hive-git


Description
---

Hive generates inefficient metastore queries for TPCDS tables with 1800+ 
partitions leading to higher compile time


Diffs (updated)
-

  
itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java
 d90085b 
  
itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/hbase/TestHBaseAggrStatsCacheIntegration.java
 51d96dd 
  metastore/if/hive_metastore.thrift 4d92b73 
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java 
38c0eed 
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java 
909d8eb 
  metastore/src/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java 
b6fe502 
  metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java 
9c900af 
  metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java 5adfa02 
  metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java bbd47b8 
  metastore/src/java/org/apache/hadoop/hive/metastore/hbase/HBaseStore.java 
c65c7a4 
  
metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreControlledCommit.java
 1ea72a0 
  
metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreForJdoConnection.java
 3e6acc7 
  
metastore/src/test/org/apache/hadoop/hive/metastore/hbase/TestHBaseAggregateStatsCache.java
 6cd3a46 
  
metastore/src/test/org/apache/hadoop/hive/metastore/hbase/TestHBaseAggregateStatsCacheWithBitVector.java
 e0c4094 
  
metastore/src/test/org/apache/hadoop/hive/metastore/hbase/TestHBaseAggregateStatsExtrapolation.java
 f4e55ed 
  
metastore/src/test/org/apache/hadoop/hive/metastore/hbase/TestHBaseAggregateStatsNDVUniformDist.java
 62918be 
  ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java ef0bb3d 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionPruner.java 
26e936e 
  ql/src/java/org/apache/hadoop/hive/ql/parse/PrunedPartitionList.java da2e1e2 
  ql/src/java/org/apache/hadoop/hive/ql/stats/StatsUtils.java d8acf94 

Diff: https://reviews.apache.org/r/49965/diff/


Testing
---


Thanks,

Hari Sankar Sivarama Subramaniyan



[jira] [Created] (HIVE-14298) NPE could be thrown in HMS when an ExpressionTree could not be made from a filter

2016-07-20 Thread Chaoyu Tang (JIRA)
Chaoyu Tang created HIVE-14298:
--

 Summary: NPE could be thrown in HMS when an ExpressionTree could 
not be made from a filter
 Key: HIVE-14298
 URL: https://issues.apache.org/jira/browse/HIVE-14298
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: Chaoyu Tang
Assignee: Chaoyu Tang


In many cases an ExpressionTree cannot be made from a filter (e.g. the parser 
fails to parse the filter) and its value is null. This null is then passed 
around and used by a couple of HMS methods, which can cause a 
NullPointerException.
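
A minimal sketch of the defensive check (the helper name is illustrative; the real call sites are the HMS methods that consume the tree):
{code}
// Hedged sketch: fail fast with a meaningful error instead of passing a
// null ExpressionTree on to methods that dereference it.
ExpressionTree tree = makeExpressionTree(filter);  // may return null on parse failure
if (tree == null) {
  throw new MetaException("Error parsing partition filter: " + filter);
}
{code}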





[jira] [Created] (HIVE-14297) OrcRecordUpdater floods logs trying to create _orc_acid_version file

2016-07-20 Thread Eugene Koifman (JIRA)
Eugene Koifman created HIVE-14297:
-

 Summary: OrcRecordUpdater floods logs trying to create 
_orc_acid_version file
 Key: HIVE-14297
 URL: https://issues.apache.org/jira/browse/HIVE-14297
 Project: Hive
  Issue Type: Bug
  Components: Transactions
Affects Versions: 1.3.0
Reporter: Eugene Koifman


{noformat}
try {
  FSDataOutputStream strm = fs.create(new Path(path, ACID_FORMAT), false);
  strm.writeInt(ORC_ACID_VERSION);
  strm.close();
} catch (IOException ioe) {
  if (LOG.isDebugEnabled()) {
LOG.debug("Failed to create " + path + "/" + ACID_FORMAT + " with " +
ioe);
  }
}
{noformat}

This file is created in the table/partition directory, so in streaming ingest 
cases the create is attempted repeatedly, and HDFS prints a long stack trace 
with a WARN:

{noformat}
2016-07-18 09:22:13.051 o.a.h.i.r.RetryInvocationHandler [WARN] Exception while invoking ClientNamenodeProtocolTranslatorPB.create over null. Not retrying because try once and fail.
org.apache.hadoop.ipc.RemoteException: /apps/hive/warehouse/stormdb.db/store_sales/dt=2016%2F07%2F18/_orc_acid_version for client 172.22.111.42 already exists
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2639)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2526)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2410)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:729)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:405)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)

    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552) ~[stormjar.jar:?]
    at org.apache.hadoop.ipc.Client.call(Client.java:1496) ~[stormjar.jar:?]
    at org.apache.hadoop.ipc.Client.call(Client.java:1396) ~[stormjar.jar:?]
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233) ~[stormjar.jar:?]
    at com.sun.proxy.$Proxy44.create(Unknown Source) ~[?:?]
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:311) ~[stormjar.jar:?]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_77]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_77]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_77]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_77]
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278) [stormjar.jar:?]
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194) [stormjar.jar:?]
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176) [stormjar.jar:?]
    at com.sun.proxy.$Proxy45.create(Unknown Source) [?:?]
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1719) [stormjar.jar:?]
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1699) [stormjar.jar:?]
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1634) [stormjar.jar:?]
    at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:478) [stormjar.jar:?]
    at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:474) [stormjar.jar:?]
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) [stormjar.jar:?]
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:474) [stormjar.jar:?]
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:415) [stormjar.jar:?]
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:926) [stormjar.jar:?]
    at org.apache.hadoop.fs.Fi
{noformat}
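
One possible mitigation, sketched under the assumption that probing for the file first is acceptable (a sketch, not necessarily the actual fix):
{code}
// Hedged sketch: check for the version file before attempting the create, so
// streaming ingest does not hit the "already exists" path (and the WARN) on
// every write to the same table/partition directory.
Path versionFile = new Path(path, ACID_FORMAT);
if (!fs.exists(versionFile)) {
  try {
    FSDataOutputStream strm = fs.create(versionFile, false);
    strm.writeInt(ORC_ACID_VERSION);
    strm.close();
  } catch (IOException ioe) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("Failed to create " + versionFile + " with " + ioe);
    }
  }
}
{code}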

[jira] [Created] (HIVE-14296) Session count is not decremented when HS2 clients do not shutdown cleanly.

2016-07-20 Thread Naveen Gangam (JIRA)
Naveen Gangam created HIVE-14296:


 Summary: Session count is not decremented when HS2 clients do not 
shutdown cleanly.
 Key: HIVE-14296
 URL: https://issues.apache.org/jira/browse/HIVE-14296
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 2.0.0
Reporter: Naveen Gangam
Assignee: Naveen Gangam


When a JDBC client like beeline abruptly disconnects from HS2, the session 
gets closed on the server side, but the session count reported in the logs is 
incorrect: it never gets decremented.
For example, I created 6 connections from the same instance of beeline to HS2.
{code}
2016-07-20T15:05:17,987  INFO [HiveServer2-Handler-Pool: Thread-40] thrift.ThriftCLIService: Opened a session SessionHandle [28b225ee-204f-4b3e-b4fd-0039ef8e276e], current sessions: 1
.
2016-07-20T15:05:24,239  INFO [HiveServer2-Handler-Pool: Thread-45] thrift.ThriftCLIService: Opened a session SessionHandle [1d267de8-ff9a-4e76-ac5c-e82c871588e7], current sessions: 2
.
2016-07-20T15:05:25,710  INFO [HiveServer2-Handler-Pool: Thread-50] thrift.ThriftCLIService: Opened a session SessionHandle [04d53deb-8965-464b-aa3f-7042304cfb54], current sessions: 3
.
2016-07-20T15:05:26,795  INFO [HiveServer2-Handler-Pool: Thread-55] thrift.ThriftCLIService: Opened a session SessionHandle [b4bb8b86-74e1-4e3c-babb-674d34ad1caf], current sessions: 4
2016-07-20T15:05:28,160  INFO [HiveServer2-Handler-Pool: Thread-60] thrift.ThriftCLIService: Opened a session SessionHandle [6d3c3ed9-fadb-4673-8c15-3315b7e2995d], current sessions: 5
.
2016-07-20T15:05:29,136  INFO [HiveServer2-Handler-Pool: Thread-65] thrift.ThriftCLIService: Opened a session SessionHandle [88b630c0-f272-427d-8263-febfef8d], current sessions: 6
{code}

When I Ctrl-C the beeline process, I see the following in the HS2 logs:
{code}
2016-07-20T15:11:37,858  INFO [HiveServer2-Handler-Pool: Thread-55] thrift.ThriftCLIService: Session disconnected without closing properly. 
2016-07-20T15:11:37,858  INFO [HiveServer2-Handler-Pool: Thread-40] thrift.ThriftCLIService: Session disconnected without closing properly. 
2016-07-20T15:11:37,858  INFO [HiveServer2-Handler-Pool: Thread-65] thrift.ThriftCLIService: Session disconnected without closing properly. 
2016-07-20T15:11:37,858  INFO [HiveServer2-Handler-Pool: Thread-60] thrift.ThriftCLIService: Session disconnected without closing properly. 
2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-50] thrift.ThriftCLIService: Session disconnected without closing properly. 
2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-45] thrift.ThriftCLIService: Session disconnected without closing properly. 
2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-55] thrift.ThriftCLIService: Closing the session: SessionHandle [b4bb8b86-74e1-4e3c-babb-674d34ad1caf]
2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-40] thrift.ThriftCLIService: Closing the session: SessionHandle [28b225ee-204f-4b3e-b4fd-0039ef8e276e]
2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-65] thrift.ThriftCLIService: Closing the session: SessionHandle [88b630c0-f272-427d-8263-febfef8d]
2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-60] thrift.ThriftCLIService: Closing the session: SessionHandle [6d3c3ed9-fadb-4673-8c15-3315b7e2995d]
2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-45] thrift.ThriftCLIService: Closing the session: SessionHandle [1d267de8-ff9a-4e76-ac5c-e82c871588e7]
2016-07-20T15:11:37,859  INFO [HiveServer2-Handler-Pool: Thread-50] thrift.ThriftCLIService: Closing the session: SessionHandle [04d53deb-8965-464b-aa3f-7042304cfb54]
{code}

The next time I connect to HS2 via beeline, I see
{code}
2016-07-20T15:14:33,679  INFO [HiveServer2-Handler-Pool: Thread-50] thrift.ThriftCLIService: Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V8
2016-07-20T15:14:33,710  INFO [HiveServer2-Handler-Pool: Thread-50] session.SessionState: Created HDFS directory: /tmp/hive/hive/d47759e8-df3a-4504-9f28-99ff5247352c
2016-07-20T15:14:33,725  INFO [HiveServer2-Handler-Pool: Thread-50] session.SessionState: Created local directory: /var/folders/_3/0w477k4j5bjd6h967rw4vflwgp/T/ngangam/d47759e8-df3a-4504-9f28-99ff5247352c
2016-07-20T15:14:33,735  INFO [HiveServer2-Handler-Pool: Thread-50] session.SessionState: Created HDFS directory: /tmp/hive/hive/d47759e8-df3a-4504-9f28-99ff5247352c/_tmp_space.db
2016-07-20T15:14:33,737  INFO [HiveServer2-Handler-Pool: Thread-50] session.HiveSessionImpl: Operation log session directory is created: /var/folders/_3/0w477k4j5bjd6h967rw4vflwgp/T/ngangam/operation_logs/d47759e8-df3a-4504-9f28-99ff5247352c
2016-07-20T15:14:33,737  INFO [HiveServer2-Handler-Pool: Thread-50] thrift.ThriftCLIService: Opened a session SessionHandle [d47759e8-df3a-4504-9f28-99ff524
{code}
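
A hedged sketch of where the missing decrement likely belongs; the class and method names below are assumptions, not the actual HiveServer2 source:
{code}
import java.util.concurrent.atomic.AtomicInteger;

// Hedged sketch: whatever counter the open-session path increments must also
// be decremented on the unclean-disconnect path.
class SessionCounter {
  private final AtomicInteger openSessions = new AtomicInteger();

  int onOpenSession() {
    // backs the "Opened a session ..., current sessions: N" log line
    return openSessions.incrementAndGet();
  }

  int onDisconnectWithoutClose() {
    // the step the logs above show is missing today
    return openSessions.decrementAndGet();
  }
}
{code}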

[GitHub] hive pull request #86: HIVE-14242. Backport of ORC-53.

2016-07-20 Thread omalley
Github user omalley closed the pull request at:

https://github.com/apache/hive/pull/86




Re: Review Request 49965: HIVE-13995 Hive generates inefficient metastore queries for TPCDS tables with 1800+ partitions leading to higher compile time

2016-07-20 Thread Ashutosh Chauhan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49965/#review142949
---




metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java (line 1313)


this breaks extrapolation.


- Ashutosh Chauhan


On July 20, 2016, 6:15 a.m., Hari Sankar Sivarama Subramaniyan wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/49965/
> ---
> 
> (Updated July 20, 2016, 6:15 a.m.)
> 
> 
> Review request for hive and Ashutosh Chauhan.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> Hive generates inefficient metastore queries for TPCDS tables with 1800+ 
> partitions leading to higher compile time
> 
> 
> Diffs
> -
> 
>   metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java 
> 38c0eed 
>   
> metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java 
> 909d8eb 
>   metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java 
> 9c900af 
>   ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionPruner.java 
> 26e936e 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/PrunedPartitionList.java 
> da2e1e2 
>   ql/src/java/org/apache/hadoop/hive/ql/stats/StatsUtils.java d8acf94 
> 
> Diff: https://reviews.apache.org/r/49965/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Hari Sankar Sivarama Subramaniyan
> 
>



[GitHub] hive pull request #91: HIVE-14249: Add simple materialized views with manual...

2016-07-20 Thread jcamachor
GitHub user jcamachor opened a pull request:

https://github.com/apache/hive/pull/91

HIVE-14249: Add simple materialized views with manual rebuilds



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jcamachor/hive HIVE-MVs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hive/pull/91.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #91


commit fc5e6e3b0e826ff9a0b3437ae8e05eb9484a3856
Author: Alan Gates 
Date:   2016-07-20T11:37:31Z

HIVE-14249: Add simple materialized views with manual rebuilds (Alan Gates, 
reviewed by Jesus Camacho Rodriguez)

commit 86648e2f3440f7f01c18ff4819a07c7b02050f08
Author: Jesus Camacho Rodriguez 
Date:   2016-07-20T11:38:09Z

HIVE-14249: Add simple materialized views with manual rebuilds






[jira] [Created] (HIVE-14295) Some metastore event listeners always initialize deleteData as false

2016-07-20 Thread niklaus xiao (JIRA)
niklaus xiao created HIVE-14295:
---

 Summary: Some metastore event listeners always initialize 
deleteData as false
 Key: HIVE-14295
 URL: https://issues.apache.org/jira/browse/HIVE-14295
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 2.1.0, 1.3.0
Reporter: niklaus xiao
Assignee: niklaus xiao
Priority: Minor


DropTableEvent:
{code}
  public DropTableEvent(Table table, boolean status, boolean deleteData, HMSHandler handler) {
    super(status, handler);
    this.table = table;
    // In HiveMetaStore, the deleteData flag indicates whether DFS data should be
    // removed on a drop.
    this.deleteData = false;
  }
{code}

The same applies to PreDropPartitionEvent and PreDropTableEvent.
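
A minimal sketch of the likely fix, honoring the constructor parameter instead of hard-coding false (assuming no other semantics are intended):
{code}
  public DropTableEvent(Table table, boolean status, boolean deleteData, HMSHandler handler) {
    super(status, handler);
    this.table = table;
    // In HiveMetaStore, the deleteData flag indicates whether DFS data should be
    // removed on a drop.
    this.deleteData = deleteData;  // was hard-coded to false
  }
{code}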





Re: Review Request 49881: HIVE-14204: Optimize loading loaddynamic partitions

2016-07-20 Thread Rajesh Balamohan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/49881/
---

(Updated July 20, 2016, 11:42 a.m.)


Review request for hive and Ashutosh Chauhan.


Bugs: HIVE-14204
https://issues.apache.org/jira/browse/HIVE-14204


Repository: hive-git


Description
---

A lot of time is spent loading dynamically partitioned datasets sequentially 
on the driver side.

E.g. a simple dynamic partition load like the following takes 300+ seconds:

INSERT INTO web_sales_test partition(ws_sold_date_sk) select * from tpcds_bin_partitioned_orc_200.web_sales;

Time taken to load dynamic partitions: 309.22 seconds
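
The general shape of the optimization is to fan the per-partition work out to a thread pool; a hedged sketch only (loadSinglePartition and the pool sizing are illustrative assumptions, not the actual change in Hive.java):
{code}
// Hedged sketch: load dynamic partitions concurrently instead of one at a time.
ExecutorService pool = Executors.newFixedThreadPool(poolSize);
List<Future<Partition>> futures = new ArrayList<>();
for (Path partPath : validPartitionPaths) {
  futures.add(pool.submit(() -> loadSinglePartition(partPath)));  // hypothetical helper
}
for (Future<Partition> future : futures) {
  future.get();  // blocks; surfaces the first failed partition load
}
pool.shutdown();
{code}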


Diffs (updated)
-

  ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java ef0bb3d 

Diff: https://reviews.apache.org/r/49881/diff/


Testing
---


Thanks,

Rajesh Balamohan



[GitHub] hive pull request #90: fix the import database_name.table_name from path

2016-07-20 Thread utf7
GitHub user utf7 opened a pull request:

https://github.com/apache/hive/pull/90

fix the import database_name.table_name from path

detail:

use test;
create table a(id int,name string);
export table a to '/tmp/a';
drop table a;
import table test.a from '/tmp/a';

hive> import table test.a from '/tmp/a';
Failed with exception Invalid table name test.test.a
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask

When using "import table database_name.table_name from ...", 
tblDesc.getTableName() returns database_name.table_name, not table_name, so 
Table table = new Table(dbname, database_name.table_name) sets the table's 
name to dbname.database_name.table_name. The correct table name should be 
test.a, not test.test.a.

We can fix this:

String[] dbTableName = Utilities.getDbTableName(dbname, tblDesc.getTableName());
Table table = new Table(dbTableName[0], dbTableName[1]);

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/utf7/hive patch-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hive/pull/90.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #90


commit 3862872280c785e81a2b5d7bc0fee99d23b3de98
Author: utf7 
Date:   2016-07-20T08:55:52Z

fix the import database_name.table_name from 

detail:

use test;
create table a(id int,name string);
export table a to '/tmp/a';
drop table a;
import table test.a from '/tmp/a';

hive> import table test.a from '/tmp/a';
Failed with exception Invalid table name test.test.a
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask

When using "import table database_name.table_name from ...", 
tblDesc.getTableName() returns database_name.table_name, not table_name, so 
Table table = new Table(dbname, database_name.table_name) sets the table's 
name to dbname.database_name.table_name. The correct table name should be 
test.a, not test.test.a.

We can fix this:

String[] dbTableName = Utilities.getDbTableName(dbname, tblDesc.getTableName());
Table table = new Table(dbTableName[0], dbTableName[1]);






[GitHub] hive pull request #89: Update TaskFactory.java

2016-07-20 Thread utf7
GitHub user utf7 opened a pull request:

https://github.com/apache/hive/pull/89

Update TaskFactory.java

The getAndIncrementId() method uses new Integer(); Integer.valueOf() is a 
better choice.
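
For illustration, the difference is standard JDK behavior (this snippet is not the TaskFactory code itself):
{code}
Integer a = new Integer(42);      // always allocates a fresh object
Integer b = Integer.valueOf(42);  // reuses the cached instance for -128..127
Integer c = Integer.valueOf(42);
System.out.println(b == c);       // true: same cached object
System.out.println(a == b);       // false: new Integer() bypasses the cache
{code}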

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/utf7/hive patch-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hive/pull/89.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #89


commit fdfa972f1c94e1d98a6a06e60a8de1f972fd3c21
Author: utf7 
Date:   2016-07-20T07:54:14Z

Update TaskFactory.java

The getAndIncrementId() method uses new Integer(); Integer.valueOf() is a 
better choice.



